Test Report: Hyper-V_Windows 18779

c20b56ce109690ce92fd9e26e987f9b16f237ff0:2024-05-01:34278

Failed tests (18/201)

TestAddons/parallel/Registry (72.85s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 20.8654ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-56vl8" [8f7e03d5-5db3-4ed8-95e9-8472acc1061c] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.0125064s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-jnrbf" [eb5876d4-d74b-4b9a-a081-bf0997fc06b4] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0135346s
addons_test.go:340: (dbg) Run:  kubectl --context addons-286100 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-286100 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-286100 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.7608107s)
addons_test.go:359: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-286100 ip
addons_test.go:359: (dbg) Done: out/minikube-windows-amd64.exe -p addons-286100 ip: (2.8238629s)
addons_test.go:364: expected stderr to be -empty- but got: *"W0501 02:16:50.090997    7272 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n"* .  args "out/minikube-windows-amd64.exe -p addons-286100 ip"
2024/05/01 02:16:52 [DEBUG] GET http://172.28.215.237:5000
addons_test.go:388: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-286100 addons disable registry --alsologtostderr -v=1
addons_test.go:388: (dbg) Done: out/minikube-windows-amd64.exe -p addons-286100 addons disable registry --alsologtostderr -v=1: (16.3389689s)
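
The failed assertion is addons_test.go:364: stderr from "minikube ip" was expected to be empty, but it carried a warning that the Docker CLI context "default" could not be resolved because its meta.json is missing. A minimal inspection/cleanup sketch for the Jenkins host (hypothetical, not part of the test run; the "docker context" subcommands are standard Docker CLI, and the path mirrors the one in the warning):

	# Hypothetical cleanup, assuming the docker CLI is on PATH. The directory
	# name in the warning is the sha256 digest of the context name "default",
	# where the context metadata (meta.json) would normally live:
	Get-ChildItem "$env:USERPROFILE\.docker\contexts\meta" -ErrorAction SilentlyContinue
	# List the contexts the CLI can actually resolve:
	docker context ls
	# Re-selecting the built-in default context should clear the stale
	# reference that makes minikube warn on stderr:
	docker context use default
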
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-286100 -n addons-286100
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-286100 -n addons-286100: (13.2123316s)
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-286100 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p addons-286100 logs -n 25: (9.9237499s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-146700 | minikube6\jenkins | v1.33.0 | 01 May 24 02:08 UTC |                     |
	|         | -p download-only-146700                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube6\jenkins | v1.33.0 | 01 May 24 02:09 UTC | 01 May 24 02:09 UTC |
	| delete  | -p download-only-146700                                                                     | download-only-146700 | minikube6\jenkins | v1.33.0 | 01 May 24 02:09 UTC | 01 May 24 02:09 UTC |
	| start   | -o=json --download-only                                                                     | download-only-379800 | minikube6\jenkins | v1.33.0 | 01 May 24 02:09 UTC |                     |
	|         | -p download-only-379800                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                                                                |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube6\jenkins | v1.33.0 | 01 May 24 02:09 UTC | 01 May 24 02:09 UTC |
	| delete  | -p download-only-379800                                                                     | download-only-379800 | minikube6\jenkins | v1.33.0 | 01 May 24 02:09 UTC | 01 May 24 02:09 UTC |
	| delete  | -p download-only-146700                                                                     | download-only-146700 | minikube6\jenkins | v1.33.0 | 01 May 24 02:09 UTC | 01 May 24 02:09 UTC |
	| delete  | -p download-only-379800                                                                     | download-only-379800 | minikube6\jenkins | v1.33.0 | 01 May 24 02:09 UTC | 01 May 24 02:09 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-376000 | minikube6\jenkins | v1.33.0 | 01 May 24 02:09 UTC |                     |
	|         | binary-mirror-376000                                                                        |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |                   |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |                   |         |                     |                     |
	|         | http://127.0.0.1:59966                                                                      |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | -p binary-mirror-376000                                                                     | binary-mirror-376000 | minikube6\jenkins | v1.33.0 | 01 May 24 02:09 UTC | 01 May 24 02:09 UTC |
	| addons  | enable dashboard -p                                                                         | addons-286100        | minikube6\jenkins | v1.33.0 | 01 May 24 02:09 UTC |                     |
	|         | addons-286100                                                                               |                      |                   |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-286100        | minikube6\jenkins | v1.33.0 | 01 May 24 02:09 UTC |                     |
	|         | addons-286100                                                                               |                      |                   |         |                     |                     |
	| start   | -p addons-286100 --wait=true                                                                | addons-286100        | minikube6\jenkins | v1.33.0 | 01 May 24 02:09 UTC | 01 May 24 02:16 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |                   |         |                     |                     |
	|         | --addons=registry                                                                           |                      |                   |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |                   |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |                   |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |                   |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |                   |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |                   |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |                   |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |                   |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |                   |         |                     |                     |
	|         | --addons=yakd --driver=hyperv                                                               |                      |                   |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |                   |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |                   |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |                   |         |                     |                     |
	| addons  | addons-286100 addons                                                                        | addons-286100        | minikube6\jenkins | v1.33.0 | 01 May 24 02:16 UTC | 01 May 24 02:16 UTC |
	|         | disable metrics-server                                                                      |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| ssh     | addons-286100 ssh cat                                                                       | addons-286100        | minikube6\jenkins | v1.33.0 | 01 May 24 02:16 UTC | 01 May 24 02:16 UTC |
	|         | /opt/local-path-provisioner/pvc-0464956f-1861-4caa-83a8-1de4d13a8aba_default_test-pvc/file1 |                      |                   |         |                     |                     |
	| ip      | addons-286100 ip                                                                            | addons-286100        | minikube6\jenkins | v1.33.0 | 01 May 24 02:16 UTC | 01 May 24 02:16 UTC |
	| addons  | addons-286100 addons disable                                                                | addons-286100        | minikube6\jenkins | v1.33.0 | 01 May 24 02:16 UTC | 01 May 24 02:17 UTC |
	|         | registry --alsologtostderr                                                                  |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	| addons  | addons-286100 addons disable                                                                | addons-286100        | minikube6\jenkins | v1.33.0 | 01 May 24 02:17 UTC | 01 May 24 02:17 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-286100        | minikube6\jenkins | v1.33.0 | 01 May 24 02:17 UTC |                     |
	|         | addons-286100                                                                               |                      |                   |         |                     |                     |
	| addons  | addons-286100 addons disable                                                                | addons-286100        | minikube6\jenkins | v1.33.0 | 01 May 24 02:17 UTC |                     |
	|         | helm-tiller --alsologtostderr                                                               |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/01 02:09:41
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0501 02:09:41.160244    9736 out.go:291] Setting OutFile to fd 944 ...
	I0501 02:09:41.160904    9736 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:09:41.160904    9736 out.go:304] Setting ErrFile to fd 948...
	I0501 02:09:41.160904    9736 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:09:41.181407    9736 out.go:298] Setting JSON to false
	I0501 02:09:41.184404    9736 start.go:129] hostinfo: {"hostname":"minikube6","uptime":102435,"bootTime":1714426945,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0501 02:09:41.185408    9736 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0501 02:09:41.193018    9736 out.go:177] * [addons-286100] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0501 02:09:41.197871    9736 notify.go:220] Checking for updates...
	I0501 02:09:41.200344    9736 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0501 02:09:41.202528    9736 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0501 02:09:41.205514    9736 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0501 02:09:41.207630    9736 out.go:177]   - MINIKUBE_LOCATION=18779
	I0501 02:09:41.210279    9736 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0501 02:09:41.213417    9736 driver.go:392] Setting default libvirt URI to qemu:///system
	I0501 02:09:46.845177    9736 out.go:177] * Using the hyperv driver based on user configuration
	I0501 02:09:46.849128    9736 start.go:297] selected driver: hyperv
	I0501 02:09:46.849296    9736 start.go:901] validating driver "hyperv" against <nil>
	I0501 02:09:46.849296    9736 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0501 02:09:46.902477    9736 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0501 02:09:46.904782    9736 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 02:09:46.904990    9736 cni.go:84] Creating CNI manager for ""
	I0501 02:09:46.904990    9736 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0501 02:09:46.904990    9736 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0501 02:09:46.904990    9736 start.go:340] cluster config:
	{Name:addons-286100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-286100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:09:46.904990    9736 iso.go:125] acquiring lock: {Name:mkc5178610d1c169635b8b232f2713c359020679 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 02:09:46.909791    9736 out.go:177] * Starting "addons-286100" primary control-plane node in "addons-286100" cluster
	I0501 02:09:46.912194    9736 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0501 02:09:46.912852    9736 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0501 02:09:46.912852    9736 cache.go:56] Caching tarball of preloaded images
	I0501 02:09:46.912852    9736 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0501 02:09:46.912852    9736 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0501 02:09:46.913573    9736 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\config.json ...
	I0501 02:09:46.914093    9736 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\config.json: {Name:mk3e0afa153ddb03964e1da647ee38a6ab3daea0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:09:46.915750    9736 start.go:360] acquireMachinesLock for addons-286100: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 02:09:46.916010    9736 start.go:364] duration metric: took 260.1µs to acquireMachinesLock for "addons-286100"
	I0501 02:09:46.916010    9736 start.go:93] Provisioning new machine with config: &{Name:addons-286100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-286100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0501 02:09:46.916010    9736 start.go:125] createHost starting for "" (driver="hyperv")
	I0501 02:09:46.920207    9736 out.go:204] * Creating hyperv VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0501 02:09:46.920656    9736 start.go:159] libmachine.API.Create for "addons-286100" (driver="hyperv")
	I0501 02:09:46.920656    9736 client.go:168] LocalClient.Create starting
	I0501 02:09:46.921099    9736 main.go:141] libmachine: Creating CA: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0501 02:09:47.277022    9736 main.go:141] libmachine: Creating client certificate: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0501 02:09:47.426679    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0501 02:09:49.935488    9736 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0501 02:09:49.936345    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:09:49.936345    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0501 02:09:51.767457    9736 main.go:141] libmachine: [stdout =====>] : False
	
	I0501 02:09:51.767457    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:09:51.767457    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0501 02:09:53.332157    9736 main.go:141] libmachine: [stdout =====>] : True
	
	I0501 02:09:53.332231    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:09:53.332299    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0501 02:09:57.264580    9736 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0501 02:09:57.264580    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:09:57.267645    9736 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso...
	I0501 02:09:57.809938    9736 main.go:141] libmachine: Creating SSH key...
	I0501 02:09:58.112310    9736 main.go:141] libmachine: Creating VM...
	I0501 02:09:58.112458    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0501 02:10:01.068571    9736 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0501 02:10:01.069319    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:10:01.069319    9736 main.go:141] libmachine: Using switch "Default Switch"
	I0501 02:10:01.069462    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0501 02:10:03.003769    9736 main.go:141] libmachine: [stdout =====>] : True
	
	I0501 02:10:03.004632    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:10:03.004632    9736 main.go:141] libmachine: Creating VHD
	I0501 02:10:03.004781    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-286100\fixed.vhd' -SizeBytes 10MB -Fixed
	I0501 02:10:06.834982    9736 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-286100\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 37AF31C7-7F0C-4407-98D1-BDEE4F268719
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0501 02:10:06.834982    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:10:06.834982    9736 main.go:141] libmachine: Writing magic tar header
	I0501 02:10:06.834982    9736 main.go:141] libmachine: Writing SSH key tar header
	I0501 02:10:06.848220    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-286100\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-286100\disk.vhd' -VHDType Dynamic -DeleteSource
	I0501 02:10:10.064324    9736 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:10:10.064388    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:10:10.064388    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-286100\disk.vhd' -SizeBytes 20000MB
	I0501 02:10:12.574575    9736 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:10:12.574796    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:10:12.574796    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM addons-286100 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-286100' -SwitchName 'Default Switch' -MemoryStartupBytes 4000MB
	I0501 02:10:16.343327    9736 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	addons-286100 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0501 02:10:16.343327    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:10:16.343477    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName addons-286100 -DynamicMemoryEnabled $false
	I0501 02:10:18.626461    9736 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:10:18.626461    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:10:18.627504    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor addons-286100 -Count 2
	I0501 02:10:20.837314    9736 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:10:20.837428    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:10:20.837536    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName addons-286100 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-286100\boot2docker.iso'
	I0501 02:10:23.429163    9736 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:10:23.429248    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:10:23.429374    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName addons-286100 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-286100\disk.vhd'
	I0501 02:10:26.141003    9736 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:10:26.141631    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:10:26.141631    9736 main.go:141] libmachine: Starting VM...
	I0501 02:10:26.141631    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM addons-286100
	I0501 02:10:29.468079    9736 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:10:29.468268    9736 main.go:141] libmachine: [stderr =====>] : 
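
	Condensed, the VM-creation sequence logged above reduces to the following Hyper-V cmdlets (a sketch reconstructed from this log; paths, sizes, and the switch name are the values minikube used in this run):

	$m = 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-286100'
	Hyper-V\New-VHD -Path "$m\fixed.vhd" -SizeBytes 10MB -Fixed   # seed disk; the magic tar header and SSH key are written into it
	Hyper-V\Convert-VHD -Path "$m\fixed.vhd" -DestinationPath "$m\disk.vhd" -VHDType Dynamic -DeleteSource
	Hyper-V\Resize-VHD -Path "$m\disk.vhd" -SizeBytes 20000MB
	Hyper-V\New-VM addons-286100 -Path $m -SwitchName 'Default Switch' -MemoryStartupBytes 4000MB
	Hyper-V\Set-VMMemory -VMName addons-286100 -DynamicMemoryEnabled $false
	Hyper-V\Set-VMProcessor addons-286100 -Count 2
	Hyper-V\Set-VMDvdDrive -VMName addons-286100 -Path "$m\boot2docker.iso"
	Hyper-V\Add-VMHardDiskDrive -VMName addons-286100 -Path "$m\disk.vhd"
	Hyper-V\Start-VM addons-286100
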
	I0501 02:10:29.468268    9736 main.go:141] libmachine: Waiting for host to start...
	I0501 02:10:29.468320    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-286100 ).state
	I0501 02:10:31.733077    9736 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:10:31.733561    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:10:31.733619    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-286100 ).networkadapters[0]).ipaddresses[0]
	I0501 02:10:34.274535    9736 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:10:34.274599    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:10:35.287435    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-286100 ).state
	I0501 02:10:37.508263    9736 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:10:37.508263    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:10:37.509290    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-286100 ).networkadapters[0]).ipaddresses[0]
	I0501 02:10:40.106286    9736 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:10:40.106286    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:10:41.109975    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-286100 ).state
	I0501 02:10:43.301761    9736 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:10:43.301761    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:10:43.302464    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-286100 ).networkadapters[0]).ipaddresses[0]
	I0501 02:10:45.846461    9736 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:10:45.846461    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:10:46.853635    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-286100 ).state
	I0501 02:10:49.077218    9736 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:10:49.077218    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:10:49.077218    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-286100 ).networkadapters[0]).ipaddresses[0]
	I0501 02:10:51.628148    9736 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:10:51.628148    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:10:52.637078    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-286100 ).state
	I0501 02:10:54.811899    9736 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:10:54.812100    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:10:54.812378    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-286100 ).networkadapters[0]).ipaddresses[0]
	I0501 02:10:57.518429    9736 main.go:141] libmachine: [stdout =====>] : 172.28.215.237
	
	I0501 02:10:57.518429    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:10:57.518960    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-286100 ).state
	I0501 02:10:59.692470    9736 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:10:59.692680    9736 main.go:141] libmachine: [stderr =====>] : 
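
	The repeated state/IP queries above are minikube waiting for the guest to come up: it polls until Get-VM reports Running and the first network adapter exposes an IPv4 address. As an illustrative loop (not minikube's actual Go implementation):

	# Poll VM state and the first NIC address until both are available.
	do {
	    $state = ( Hyper-V\Get-VM addons-286100 ).State
	    $ip    = (( Hyper-V\Get-VM addons-286100 ).NetworkAdapters[0]).IPAddresses[0]
	    if (-not $ip) { Start-Sleep -Seconds 1 }
	} until ("$state" -eq 'Running' -and $ip)
	$ip    # 172.28.215.237 in this run
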
	I0501 02:10:59.692754    9736 machine.go:94] provisionDockerMachine start ...
	I0501 02:10:59.692754    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-286100 ).state
	I0501 02:11:01.891364    9736 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:11:01.892021    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:11:01.892159    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-286100 ).networkadapters[0]).ipaddresses[0]
	I0501 02:11:04.541109    9736 main.go:141] libmachine: [stdout =====>] : 172.28.215.237
	
	I0501 02:11:04.541342    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:11:04.547591    9736 main.go:141] libmachine: Using SSH client type: native
	I0501 02:11:04.557597    9736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.215.237 22 <nil> <nil>}
	I0501 02:11:04.558630    9736 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 02:11:04.707770    9736 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0501 02:11:04.707998    9736 buildroot.go:166] provisioning hostname "addons-286100"
	I0501 02:11:04.708112    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-286100 ).state
	I0501 02:11:06.846175    9736 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:11:06.846175    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:11:06.846175    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-286100 ).networkadapters[0]).ipaddresses[0]
	I0501 02:11:09.488003    9736 main.go:141] libmachine: [stdout =====>] : 172.28.215.237
	
	I0501 02:11:09.488918    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:11:09.498057    9736 main.go:141] libmachine: Using SSH client type: native
	I0501 02:11:09.498998    9736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.215.237 22 <nil> <nil>}
	I0501 02:11:09.498998    9736 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-286100 && echo "addons-286100" | sudo tee /etc/hostname
	I0501 02:11:09.665155    9736 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-286100
	
	I0501 02:11:09.665324    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-286100 ).state
	I0501 02:11:11.807653    9736 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:11:11.808796    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:11:11.808823    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-286100 ).networkadapters[0]).ipaddresses[0]
	I0501 02:11:14.449071    9736 main.go:141] libmachine: [stdout =====>] : 172.28.215.237
	
	I0501 02:11:14.449071    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:11:14.455988    9736 main.go:141] libmachine: Using SSH client type: native
	I0501 02:11:14.456678    9736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.215.237 22 <nil> <nil>}
	I0501 02:11:14.456678    9736 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-286100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-286100/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-286100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 02:11:14.624212    9736 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 02:11:14.624380    9736 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0501 02:11:14.624516    9736 buildroot.go:174] setting up certificates
	I0501 02:11:14.624577    9736 provision.go:84] configureAuth start
	I0501 02:11:14.624655    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-286100 ).state
	I0501 02:11:16.821660    9736 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:11:16.822582    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:11:16.822582    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-286100 ).networkadapters[0]).ipaddresses[0]
	I0501 02:11:19.407508    9736 main.go:141] libmachine: [stdout =====>] : 172.28.215.237
	
	I0501 02:11:19.407508    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:11:19.408337    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-286100 ).state
	I0501 02:11:21.541070    9736 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:11:21.541070    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:11:21.541170    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-286100 ).networkadapters[0]).ipaddresses[0]
	I0501 02:11:24.139321    9736 main.go:141] libmachine: [stdout =====>] : 172.28.215.237
	
	I0501 02:11:24.140320    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:11:24.140320    9736 provision.go:143] copyHostCerts
	I0501 02:11:24.141108    9736 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0501 02:11:24.142329    9736 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0501 02:11:24.144189    9736 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0501 02:11:24.145446    9736 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.addons-286100 san=[127.0.0.1 172.28.215.237 addons-286100 localhost minikube]
	I0501 02:11:24.294614    9736 provision.go:177] copyRemoteCerts
	I0501 02:11:24.307710    9736 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 02:11:24.307710    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-286100 ).state
	I0501 02:11:26.437226    9736 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:11:26.437426    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:11:26.437511    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-286100 ).networkadapters[0]).ipaddresses[0]
	I0501 02:11:29.023252    9736 main.go:141] libmachine: [stdout =====>] : 172.28.215.237
	
	I0501 02:11:29.023984    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:11:29.024712    9736 sshutil.go:53] new ssh client: &{IP:172.28.215.237 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-286100\id_rsa Username:docker}
	I0501 02:11:29.133366    9736 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8256209s)
	I0501 02:11:29.134083    9736 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 02:11:29.187034    9736 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0501 02:11:29.235816    9736 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0501 02:11:29.290285    9736 provision.go:87] duration metric: took 14.665601s to configureAuth
	I0501 02:11:29.290285    9736 buildroot.go:189] setting minikube options for container-runtime
	I0501 02:11:29.290976    9736 config.go:182] Loaded profile config "addons-286100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:11:29.290976    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-286100 ).state
	I0501 02:11:31.429353    9736 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:11:31.429999    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:11:31.430059    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-286100 ).networkadapters[0]).ipaddresses[0]
	I0501 02:11:34.040611    9736 main.go:141] libmachine: [stdout =====>] : 172.28.215.237
	
	I0501 02:11:34.040611    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:11:34.050039    9736 main.go:141] libmachine: Using SSH client type: native
	I0501 02:11:34.050039    9736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.215.237 22 <nil> <nil>}
	I0501 02:11:34.050039    9736 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0501 02:11:34.196511    9736 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0501 02:11:34.196511    9736 buildroot.go:70] root file system type: tmpfs
	I0501 02:11:34.196511    9736 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0501 02:11:34.197055    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-286100 ).state
	I0501 02:11:36.356312    9736 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:11:36.356926    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:11:36.357046    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-286100 ).networkadapters[0]).ipaddresses[0]
	I0501 02:11:38.983658    9736 main.go:141] libmachine: [stdout =====>] : 172.28.215.237
	
	I0501 02:11:38.983658    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:11:38.991739    9736 main.go:141] libmachine: Using SSH client type: native
	I0501 02:11:38.992319    9736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.215.237 22 <nil> <nil>}
	I0501 02:11:38.992484    9736 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0501 02:11:39.157020    9736 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0501 02:11:39.157020    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-286100 ).state
	I0501 02:11:41.289664    9736 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:11:41.289799    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:11:41.289799    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-286100 ).networkadapters[0]).ipaddresses[0]
	I0501 02:11:43.904197    9736 main.go:141] libmachine: [stdout =====>] : 172.28.215.237
	
	I0501 02:11:43.904197    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:11:43.913587    9736 main.go:141] libmachine: Using SSH client type: native
	I0501 02:11:43.914700    9736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.215.237 22 <nil> <nil>}
	I0501 02:11:43.914700    9736 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0501 02:11:46.187228    9736 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
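The one-liner above is an idempotent install: `diff -u` exits zero when the freshly rendered docker.service matches what is already on disk, so the `|| { ... }` branch (swap the file in, daemon-reload, enable, restart) only runs when something changed. Here the diff fails because no unit exists yet on first boot, which is why the install branch runs and the symlink gets created. A minimal Go sketch of the same pattern, with a hypothetical runSSH helper standing in for minikube's SSH runner:

package sketch

import "fmt"

// installUnit swaps a freshly rendered systemd unit into place only when it
// differs from (or is missing from) the copy on disk, then reloads systemd
// and restarts the service. runSSH is a hypothetical single-command helper.
func installUnit(runSSH func(cmd string) error, unit, service string) error {
	cmd := fmt.Sprintf(
		"sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
			"sudo systemctl -f daemon-reload && sudo systemctl -f enable %[2]s && "+
			"sudo systemctl -f restart %[2]s; }",
		unit, service)
	return runSSH(cmd)
}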
	
	I0501 02:11:46.187228    9736 machine.go:97] duration metric: took 46.4941353s to provisionDockerMachine
	I0501 02:11:46.187228    9736 client.go:171] duration metric: took 1m59.2657013s to LocalClient.Create
	I0501 02:11:46.187228    9736 start.go:167] duration metric: took 1m59.2657013s to libmachine.API.Create "addons-286100"
	I0501 02:11:46.187228    9736 start.go:293] postStartSetup for "addons-286100" (driver="hyperv")
	I0501 02:11:46.187228    9736 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 02:11:46.201604    9736 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 02:11:46.202610    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-286100 ).state
	I0501 02:11:48.388234    9736 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:11:48.388710    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:11:48.388820    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-286100 ).networkadapters[0]).ipaddresses[0]
	I0501 02:11:51.012208    9736 main.go:141] libmachine: [stdout =====>] : 172.28.215.237
	
	I0501 02:11:51.013048    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:11:51.013779    9736 sshutil.go:53] new ssh client: &{IP:172.28.215.237 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-286100\id_rsa Username:docker}
	I0501 02:11:51.135346    9736 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9337052s)
	I0501 02:11:51.149648    9736 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 02:11:51.158180    9736 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 02:11:51.158281    9736 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0501 02:11:51.158422    9736 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0501 02:11:51.159062    9736 start.go:296] duration metric: took 4.9717972s for postStartSetup
	I0501 02:11:51.161661    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-286100 ).state
	I0501 02:11:53.358278    9736 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:11:53.358278    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:11:53.358542    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-286100 ).networkadapters[0]).ipaddresses[0]
	I0501 02:11:56.013929    9736 main.go:141] libmachine: [stdout =====>] : 172.28.215.237
	
	I0501 02:11:56.014119    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:11:56.014291    9736 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\config.json ...
	I0501 02:11:56.017842    9736 start.go:128] duration metric: took 2m9.10089s to createHost
	I0501 02:11:56.018011    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-286100 ).state
	I0501 02:11:58.211094    9736 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:11:58.211236    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:11:58.211236    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-286100 ).networkadapters[0]).ipaddresses[0]
	I0501 02:12:00.821745    9736 main.go:141] libmachine: [stdout =====>] : 172.28.215.237
	
	I0501 02:12:00.822030    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:12:00.833386    9736 main.go:141] libmachine: Using SSH client type: native
	I0501 02:12:00.833760    9736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.215.237 22 <nil> <nil>}
	I0501 02:12:00.833760    9736 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0501 02:12:00.971308    9736 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714529520.965869878
	
	I0501 02:12:00.971308    9736 fix.go:216] guest clock: 1714529520.965869878
	I0501 02:12:00.971448    9736 fix.go:229] Guest: 2024-05-01 02:12:00.965869878 +0000 UTC Remote: 2024-05-01 02:11:56.0179224 +0000 UTC m=+135.069715901 (delta=4.947947478s)
	I0501 02:12:00.971448    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-286100 ).state
	I0501 02:12:03.154957    9736 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:12:03.155850    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:12:03.156102    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-286100 ).networkadapters[0]).ipaddresses[0]
	I0501 02:12:05.781446    9736 main.go:141] libmachine: [stdout =====>] : 172.28.215.237
	
	I0501 02:12:05.781446    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:12:05.788695    9736 main.go:141] libmachine: Using SSH client type: native
	I0501 02:12:05.790029    9736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.215.237 22 <nil> <nil>}
	I0501 02:12:05.790029    9736 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714529520
	I0501 02:12:05.936730    9736 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed May  1 02:12:00 UTC 2024
	
	I0501 02:12:05.938024    9736 fix.go:236] clock set: Wed May  1 02:12:00 UTC 2024
	 (err=<nil>)
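What happened above: fix.go read the guest clock over SSH (`date +%s.%N`), compared it with the host's, and since the ~4.9 s delta was outside tolerance it reset the guest with `sudo date -s @<seconds>`. A rough sketch of that check; the 2 s threshold is an illustrative assumption, not minikube's actual constant:

package sketch

import (
	"fmt"
	"time"
)

// syncGuestClock resets the guest clock when it has drifted too far from the
// host. readGuestTime is a hypothetical helper that parses the output of
// `date +%s.%N` run on the guest.
func syncGuestClock(readGuestTime func() (time.Time, error),
	runSSH func(cmd string) error) error {

	guest, err := readGuestTime()
	if err != nil {
		return err
	}
	host := time.Now()
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	if delta < 2*time.Second { // illustrative tolerance
		return nil // close enough, leave the guest clock alone
	}
	// Pin the guest to the host's wall clock, at one-second resolution.
	return runSSH(fmt.Sprintf("sudo date -s @%d", host.Unix()))
}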
	I0501 02:12:05.938051    9736 start.go:83] releasing machines lock for "addons-286100", held for 2m19.0210262s
	I0501 02:12:05.938051    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-286100 ).state
	I0501 02:12:08.094825    9736 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:12:08.095488    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:12:08.095586    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-286100 ).networkadapters[0]).ipaddresses[0]
	I0501 02:12:10.698630    9736 main.go:141] libmachine: [stdout =====>] : 172.28.215.237
	
	I0501 02:12:10.698799    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:12:10.703066    9736 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 02:12:10.703066    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-286100 ).state
	I0501 02:12:10.718387    9736 ssh_runner.go:195] Run: cat /version.json
	I0501 02:12:10.718387    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-286100 ).state
	I0501 02:12:12.900378    9736 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:12:12.900378    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:12:12.900378    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-286100 ).networkadapters[0]).ipaddresses[0]
	I0501 02:12:12.900378    9736 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:12:12.900378    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:12:12.900378    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-286100 ).networkadapters[0]).ipaddresses[0]
	I0501 02:12:15.597559    9736 main.go:141] libmachine: [stdout =====>] : 172.28.215.237
	
	I0501 02:12:15.597559    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:12:15.598283    9736 sshutil.go:53] new ssh client: &{IP:172.28.215.237 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-286100\id_rsa Username:docker}
	I0501 02:12:15.625446    9736 main.go:141] libmachine: [stdout =====>] : 172.28.215.237
	
	I0501 02:12:15.625446    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:12:15.625975    9736 sshutil.go:53] new ssh client: &{IP:172.28.215.237 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-286100\id_rsa Username:docker}
	I0501 02:12:15.702892    9736 ssh_runner.go:235] Completed: cat /version.json: (4.9844692s)
	I0501 02:12:15.717253    9736 ssh_runner.go:195] Run: systemctl --version
	I0501 02:12:15.788784    9736 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0856806s)
	I0501 02:12:15.802450    9736 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 02:12:15.814367    9736 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 02:12:15.834146    9736 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 02:12:15.870348    9736 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
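Conflicting bridge/podman CNI configs are renamed with a `.mk_disabled` suffix rather than deleted, so they can be restored later and so the config minikube installs itself (`/etc/cni/net.d/1-k8s.conflist`, further down) is the only one the runtime loads. The same move in Go, as a sketch:

package sketch

import (
	"os"
	"path/filepath"
)

// disableConflicting renames bridge/podman CNI configs out of the way by
// appending ".mk_disabled", mirroring the find/mv one-liner in the log.
func disableConflicting(dir string) ([]string, error) {
	var disabled []string
	for _, pat := range []string{"*bridge*", "*podman*"} {
		matches, err := filepath.Glob(filepath.Join(dir, pat))
		if err != nil {
			return nil, err
		}
		for _, m := range matches {
			if filepath.Ext(m) == ".mk_disabled" {
				continue // already disabled on a previous run
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				return nil, err
			}
			disabled = append(disabled, m)
		}
	}
	return disabled, nil
}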
	I0501 02:12:15.870424    9736 start.go:494] detecting cgroup driver to use...
	I0501 02:12:15.871139    9736 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 02:12:15.924861    9736 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0501 02:12:15.962033    9736 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0501 02:12:15.987632    9736 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0501 02:12:16.004121    9736 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0501 02:12:16.044454    9736 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 02:12:16.080286    9736 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0501 02:12:16.117302    9736 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 02:12:16.159270    9736 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 02:12:16.196371    9736 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0501 02:12:16.236912    9736 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0501 02:12:16.275261    9736 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0501 02:12:16.314709    9736 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 02:12:16.348810    9736 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 02:12:16.383810    9736 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:12:16.609713    9736 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0501 02:12:16.643616    9736 start.go:494] detecting cgroup driver to use...
	I0501 02:12:16.660739    9736 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0501 02:12:16.701413    9736 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 02:12:16.742963    9736 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 02:12:16.792877    9736 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 02:12:16.835108    9736 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 02:12:16.876150    9736 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0501 02:12:16.949655    9736 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 02:12:16.979034    9736 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 02:12:17.035635    9736 ssh_runner.go:195] Run: which cri-dockerd
	I0501 02:12:17.057173    9736 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0501 02:12:17.078349    9736 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0501 02:12:17.132007    9736 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0501 02:12:17.362637    9736 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0501 02:12:17.569750    9736 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0501 02:12:17.569750    9736 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
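Both runtimes end up pinned to the same cgroup driver: containerd via the `SystemdCgroup = false` edit earlier, docker via the small daemon.json pushed here. The log records only that 130 bytes were written; a plausible rendering step in Go, with the payload contents an assumption for illustration:

package sketch

import "encoding/json"

// dockerDaemonConfig renders a daemon.json pinning docker to the cgroupfs
// driver. The field set is an assumption; the log only shows that a
// 130-byte daemon.json was copied to the guest.
func dockerDaemonConfig() ([]byte, error) {
	return json.MarshalIndent(map[string]any{
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
	}, "", "  ")
}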
	I0501 02:12:17.620108    9736 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:12:17.850093    9736 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0501 02:12:20.422390    9736 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5712675s)
	I0501 02:12:20.438389    9736 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0501 02:12:20.482624    9736 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0501 02:12:20.524166    9736 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0501 02:12:20.751641    9736 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0501 02:12:20.965927    9736 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:12:21.181567    9736 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0501 02:12:21.231760    9736 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0501 02:12:21.272046    9736 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:12:21.494955    9736 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0501 02:12:21.618022    9736 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0501 02:12:21.634066    9736 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0501 02:12:21.643831    9736 start.go:562] Will wait 60s for crictl version
	I0501 02:12:21.659075    9736 ssh_runner.go:195] Run: which crictl
	I0501 02:12:21.680458    9736 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 02:12:21.740989    9736 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0501 02:12:21.752444    9736 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0501 02:12:21.802751    9736 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0501 02:12:21.843345    9736 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0501 02:12:21.843420    9736 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0501 02:12:21.848824    9736 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0501 02:12:21.848824    9736 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0501 02:12:21.848824    9736 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0501 02:12:21.848824    9736 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:88:d7:f1 Flags:up|broadcast|multicast|running}
	I0501 02:12:21.851707    9736 ip.go:210] interface addr: fe80::916c:67e8:6e10:a19b/64
	I0501 02:12:21.852619    9736 ip.go:210] interface addr: 172.28.208.1/20
	I0501 02:12:21.865493    9736 ssh_runner.go:195] Run: grep 172.28.208.1	host.minikube.internal$ /etc/hosts
	I0501 02:12:21.873542    9736 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
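The hosts update above is a filter-and-append rewrite: strip any stale `host.minikube.internal` line, append the fresh `172.28.208.1	host.minikube.internal` mapping, stage the result in a temp file, and `sudo cp` it over /etc/hosts so the change lands in one step. The same transformation in Go (writing the result back is left to the caller):

package sketch

import "strings"

// upsertHostsEntry returns the contents of an /etc/hosts file with any
// existing line for name removed and a fresh "ip\tname" mapping appended,
// mirroring the grep -v / echo pipeline in the log above.
func upsertHostsEntry(hosts, ip, name string) string {
	var out []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop any stale mapping for name
		}
		out = append(out, line)
	}
	out = append(out, ip+"\t"+name)
	return strings.Join(out, "\n") + "\n"
}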
	I0501 02:12:21.897683    9736 kubeadm.go:877] updating cluster {Name:addons-286100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
0.0 ClusterName:addons-286100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.215.237 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0501 02:12:21.898113    9736 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0501 02:12:21.909464    9736 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0501 02:12:21.936546    9736 docker.go:685] Got preloaded images: 
	I0501 02:12:21.936696    9736 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0501 02:12:21.951051    9736 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0501 02:12:21.986864    9736 ssh_runner.go:195] Run: which lz4
	I0501 02:12:22.009779    9736 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0501 02:12:22.016652    9736 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0501 02:12:22.016848    9736 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0501 02:12:24.006835    9736 docker.go:649] duration metric: took 2.0126514s to copy over tarball
	I0501 02:12:24.022489    9736 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0501 02:12:29.284210    9736 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (5.2616834s)
	I0501 02:12:29.284210    9736 ssh_runner.go:146] rm: /preloaded.tar.lz4
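The preload sequence is a cache fast-path: the `stat` probe failed (status 1, no `/preloaded.tar.lz4` on the guest yet), so the ~360 MB lz4 tarball of v1.30.0 images was copied over and unpacked under /var, sparing a `docker pull` for every core image. A condensed sketch of that decision, with scpToGuest and runSSH as hypothetical helpers:

package sketch

// ensurePreload makes sure the preloaded image tarball is present and
// unpacked on the guest, copying it from the local cache only when the
// stat probe fails.
func ensurePreload(runSSH func(string) error,
	scpToGuest func(src, dst string) error, localTarball string) error {

	const remote = "/preloaded.tar.lz4"
	if err := runSSH(`stat -c "%s %y" ` + remote); err != nil {
		// Not on the guest yet: push the cached tarball over SSH.
		if err := scpToGuest(localTarball, remote); err != nil {
			return err
		}
	}
	// Unpack under /var (the tarball carries the image store paths),
	// preserving capability xattrs, then remove the tarball.
	if err := runSSH("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf " + remote); err != nil {
		return err
	}
	return runSSH("sudo rm -f " + remote)
}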
	I0501 02:12:29.358809    9736 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0501 02:12:29.383015    9736 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0501 02:12:29.441829    9736 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:12:29.662162    9736 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0501 02:12:35.346449    9736 ssh_runner.go:235] Completed: sudo systemctl restart docker: (5.6842456s)
	I0501 02:12:35.357892    9736 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0501 02:12:35.388042    9736 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0501 02:12:35.388042    9736 cache_images.go:84] Images are preloaded, skipping loading
	I0501 02:12:35.388165    9736 kubeadm.go:928] updating node { 172.28.215.237 8443 v1.30.0 docker true true} ...
	I0501 02:12:35.388379    9736 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-286100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.215.237
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:addons-286100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 02:12:35.399132    9736 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0501 02:12:35.437631    9736 cni.go:84] Creating CNI manager for ""
	I0501 02:12:35.437631    9736 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0501 02:12:35.437631    9736 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0501 02:12:35.437631    9736 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.28.215.237 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-286100 NodeName:addons-286100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.28.215.237"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.28.215.237 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0501 02:12:35.438044    9736 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.28.215.237
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-286100"
	  kubeletExtraArgs:
	    node-ip: 172.28.215.237
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.28.215.237"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0501 02:12:35.452309    9736 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 02:12:35.472590    9736 binaries.go:44] Found k8s binaries, skipping transfer
	I0501 02:12:35.487297    9736 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0501 02:12:35.510923    9736 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0501 02:12:35.546012    9736 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 02:12:35.579795    9736 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0501 02:12:35.630069    9736 ssh_runner.go:195] Run: grep 172.28.215.237	control-plane.minikube.internal$ /etc/hosts
	I0501 02:12:35.638028    9736 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.215.237	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 02:12:35.674478    9736 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:12:35.894700    9736 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 02:12:35.927000    9736 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100 for IP: 172.28.215.237
	I0501 02:12:35.927118    9736 certs.go:194] generating shared ca certs ...
	I0501 02:12:35.927179    9736 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:12:35.927603    9736 certs.go:240] generating "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0501 02:12:36.101713    9736 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt ...
	I0501 02:12:36.101713    9736 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt: {Name:mkb0ebdce3b528a3c449211fdfbba2d86c130c96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:12:36.102752    9736 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key ...
	I0501 02:12:36.102752    9736 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key: {Name:mk1ec59eaa4c2f7a35370569c3fc13a80bc1499d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:12:36.103717    9736 certs.go:240] generating "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0501 02:12:36.415654    9736 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt ...
	I0501 02:12:36.415654    9736 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt: {Name:mk78efc1a7bd38719c2f7a853f9109f9a1a3252e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:12:36.416605    9736 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key ...
	I0501 02:12:36.416605    9736 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key: {Name:mk57de77abeaf23b535083770f5522a07b562b59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:12:36.417610    9736 certs.go:256] generating profile certs ...
	I0501 02:12:36.418498    9736 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.key
	I0501 02:12:36.418498    9736 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt with IP's: []
	I0501 02:12:36.714925    9736 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt ...
	I0501 02:12:36.715888    9736 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt: {Name:mk619d19245fd9c990c30e0eb336dc618525e402 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:12:36.717161    9736 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.key ...
	I0501 02:12:36.717161    9736 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.key: {Name:mk6dd27105867716c4e1d258bbd7ecb74aeabc9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:12:36.718166    9736 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\apiserver.key.384105ed
	I0501 02:12:36.718166    9736 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\apiserver.crt.384105ed with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.215.237]
	I0501 02:12:36.836752    9736 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\apiserver.crt.384105ed ...
	I0501 02:12:36.836752    9736 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\apiserver.crt.384105ed: {Name:mk09505d071485479f022dfcd44b71804d6d0665 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:12:36.837750    9736 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\apiserver.key.384105ed ...
	I0501 02:12:36.837750    9736 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\apiserver.key.384105ed: {Name:mke16985cc649beab0a4406e073355a73a88d1a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:12:36.838495    9736 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\apiserver.crt.384105ed -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\apiserver.crt
	I0501 02:12:36.852148    9736 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\apiserver.key.384105ed -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\apiserver.key
	I0501 02:12:36.853145    9736 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\proxy-client.key
	I0501 02:12:36.853786    9736 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\proxy-client.crt with IP's: []
	I0501 02:12:36.978145    9736 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\proxy-client.crt ...
	I0501 02:12:36.978145    9736 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\proxy-client.crt: {Name:mkde6ab81702ee6f050bd239f32732c091ad9e2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:12:36.979142    9736 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\proxy-client.key ...
	I0501 02:12:36.979142    9736 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\proxy-client.key: {Name:mk6f7686e03419915a7c4262be7c9f2b49426b31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:12:36.993143    9736 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0501 02:12:36.994138    9736 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0501 02:12:36.994828    9736 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0501 02:12:36.995157    9736 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0501 02:12:36.996148    9736 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 02:12:37.061188    9736 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 02:12:37.114444    9736 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 02:12:37.168972    9736 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0501 02:12:37.222708    9736 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0501 02:12:37.275214    9736 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0501 02:12:37.334209    9736 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 02:12:37.388567    9736 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 02:12:37.431732    9736 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 02:12:37.480574    9736 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0501 02:12:37.533632    9736 ssh_runner.go:195] Run: openssl version
	I0501 02:12:37.559718    9736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 02:12:37.597184    9736 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:12:37.605306    9736 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:12 /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:12:37.620394    9736 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:12:37.644441    9736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
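The certificate phase mints a local CA (minikubeCA), signs per-profile certs with it (note the SANs on the apiserver cert above: 10.96.0.1 is the service VIP, plus loopbacks and the node IP 172.28.215.237), and finally links the CA into /etc/ssl/certs under its OpenSSL subject-hash name (`b5213941.0`) so TLS clients on the guest trust it. A compact sketch of the self-signed CA step with crypto/x509; the key size and lifetime here are illustrative, not minikube's actual values:

package sketch

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"time"
)

// newCA generates a self-signed CA certificate and key, roughly the
// `generating "minikubeCA" ca cert` step above (parameters illustrative).
func newCA() (certPEM, keyPEM []byte, err error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
		IsCA:                  true,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return nil, nil, err
	}
	certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key)})
	return certPEM, keyPEM, nil
}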
	I0501 02:12:37.683255    9736 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 02:12:37.691307    9736 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0501 02:12:37.691723    9736 kubeadm.go:391] StartCluster: {Name:addons-286100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0
ClusterName:addons-286100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.215.237 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:12:37.702774    9736 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0501 02:12:37.747292    9736 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0501 02:12:37.784721    9736 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 02:12:37.820463    9736 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 02:12:37.842189    9736 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 02:12:37.842189    9736 kubeadm.go:156] found existing configuration files:
	
	I0501 02:12:37.858608    9736 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 02:12:37.880090    9736 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 02:12:37.893764    9736 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 02:12:37.930208    9736 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 02:12:37.950218    9736 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 02:12:37.963234    9736 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 02:12:37.999436    9736 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 02:12:38.020436    9736 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 02:12:38.034852    9736 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 02:12:38.071197    9736 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 02:12:38.095241    9736 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 02:12:38.110316    9736 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0501 02:12:38.130970    9736 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0501 02:12:38.398459    9736 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0501 02:12:52.291907    9736 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0501 02:12:52.291907    9736 kubeadm.go:309] [preflight] Running pre-flight checks
	I0501 02:12:52.292126    9736 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0501 02:12:52.292486    9736 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0501 02:12:52.292643    9736 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0501 02:12:52.292848    9736 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0501 02:12:52.297044    9736 out.go:204]   - Generating certificates and keys ...
	I0501 02:12:52.297180    9736 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0501 02:12:52.297363    9736 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0501 02:12:52.297363    9736 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0501 02:12:52.297363    9736 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0501 02:12:52.297999    9736 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0501 02:12:52.298382    9736 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0501 02:12:52.298441    9736 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0501 02:12:52.298441    9736 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-286100 localhost] and IPs [172.28.215.237 127.0.0.1 ::1]
	I0501 02:12:52.298984    9736 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0501 02:12:52.299270    9736 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-286100 localhost] and IPs [172.28.215.237 127.0.0.1 ::1]
	I0501 02:12:52.299330    9736 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0501 02:12:52.299330    9736 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0501 02:12:52.299330    9736 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0501 02:12:52.299330    9736 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0501 02:12:52.299922    9736 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0501 02:12:52.300084    9736 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0501 02:12:52.300084    9736 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0501 02:12:52.300084    9736 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0501 02:12:52.300084    9736 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0501 02:12:52.300084    9736 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0501 02:12:52.300818    9736 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0501 02:12:52.304097    9736 out.go:204]   - Booting up control plane ...
	I0501 02:12:52.304097    9736 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0501 02:12:52.305177    9736 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0501 02:12:52.305463    9736 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0501 02:12:52.305761    9736 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0501 02:12:52.305761    9736 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0501 02:12:52.306344    9736 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0501 02:12:52.306344    9736 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0501 02:12:52.306344    9736 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0501 02:12:52.306344    9736 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.694676ms
	I0501 02:12:52.307079    9736 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0501 02:12:52.307079    9736 kubeadm.go:309] [api-check] The API server is healthy after 7.503148969s
	I0501 02:12:52.307433    9736 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0501 02:12:52.307805    9736 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0501 02:12:52.307805    9736 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0501 02:12:52.308244    9736 kubeadm.go:309] [mark-control-plane] Marking the node addons-286100 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0501 02:12:52.308244    9736 kubeadm.go:309] [bootstrap-token] Using token: xu20vm.6arw4ukzpmgj1kvx
	I0501 02:12:52.311894    9736 out.go:204]   - Configuring RBAC rules ...
	I0501 02:12:52.311894    9736 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0501 02:12:52.311894    9736 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0501 02:12:52.312857    9736 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0501 02:12:52.312857    9736 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0501 02:12:52.312857    9736 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0501 02:12:52.312857    9736 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0501 02:12:52.312857    9736 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0501 02:12:52.312857    9736 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0501 02:12:52.313962    9736 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0501 02:12:52.313962    9736 kubeadm.go:309] 
	I0501 02:12:52.314136    9736 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0501 02:12:52.314171    9736 kubeadm.go:309] 
	I0501 02:12:52.314335    9736 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0501 02:12:52.314335    9736 kubeadm.go:309] 
	I0501 02:12:52.314335    9736 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0501 02:12:52.314618    9736 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0501 02:12:52.314653    9736 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0501 02:12:52.314653    9736 kubeadm.go:309] 
	I0501 02:12:52.314895    9736 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0501 02:12:52.314930    9736 kubeadm.go:309] 
	I0501 02:12:52.315143    9736 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0501 02:12:52.315143    9736 kubeadm.go:309] 
	I0501 02:12:52.315143    9736 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0501 02:12:52.315542    9736 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0501 02:12:52.315710    9736 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0501 02:12:52.315749    9736 kubeadm.go:309] 
	I0501 02:12:52.315898    9736 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0501 02:12:52.316151    9736 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0501 02:12:52.316151    9736 kubeadm.go:309] 
	I0501 02:12:52.316352    9736 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token xu20vm.6arw4ukzpmgj1kvx \
	I0501 02:12:52.316488    9736 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:4798dcaffc6298c78af5ad06745de18c1231853c65c9db4cd09ab1be96f5e875 \
	I0501 02:12:52.316488    9736 kubeadm.go:309] 	--control-plane 
	I0501 02:12:52.316488    9736 kubeadm.go:309] 
	I0501 02:12:52.316876    9736 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0501 02:12:52.316933    9736 kubeadm.go:309] 
	I0501 02:12:52.317079    9736 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token xu20vm.6arw4ukzpmgj1kvx \
	I0501 02:12:52.317287    9736 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:4798dcaffc6298c78af5ad06745de18c1231853c65c9db4cd09ab1be96f5e875 
	I0501 02:12:52.317287    9736 cni.go:84] Creating CNI manager for ""
	I0501 02:12:52.317287    9736 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0501 02:12:52.319810    9736 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0501 02:12:52.338383    9736 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0501 02:12:52.361239    9736 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
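The 496-byte payload written to /etc/cni/net.d/1-k8s.conflist is minikube's own bridge CNI config, matching the pod CIDR 10.244.0.0/16 chosen earlier. Its exact contents are not echoed in the log; a sketch of the general shape, with every field value an assumption for illustration:

package sketch

import "encoding/json"

// bridgeConflist renders a minimal CNI bridge configuration of the kind
// minikube installs at /etc/cni/net.d/1-k8s.conflist. Field values are
// assumptions; the log only records that a 496-byte file was written.
func bridgeConflist(podCIDR string) ([]byte, error) {
	return json.MarshalIndent(map[string]any{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]any{{
			"type":        "bridge",
			"bridge":      "bridge",
			"ipMasq":      true,
			"hairpinMode": true,
			"ipam": map[string]any{
				"type":   "host-local",
				"subnet": podCIDR,
			},
		}},
	}, "", "  ")
}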
	I0501 02:12:52.398069    9736 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0501 02:12:52.414490    9736 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:12:52.415546    9736 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-286100 minikube.k8s.io/updated_at=2024_05_01T02_12_52_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e minikube.k8s.io/name=addons-286100 minikube.k8s.io/primary=true
	I0501 02:12:52.424616    9736 ops.go:34] apiserver oom_adj: -16
	I0501 02:12:52.641900    9736 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:12:53.138532    9736 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:12:53.643525    9736 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:12:54.151544    9736 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:12:54.650422    9736 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:12:55.141368    9736 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:12:55.644935    9736 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:12:56.149605    9736 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:12:56.653574    9736 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:12:57.141387    9736 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:12:57.643733    9736 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:12:58.143377    9736 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:12:58.646485    9736 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:12:59.139809    9736 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:12:59.641419    9736 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:13:00.139450    9736 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:13:00.643866    9736 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:13:01.148844    9736 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:13:01.639070    9736 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:13:02.144063    9736 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:13:02.645252    9736 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:13:03.150130    9736 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:13:03.641198    9736 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:13:04.141251    9736 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:13:04.653273    9736 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:13:05.138981    9736 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:13:05.280303    9736 kubeadm.go:1107] duration metric: took 12.8819894s to wait for elevateKubeSystemPrivileges
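The burst of identical "get sa default" probes above is a readiness poll: minikube retries until the default ServiceAccount exists before elevating kube-system privileges, which is why the same command repeats for ~13s. An equivalent manual loop (the kubectl invocation is verbatim from the log; the sleep interval is illustrative):

    # Retry until the default ServiceAccount is created by the controller manager
    until sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done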
	W0501 02:13:05.280440    9736 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0501 02:13:05.280440    9736 kubeadm.go:393] duration metric: took 27.5886139s to StartCluster
	I0501 02:13:05.280440    9736 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:13:05.280440    9736 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0501 02:13:05.281718    9736 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:13:05.283412    9736 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0501 02:13:05.283711    9736 start.go:234] Will wait 6m0s for node &{Name: IP:172.28.215.237 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0501 02:13:05.283711    9736 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0501 02:13:05.286930    9736 out.go:177] * Verifying Kubernetes components...
	I0501 02:13:05.283920    9736 addons.go:69] Setting helm-tiller=true in profile "addons-286100"
	I0501 02:13:05.283920    9736 addons.go:69] Setting cloud-spanner=true in profile "addons-286100"
	I0501 02:13:05.287129    9736 addons.go:234] Setting addon cloud-spanner=true in "addons-286100"
	I0501 02:13:05.283920    9736 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-286100"
	I0501 02:13:05.287282    9736 host.go:66] Checking if "addons-286100" exists ...
	I0501 02:13:05.283920    9736 addons.go:69] Setting default-storageclass=true in profile "addons-286100"
	I0501 02:13:05.287375    9736 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-286100"
	I0501 02:13:05.283920    9736 addons.go:69] Setting yakd=true in profile "addons-286100"
	I0501 02:13:05.287432    9736 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-286100"
	I0501 02:13:05.287432    9736 addons.go:234] Setting addon yakd=true in "addons-286100"
	I0501 02:13:05.283920    9736 addons.go:69] Setting gcp-auth=true in profile "addons-286100"
	I0501 02:13:05.283920    9736 addons.go:69] Setting registry=true in profile "addons-286100"
	I0501 02:13:05.283920    9736 addons.go:69] Setting ingress-dns=true in profile "addons-286100"
	I0501 02:13:05.283920    9736 addons.go:69] Setting ingress=true in profile "addons-286100"
	I0501 02:13:05.283920    9736 addons.go:69] Setting storage-provisioner=true in profile "addons-286100"
	I0501 02:13:05.283920    9736 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-286100"
	I0501 02:13:05.283920    9736 addons.go:69] Setting volumesnapshots=true in profile "addons-286100"
	I0501 02:13:05.283920    9736 addons.go:69] Setting inspektor-gadget=true in profile "addons-286100"
	I0501 02:13:05.283920    9736 addons.go:69] Setting metrics-server=true in profile "addons-286100"
	I0501 02:13:05.283920    9736 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-286100"
	I0501 02:13:05.283920    9736 config.go:182] Loaded profile config "addons-286100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:13:05.287024    9736 addons.go:234] Setting addon helm-tiller=true in "addons-286100"
	I0501 02:13:05.287432    9736 host.go:66] Checking if "addons-286100" exists ...
	I0501 02:13:05.287432    9736 addons.go:234] Setting addon registry=true in "addons-286100"
	I0501 02:13:05.290575    9736 host.go:66] Checking if "addons-286100" exists ...
	I0501 02:13:05.291265    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-286100 ).state
	I0501 02:13:05.291265    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-286100 ).state
	I0501 02:13:05.287432    9736 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-286100"
	I0501 02:13:05.287432    9736 addons.go:234] Setting addon ingress-dns=true in "addons-286100"
	I0501 02:13:05.287432    9736 addons.go:234] Setting addon ingress=true in "addons-286100"
	I0501 02:13:05.287432    9736 addons.go:234] Setting addon storage-provisioner=true in "addons-286100"
	I0501 02:13:05.287432    9736 addons.go:234] Setting addon metrics-server=true in "addons-286100"
	I0501 02:13:05.294066    9736 host.go:66] Checking if "addons-286100" exists ...
	I0501 02:13:05.287432    9736 addons.go:234] Setting addon volumesnapshots=true in "addons-286100"
	I0501 02:13:05.287432    9736 addons.go:234] Setting addon inspektor-gadget=true in "addons-286100"
	I0501 02:13:05.294066    9736 host.go:66] Checking if "addons-286100" exists ...
	I0501 02:13:05.294066    9736 host.go:66] Checking if "addons-286100" exists ...
	I0501 02:13:05.287432    9736 host.go:66] Checking if "addons-286100" exists ...
	I0501 02:13:05.287432    9736 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-286100"
	I0501 02:13:05.295091    9736 host.go:66] Checking if "addons-286100" exists ...
	I0501 02:13:05.287432    9736 mustload.go:65] Loading cluster: addons-286100
	I0501 02:13:05.288617    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-286100 ).state
	I0501 02:13:05.288617    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-286100 ).state
	I0501 02:13:05.290575    9736 host.go:66] Checking if "addons-286100" exists ...
	I0501 02:13:05.294066    9736 host.go:66] Checking if "addons-286100" exists ...
	I0501 02:13:05.294066    9736 host.go:66] Checking if "addons-286100" exists ...
	I0501 02:13:05.294066    9736 host.go:66] Checking if "addons-286100" exists ...
	I0501 02:13:05.295091    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-286100 ).state
	I0501 02:13:05.296082    9736 config.go:182] Loaded profile config "addons-286100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:13:05.298090    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-286100 ).state
	I0501 02:13:05.299149    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-286100 ).state
	I0501 02:13:05.304506    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-286100 ).state
	I0501 02:13:05.309641    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-286100 ).state
	I0501 02:13:05.312629    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-286100 ).state
	I0501 02:13:05.313686    9736 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:13:05.314637    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-286100 ).state
	I0501 02:13:05.316638    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-286100 ).state
	I0501 02:13:05.316638    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-286100 ).state
	I0501 02:13:05.317636    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-286100 ).state
	I0501 02:13:05.343177    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-286100 ).state
	I0501 02:13:07.420640    9736 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (2.1371667s)
	I0501 02:13:07.421637    9736 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.28.208.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0501 02:13:07.421637    9736 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (2.1079358s)
	I0501 02:13:07.445620    9736 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 02:13:09.413576    9736 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.28.208.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.9919244s)
	I0501 02:13:09.413576    9736 start.go:946] {"host.minikube.internal": 172.28.208.1} host record injected into CoreDNS's ConfigMap
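The sed pipeline above splices a hosts block ahead of the forward directive and a log directive ahead of errors. A quick way to confirm the patched Corefile; the expected fragment below is reconstructed directly from the sed expressions in the log:

    # Expected fragment of the patched Corefile:
    #         log
    #         errors
    #         ...
    #         hosts {
    #            172.28.208.1 host.minikube.internal
    #            fallthrough
    #         }
    #         forward . /etc/resolv.conf
    sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'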
	I0501 02:13:09.422582    9736 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.9769478s)
	I0501 02:13:09.426584    9736 node_ready.go:35] waiting up to 6m0s for node "addons-286100" to be "Ready" ...
	I0501 02:13:09.636519    9736 node_ready.go:49] node "addons-286100" has status "Ready":"True"
	I0501 02:13:09.636519    9736 node_ready.go:38] duration metric: took 209.933ms for node "addons-286100" to be "Ready" ...
	I0501 02:13:09.636519    9736 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 02:13:09.683268    9736 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4x5jj" in "kube-system" namespace to be "Ready" ...
	I0501 02:13:10.173414    9736 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-286100" context rescaled to 1 replicas
	I0501 02:13:11.807863    9736 pod_ready.go:102] pod "coredns-7db6d8ff4d-4x5jj" in "kube-system" namespace has status "Ready":"False"
	I0501 02:13:11.950486    9736 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:13:11.950486    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:13:11.958189    9736 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0501 02:13:11.951464    9736 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:13:11.956372    9736 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:13:11.962178    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:13:11.962614    9736 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0501 02:13:11.962614    9736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0501 02:13:11.962614    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-286100 ).state
	I0501 02:13:11.967574    9736 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-286100"
	I0501 02:13:11.967574    9736 host.go:66] Checking if "addons-286100" exists ...
	I0501 02:13:11.967574    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:13:11.977871    9736 out.go:177]   - Using image docker.io/registry:2.8.3
	I0501 02:13:11.971238    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-286100 ).state
	I0501 02:13:11.997625    9736 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0501 02:13:12.009972    9736 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0501 02:13:12.010973    9736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0501 02:13:11.998998    9736 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:13:12.010973    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-286100 ).state
	I0501 02:13:12.010973    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:13:12.029771    9736 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0501 02:13:12.035735    9736 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0501 02:13:12.021776    9736 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:13:12.045167    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:13:12.052194    9736 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0
	I0501 02:13:12.064672    9736 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0501 02:13:12.063404    9736 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0501 02:13:12.069409    9736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0501 02:13:12.069409    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-286100 ).state
	I0501 02:13:12.074635    9736 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0501 02:13:12.091397    9736 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0501 02:13:12.080405    9736 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:13:12.098541    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:13:12.111964    9736 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0501 02:13:12.117880    9736 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0501 02:13:12.165697    9736 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0501 02:13:12.158689    9736 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0501 02:13:12.187677    9736 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:13:12.194339    9736 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:13:12.194339    9736 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0501 02:13:12.199404    9736 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:13:12.201344    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:13:12.205333    9736 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0501 02:13:12.201344    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:13:12.201344    9736 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0501 02:13:12.201344    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:13:12.203336    9736 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:13:12.208382    9736 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:13:12.213704    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:13:12.217339    9736 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0501 02:13:12.214338    9736 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0501 02:13:12.214338    9736 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0501 02:13:12.214338    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:13:12.228185    9736 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0501 02:13:12.232950    9736 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0501 02:13:12.236955    9736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0501 02:13:12.236955    9736 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0501 02:13:12.236955    9736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0501 02:13:12.240955    9736 addons.go:234] Setting addon default-storageclass=true in "addons-286100"
	I0501 02:13:12.249967    9736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0501 02:13:12.249967    9736 host.go:66] Checking if "addons-286100" exists ...
	I0501 02:13:12.249967    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-286100 ).state
	I0501 02:13:12.251877    9736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0501 02:13:12.251877    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-286100 ).state
	I0501 02:13:12.251877    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-286100 ).state
	I0501 02:13:12.254884    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-286100 ).state
	I0501 02:13:12.263220    9736 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:13:12.263220    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:13:12.268076    9736 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0501 02:13:12.264663    9736 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.16
	I0501 02:13:12.264711    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-286100 ).state
	I0501 02:13:12.282258    9736 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0501 02:13:12.282258    9736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0501 02:13:12.282258    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-286100 ).state
	I0501 02:13:12.280969    9736 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0501 02:13:12.382997    9736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0501 02:13:12.384102    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-286100 ).state
	I0501 02:13:12.347381    9736 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0501 02:13:12.397785    9736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0501 02:13:12.397785    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-286100 ).state
	I0501 02:13:12.450915    9736 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:13:12.450915    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:13:12.450915    9736 host.go:66] Checking if "addons-286100" exists ...
	I0501 02:13:12.653366    9736 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:13:12.653366    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:13:12.663345    9736 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 02:13:12.668335    9736 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 02:13:12.668335    9736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0501 02:13:12.668335    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-286100 ).state
	I0501 02:13:12.826992    9736 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:13:12.826992    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:13:12.830350    9736 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0501 02:13:12.833356    9736 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0501 02:13:12.833356    9736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0501 02:13:12.833356    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-286100 ).state
	I0501 02:13:13.913444    9736 pod_ready.go:102] pod "coredns-7db6d8ff4d-4x5jj" in "kube-system" namespace has status "Ready":"False"
	I0501 02:13:16.753872    9736 pod_ready.go:102] pod "coredns-7db6d8ff4d-4x5jj" in "kube-system" namespace has status "Ready":"False"
	I0501 02:13:17.896719    9736 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:13:17.896719    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:13:17.903760    9736 out.go:177]   - Using image docker.io/busybox:stable
	I0501 02:13:17.906711    9736 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0501 02:13:17.909718    9736 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0501 02:13:17.909718    9736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0501 02:13:17.909718    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-286100 ).state
	I0501 02:13:18.187441    9736 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:13:18.187441    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:13:18.187441    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-286100 ).networkadapters[0]).ipaddresses[0]
	I0501 02:13:18.240933    9736 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:13:18.240933    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:13:18.240933    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-286100 ).networkadapters[0]).ipaddresses[0]
	I0501 02:13:18.243550    9736 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:13:18.243550    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:13:18.244480    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-286100 ).networkadapters[0]).ipaddresses[0]
	I0501 02:13:18.501181    9736 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:13:18.501181    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:13:18.501181    9736 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0501 02:13:18.501181    9736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0501 02:13:18.501181    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-286100 ).state
	I0501 02:13:18.506645    9736 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:13:18.506645    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:13:18.506645    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-286100 ).networkadapters[0]).ipaddresses[0]
	I0501 02:13:18.608687    9736 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:13:18.608687    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:13:18.608687    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-286100 ).networkadapters[0]).ipaddresses[0]
	I0501 02:13:18.693532    9736 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:13:18.693532    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:13:18.693532    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-286100 ).networkadapters[0]).ipaddresses[0]
	I0501 02:13:18.750972    9736 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:13:18.750972    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:13:18.750972    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-286100 ).networkadapters[0]).ipaddresses[0]
	I0501 02:13:18.854265    9736 pod_ready.go:102] pod "coredns-7db6d8ff4d-4x5jj" in "kube-system" namespace has status "Ready":"False"
	I0501 02:13:18.896357    9736 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:13:18.896357    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:13:18.896357    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-286100 ).networkadapters[0]).ipaddresses[0]
	I0501 02:13:18.898499    9736 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:13:18.898499    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:13:18.930119    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-286100 ).networkadapters[0]).ipaddresses[0]
	I0501 02:13:19.039533    9736 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:13:19.039533    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:13:19.039533    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-286100 ).networkadapters[0]).ipaddresses[0]
	I0501 02:13:19.832656    9736 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:13:19.832656    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:13:19.832656    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-286100 ).networkadapters[0]).ipaddresses[0]
	I0501 02:13:19.842652    9736 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:13:19.842652    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:13:19.842652    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-286100 ).networkadapters[0]).ipaddresses[0]
	I0501 02:13:20.325806    9736 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0501 02:13:20.325806    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-286100 ).state
	I0501 02:13:20.920127    9736 pod_ready.go:102] pod "coredns-7db6d8ff4d-4x5jj" in "kube-system" namespace has status "Ready":"False"
	I0501 02:13:23.230516    9736 pod_ready.go:102] pod "coredns-7db6d8ff4d-4x5jj" in "kube-system" namespace has status "Ready":"False"
	I0501 02:13:24.397026    9736 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:13:24.397026    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:13:24.397026    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-286100 ).networkadapters[0]).ipaddresses[0]
	I0501 02:13:24.715990    9736 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:13:24.715990    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:13:24.716279    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-286100 ).networkadapters[0]).ipaddresses[0]
	I0501 02:13:25.383309    9736 main.go:141] libmachine: [stdout =====>] : 172.28.215.237
	
	I0501 02:13:25.383309    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:13:25.384439    9736 sshutil.go:53] new ssh client: &{IP:172.28.215.237 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-286100\id_rsa Username:docker}
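A manual equivalent of the ssh client constructed above, with the IP, key path, and username taken verbatim from the log (run from the Windows host):

    ssh -i "C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-286100\id_rsa" docker@172.28.215.237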
	I0501 02:13:25.545790    9736 main.go:141] libmachine: [stdout =====>] : 172.28.215.237
	
	I0501 02:13:25.545790    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:13:25.547790    9736 sshutil.go:53] new ssh client: &{IP:172.28.215.237 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-286100\id_rsa Username:docker}
	I0501 02:13:25.624747    9736 main.go:141] libmachine: [stdout =====>] : 172.28.215.237
	
	I0501 02:13:25.624747    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:13:25.625768    9736 sshutil.go:53] new ssh client: &{IP:172.28.215.237 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-286100\id_rsa Username:docker}
	I0501 02:13:25.679516    9736 main.go:141] libmachine: [stdout =====>] : 172.28.215.237
	
	I0501 02:13:25.680517    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:13:25.680517    9736 sshutil.go:53] new ssh client: &{IP:172.28.215.237 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-286100\id_rsa Username:docker}
	I0501 02:13:25.713030    9736 pod_ready.go:102] pod "coredns-7db6d8ff4d-4x5jj" in "kube-system" namespace has status "Ready":"False"
	I0501 02:13:25.808866    9736 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0501 02:13:25.808866    9736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0501 02:13:25.832163    9736 main.go:141] libmachine: [stdout =====>] : 172.28.215.237
	
	I0501 02:13:25.832216    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:13:25.832900    9736 sshutil.go:53] new ssh client: &{IP:172.28.215.237 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-286100\id_rsa Username:docker}
	I0501 02:13:25.943440    9736 main.go:141] libmachine: [stdout =====>] : 172.28.215.237
	
	I0501 02:13:25.943440    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:13:25.944373    9736 sshutil.go:53] new ssh client: &{IP:172.28.215.237 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-286100\id_rsa Username:docker}
	I0501 02:13:25.976870    9736 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0501 02:13:25.977512    9736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0501 02:13:26.027903    9736 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0501 02:13:26.027903    9736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0501 02:13:26.041300    9736 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0501 02:13:26.041300    9736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0501 02:13:26.160848    9736 main.go:141] libmachine: [stdout =====>] : 172.28.215.237
	
	I0501 02:13:26.160848    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:13:26.161530    9736 sshutil.go:53] new ssh client: &{IP:172.28.215.237 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-286100\id_rsa Username:docker}
	I0501 02:13:26.241337    9736 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0501 02:13:26.241337    9736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0501 02:13:26.247116    9736 main.go:141] libmachine: [stdout =====>] : 172.28.215.237
	
	I0501 02:13:26.247116    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:13:26.247402    9736 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0501 02:13:26.247402    9736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0501 02:13:26.248146    9736 sshutil.go:53] new ssh client: &{IP:172.28.215.237 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-286100\id_rsa Username:docker}
	I0501 02:13:26.251347    9736 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0501 02:13:26.251347    9736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0501 02:13:26.340379    9736 main.go:141] libmachine: [stdout =====>] : 172.28.215.237
	
	I0501 02:13:26.340479    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:13:26.341543    9736 sshutil.go:53] new ssh client: &{IP:172.28.215.237 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-286100\id_rsa Username:docker}
	I0501 02:13:26.362835    9736 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0501 02:13:26.362882    9736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0501 02:13:26.379636    9736 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:13:26.379710    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:13:26.379785    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-286100 ).networkadapters[0]).ipaddresses[0]
	I0501 02:13:26.422228    9736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0501 02:13:26.482104    9736 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0501 02:13:26.482104    9736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0501 02:13:26.485094    9736 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0501 02:13:26.485094    9736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0501 02:13:26.501123    9736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0501 02:13:26.511372    9736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0501 02:13:26.568635    9736 main.go:141] libmachine: [stdout =====>] : 172.28.215.237
	
	I0501 02:13:26.568635    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:13:26.569296    9736 sshutil.go:53] new ssh client: &{IP:172.28.215.237 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-286100\id_rsa Username:docker}
	I0501 02:13:26.648687    9736 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0501 02:13:26.648769    9736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0501 02:13:26.653263    9736 main.go:141] libmachine: [stdout =====>] : 172.28.215.237
	
	I0501 02:13:26.653678    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:13:26.654512    9736 sshutil.go:53] new ssh client: &{IP:172.28.215.237 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-286100\id_rsa Username:docker}
	I0501 02:13:26.710739    9736 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0501 02:13:26.710739    9736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0501 02:13:26.730366    9736 main.go:141] libmachine: [stdout =====>] : 172.28.215.237
	
	I0501 02:13:26.730366    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:13:26.730881    9736 sshutil.go:53] new ssh client: &{IP:172.28.215.237 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-286100\id_rsa Username:docker}
	I0501 02:13:26.812157    9736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0501 02:13:26.910365    9736 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0501 02:13:26.910365    9736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0501 02:13:26.923375    9736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0501 02:13:26.970204    9736 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0501 02:13:26.970204    9736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0501 02:13:26.998176    9736 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0501 02:13:26.998176    9736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0501 02:13:27.032174    9736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0501 02:13:27.138176    9736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0501 02:13:27.208937    9736 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0501 02:13:27.208937    9736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0501 02:13:27.243657    9736 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0501 02:13:27.243657    9736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0501 02:13:27.272197    9736 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0501 02:13:27.272197    9736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0501 02:13:27.326401    9736 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0501 02:13:27.326484    9736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0501 02:13:27.392095    9736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 02:13:27.406993    9736 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0501 02:13:27.406993    9736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0501 02:13:27.577017    9736 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0501 02:13:27.577017    9736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0501 02:13:27.644765    9736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0501 02:13:27.655039    9736 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0501 02:13:27.655110    9736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0501 02:13:27.714440    9736 pod_ready.go:102] pod "coredns-7db6d8ff4d-4x5jj" in "kube-system" namespace has status "Ready":"False"
	I0501 02:13:27.735922    9736 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0501 02:13:27.735922    9736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0501 02:13:27.874610    9736 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0501 02:13:27.874744    9736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0501 02:13:27.893809    9736 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0501 02:13:27.893809    9736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0501 02:13:28.076877    9736 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.6546375s)
	I0501 02:13:28.250302    9736 main.go:141] libmachine: [stdout =====>] : 172.28.215.237
	
	I0501 02:13:28.250302    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:13:28.252003    9736 sshutil.go:53] new ssh client: &{IP:172.28.215.237 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-286100\id_rsa Username:docker}
	I0501 02:13:28.265057    9736 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0501 02:13:28.265057    9736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0501 02:13:28.303635    9736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0501 02:13:28.335135    9736 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0501 02:13:28.335135    9736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0501 02:13:28.381933    9736 main.go:141] libmachine: [stdout =====>] : 172.28.215.237
	
	I0501 02:13:28.381933    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:13:28.381933    9736 sshutil.go:53] new ssh client: &{IP:172.28.215.237 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-286100\id_rsa Username:docker}
	I0501 02:13:28.576521    9736 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0501 02:13:28.576521    9736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0501 02:13:28.764545    9736 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0501 02:13:28.764545    9736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0501 02:13:29.182828    9736 main.go:141] libmachine: [stdout =====>] : 172.28.215.237
	
	I0501 02:13:29.182828    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:13:29.184088    9736 sshutil.go:53] new ssh client: &{IP:172.28.215.237 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-286100\id_rsa Username:docker}
	I0501 02:13:29.260210    9736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0501 02:13:29.338494    9736 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0501 02:13:29.338494    9736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0501 02:13:29.611342    9736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0501 02:13:29.699419    9736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0501 02:13:29.707880    9736 pod_ready.go:97] pod "coredns-7db6d8ff4d-4x5jj" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-01 02:13:29 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-01 02:13:07 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-01 02:13:07 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-01 02:13:07 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-01 02:13:06 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.28.215.237 HostIPs:[{IP:172.28.215.237}] PodIP: PodIPs:[] StartTime:2024-05-01 02:13:07 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-05-01 02:13:18 +0000 UTC,FinishedAt:2024-05-01 02:13:28 +0000 UTC,ContainerID:docker://1c1f04cd26f996da457c51260136484ce5c6759121d243687bece6a97253263d,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 ContainerID:docker://1c1f04cd26f996da457c51260136484ce5c6759121d243687bece6a97253263d Started:0xc002d76000 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0501 02:13:29.707880    9736 pod_ready.go:81] duration metric: took 20.0234507s for pod "coredns-7db6d8ff4d-4x5jj" in "kube-system" namespace to be "Ready" ...
	E0501 02:13:29.707880    9736 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-7db6d8ff4d-4x5jj" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-01 02:13:29 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-01 02:13:07 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-01 02:13:07 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-01 02:13:07 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-01 02:13:06 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.28.215.237 HostIPs:[{IP:172.28.215.237}] PodIP: PodIPs:[] StartTime:2024-05-01 02:13:07 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-05-01 02:13:18 +0000 UTC,FinishedAt:2024-05-01 02:13:28 +0000 UTC,ContainerID:docker://1c1f04cd26f996da457c51260136484ce5c6759121d243687bece6a97253263d,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 ContainerID:docker://1c1f04cd26f996da457c51260136484ce5c6759121d243687bece6a97253263d Started:0xc002d76000 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0501 02:13:29.707880    9736 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4xjl8" in "kube-system" namespace to be "Ready" ...
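The Succeeded phase above is consistent with the earlier rescale: kapi.go:248 logged the coredns deployment being rescaled to 1 replica at 02:13:10, so the first replica exited Completed and the wait moves on to the surviving pod. A manual check of what remains (binary and kubeconfig paths verbatim from the log; the k8s-app=kube-dns selector is the one listed in the system-critical pod wait above):

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.30.0/kubectl -n kube-system get pods -l k8s-app=kube-dns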
	I0501 02:13:29.741587    9736 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0501 02:13:29.741587    9736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0501 02:13:30.337430    9736 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0501 02:13:30.338442    9736 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0501 02:13:30.338442    9736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0501 02:13:30.892937    9736 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.3815327s)
	I0501 02:13:30.895860    9736 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.3947054s)
	I0501 02:13:30.895929    9736 addons.go:470] Verifying addon registry=true in "addons-286100"
	I0501 02:13:30.898806    9736 out.go:177] * Verifying registry addon...
	I0501 02:13:30.904157    9736 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0501 02:13:31.032326    9736 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0501 02:13:31.032326    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
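The kapi.go poll above repeats until the labelled pod leaves Pending; outside the harness the same wait can be approximated with kubectl (a sketch, assuming the 6m budget the test uses for its other Ready waits):

	kubectl --context addons-286100 -n kube-system wait --for=condition=Ready pod -l kubernetes.io/minikube-addons=registry --timeout=6m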
	I0501 02:13:31.143622    9736 addons.go:234] Setting addon gcp-auth=true in "addons-286100"
	I0501 02:13:31.143622    9736 host.go:66] Checking if "addons-286100" exists ...
	I0501 02:13:31.145009    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-286100 ).state
	I0501 02:13:31.293399    9736 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0501 02:13:31.293399    9736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0501 02:13:31.459820    9736 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0501 02:13:31.459820    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:13:31.876298    9736 pod_ready.go:102] pod "coredns-7db6d8ff4d-4xjl8" in "kube-system" namespace has status "Ready":"False"
	I0501 02:13:31.889722    9736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0501 02:13:31.968409    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:13:32.317234    9736 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (5.5048261s)
	I0501 02:13:32.496052    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:13:32.938739    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:13:33.435784    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:13:33.627073    9736 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:13:33.627073    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:13:33.643063    9736 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0501 02:13:33.643063    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-286100 ).state
	I0501 02:13:33.918717    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:13:34.227314    9736 pod_ready.go:102] pod "coredns-7db6d8ff4d-4xjl8" in "kube-system" namespace has status "Ready":"False"
	I0501 02:13:34.455269    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:13:35.225712    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:13:35.472780    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:13:35.945378    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:13:36.075840    9736 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:13:36.075840    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:13:36.075840    9736 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-286100 ).networkadapters[0]).ipaddresses[0]
	I0501 02:13:36.294594    9736 pod_ready.go:102] pod "coredns-7db6d8ff4d-4xjl8" in "kube-system" namespace has status "Ready":"False"
	I0501 02:13:36.480607    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:13:36.991715    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:13:37.604376    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:13:37.936929    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:13:38.338756    9736 pod_ready.go:102] pod "coredns-7db6d8ff4d-4xjl8" in "kube-system" namespace has status "Ready":"False"
	I0501 02:13:38.427163    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:13:38.967786    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:13:39.010145    9736 main.go:141] libmachine: [stdout =====>] : 172.28.215.237
	
	I0501 02:13:39.010520    9736 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:13:39.011508    9736 sshutil.go:53] new ssh client: &{IP:172.28.215.237 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-286100\id_rsa Username:docker}
	I0501 02:13:39.450109    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:13:39.929411    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:13:40.395087    9736 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (13.3627552s)
	I0501 02:13:40.395147    9736 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (13.2568742s)
	I0501 02:13:40.395227    9736 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (13.003038s)
	I0501 02:13:40.397820    9736 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-286100 service yakd-dashboard -n yakd-dashboard
	
	I0501 02:13:40.395424    9736 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (12.7505305s)
	I0501 02:13:40.395615    9736 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (11.1353234s)
	I0501 02:13:40.395721    9736 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (12.0918916s)
	I0501 02:13:40.395842    9736 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (10.7844218s)
	I0501 02:13:40.395842    9736 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.6963449s)
	I0501 02:13:40.396432    9736 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (13.4719054s)
	I0501 02:13:40.410453    9736 addons.go:470] Verifying addon ingress=true in "addons-286100"
	I0501 02:13:40.410453    9736 addons.go:470] Verifying addon metrics-server=true in "addons-286100"
	W0501 02:13:40.410453    9736 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0501 02:13:40.424445    9736 out.go:177] * Verifying ingress addon...
	I0501 02:13:40.425451    9736 retry.go:31] will retry after 194.17988ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
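The apply failure is a CRD establishment race: the VolumeSnapshotClass object is submitted in the same kubectl invocation that creates the snapshot.storage.k8s.io CRDs, so the API server can reject the new kind before discovery catches up. The harness retries (and, below, re-applies with --force); a manual workaround is to gate the class on the CRD's Established condition (a sketch reusing the manifest paths from the log):

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for condition=established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml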
	I0501 02:13:40.439442    9736 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0501 02:13:40.476896    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:13:40.482446    9736 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0501 02:13:40.482446    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0501 02:13:40.508785    9736 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
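The storage-provisioner-rancher warning is an optimistic-concurrency conflict: another writer updated the StorageClass between minikube's read and its write, so the stale resourceVersion was rejected. Re-reading and re-submitting clears it; a minimal retry sketch (a hypothetical loop, not what the addon itself runs):

	until kubectl --context addons-286100 patch storageclass local-path -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'; do sleep 1; done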
	I0501 02:13:40.657392    9736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0501 02:13:40.732599    9736 pod_ready.go:102] pod "coredns-7db6d8ff4d-4xjl8" in "kube-system" namespace has status "Ready":"False"
	I0501 02:13:40.920139    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:13:40.949970    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:13:41.424370    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:13:41.455385    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:13:41.924937    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:13:41.958243    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:13:42.433385    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:13:42.453671    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:13:42.771722    9736 pod_ready.go:102] pod "coredns-7db6d8ff4d-4xjl8" in "kube-system" namespace has status "Ready":"False"
	I0501 02:13:42.934463    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:13:42.952586    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:13:43.013165    9736 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (11.1233096s)
	I0501 02:13:43.013249    9736 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-286100"
	I0501 02:13:43.013165    9736 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (9.3700335s)
	I0501 02:13:43.016416    9736 out.go:177] * Verifying csi-hostpath-driver addon...
	I0501 02:13:43.019909    9736 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0501 02:13:43.021921    9736 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0501 02:13:43.025917    9736 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0501 02:13:43.028916    9736 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0501 02:13:43.028916    9736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0501 02:13:43.068563    9736 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0501 02:13:43.068563    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:13:43.126746    9736 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0501 02:13:43.126746    9736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0501 02:13:43.340749    9736 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0501 02:13:43.340749    9736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0501 02:13:43.399387    9736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0501 02:13:43.425215    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:13:43.450615    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:13:43.536984    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:13:43.880189    9736 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.2217594s)
	I0501 02:13:43.914415    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:13:43.958392    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:13:44.040535    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:13:44.422658    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:13:44.449041    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:13:44.547872    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:13:44.937848    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:13:44.970609    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:13:45.083939    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:13:45.267063    9736 pod_ready.go:102] pod "coredns-7db6d8ff4d-4xjl8" in "kube-system" namespace has status "Ready":"False"
	I0501 02:13:45.429170    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:13:45.502993    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:13:45.558547    9736 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.1590498s)
	I0501 02:13:45.566938    9736 addons.go:470] Verifying addon gcp-auth=true in "addons-286100"
	I0501 02:13:45.572302    9736 out.go:177] * Verifying gcp-auth addon...
	I0501 02:13:45.577052    9736 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0501 02:13:45.605671    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:13:45.658046    9736 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0501 02:13:45.658046    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:13:45.915750    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:13:45.948609    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:13:46.045475    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:13:46.089121    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:13:46.424266    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:13:46.456845    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:13:46.537273    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:13:46.596304    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:13:46.917140    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:13:46.946395    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:13:47.043565    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:13:47.090737    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:13:47.425592    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:13:47.456141    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:13:47.536489    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:13:47.595996    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:13:47.726226    9736 pod_ready.go:102] pod "coredns-7db6d8ff4d-4xjl8" in "kube-system" namespace has status "Ready":"False"
	I0501 02:13:47.913400    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:13:47.961254    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:13:48.042936    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:13:48.087288    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:13:48.418086    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:13:48.448200    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:13:48.546639    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:13:48.589120    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:13:48.926948    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:13:48.957868    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:13:49.039681    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:13:49.084158    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:13:49.416670    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:13:49.448344    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:13:49.545986    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:13:49.587696    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:13:49.919950    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:13:49.951218    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:13:50.047826    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:13:50.092662    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:13:50.222943    9736 pod_ready.go:102] pod "coredns-7db6d8ff4d-4xjl8" in "kube-system" namespace has status "Ready":"False"
	I0501 02:13:50.424238    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:13:50.453845    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:13:50.534838    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:13:50.594790    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:13:50.914414    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:13:50.959672    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:13:51.039733    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:13:51.082749    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:13:51.421905    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:13:51.451487    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:13:51.532706    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:13:51.597523    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:13:51.925284    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:13:51.957388    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:13:52.041573    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:13:52.084991    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:13:52.228555    9736 pod_ready.go:102] pod "coredns-7db6d8ff4d-4xjl8" in "kube-system" namespace has status "Ready":"False"
	I0501 02:13:52.416860    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:13:52.447542    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:13:52.546509    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:13:52.589980    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:13:52.924925    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:13:52.954780    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:13:53.034851    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:13:53.093257    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:13:53.411965    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:13:53.457796    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:13:53.539735    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:13:53.597731    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:13:53.915755    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:13:53.947273    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:13:54.044663    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:13:54.088599    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:13:54.423696    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:13:54.455569    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:13:54.536145    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:13:54.595997    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:13:54.726826    9736 pod_ready.go:102] pod "coredns-7db6d8ff4d-4xjl8" in "kube-system" namespace has status "Ready":"False"
	I0501 02:13:54.913903    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:13:54.960157    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:13:55.041760    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:13:55.085321    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:13:55.418374    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:13:55.448886    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:13:55.547565    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:13:55.590985    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:13:55.947922    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:13:55.954707    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:13:56.036312    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:13:56.095814    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:13:56.415568    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:13:56.460496    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:13:56.543437    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:13:56.589089    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:13:56.925066    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:13:57.207808    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:13:57.210789    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:13:57.211043    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:13:57.219672    9736 pod_ready.go:102] pod "coredns-7db6d8ff4d-4xjl8" in "kube-system" namespace has status "Ready":"False"
	I0501 02:13:58.905233    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:13:58.909681    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:13:58.913249    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:13:58.915243    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:13:58.920452    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:13:58.929079    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:13:58.930148    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:13:58.938042    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:13:58.958307    9736 pod_ready.go:92] pod "coredns-7db6d8ff4d-4xjl8" in "kube-system" namespace has status "Ready":"True"
	I0501 02:13:58.958307    9736 pod_ready.go:81] duration metric: took 29.2502142s for pod "coredns-7db6d8ff4d-4xjl8" in "kube-system" namespace to be "Ready" ...
	I0501 02:13:58.958307    9736 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-286100" in "kube-system" namespace to be "Ready" ...
	I0501 02:13:58.976282    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:13:58.979187    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:13:58.999036    9736 pod_ready.go:92] pod "etcd-addons-286100" in "kube-system" namespace has status "Ready":"True"
	I0501 02:13:58.999095    9736 pod_ready.go:81] duration metric: took 40.7876ms for pod "etcd-addons-286100" in "kube-system" namespace to be "Ready" ...
	I0501 02:13:58.999153    9736 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-286100" in "kube-system" namespace to be "Ready" ...
	I0501 02:13:59.025445    9736 pod_ready.go:92] pod "kube-apiserver-addons-286100" in "kube-system" namespace has status "Ready":"True"
	I0501 02:13:59.025518    9736 pod_ready.go:81] duration metric: took 26.2919ms for pod "kube-apiserver-addons-286100" in "kube-system" namespace to be "Ready" ...
	I0501 02:13:59.025538    9736 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-286100" in "kube-system" namespace to be "Ready" ...
	I0501 02:13:59.037242    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:13:59.041187    9736 pod_ready.go:92] pod "kube-controller-manager-addons-286100" in "kube-system" namespace has status "Ready":"True"
	I0501 02:13:59.041187    9736 pod_ready.go:81] duration metric: took 15.6484ms for pod "kube-controller-manager-addons-286100" in "kube-system" namespace to be "Ready" ...
	I0501 02:13:59.041187    9736 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-r9kpc" in "kube-system" namespace to be "Ready" ...
	I0501 02:13:59.098193    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:13:59.124910    9736 pod_ready.go:92] pod "kube-proxy-r9kpc" in "kube-system" namespace has status "Ready":"True"
	I0501 02:13:59.124910    9736 pod_ready.go:81] duration metric: took 83.7227ms for pod "kube-proxy-r9kpc" in "kube-system" namespace to be "Ready" ...
	I0501 02:13:59.124910    9736 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-286100" in "kube-system" namespace to be "Ready" ...
	I0501 02:13:59.414519    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:13:59.460422    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:13:59.521491    9736 pod_ready.go:92] pod "kube-scheduler-addons-286100" in "kube-system" namespace has status "Ready":"True"
	I0501 02:13:59.521491    9736 pod_ready.go:81] duration metric: took 396.5781ms for pod "kube-scheduler-addons-286100" in "kube-system" namespace to be "Ready" ...
	I0501 02:13:59.521491    9736 pod_ready.go:38] duration metric: took 49.8846077s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 02:13:59.521491    9736 api_server.go:52] waiting for apiserver process to appear ...
	I0501 02:13:59.536453    9736 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
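The pgrep flags mean: -f match against the full command line, -x require the pattern to match that line exactly, -n keep only the newest matching PID. The same check can be run by hand inside the VM (a sketch via the minikube ssh wrapper):

	minikube -p addons-286100 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'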
	I0501 02:13:59.547075    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:13:59.569263    9736 api_server.go:72] duration metric: took 54.2851556s to wait for apiserver process to appear ...
	I0501 02:13:59.569339    9736 api_server.go:88] waiting for apiserver healthz status ...
	I0501 02:13:59.569464    9736 api_server.go:253] Checking apiserver healthz at https://172.28.215.237:8443/healthz ...
	I0501 02:13:59.576623    9736 api_server.go:279] https://172.28.215.237:8443/healthz returned 200:
	ok
	I0501 02:13:59.579634    9736 api_server.go:141] control plane version: v1.30.0
	I0501 02:13:59.579634    9736 api_server.go:131] duration metric: took 10.2945ms to wait for apiserver health ...
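The healthz probe is a plain HTTPS GET against the apiserver; it can be reproduced outside the harness with curl (a sketch; -k skips verification of the cluster's self-signed serving certificate, or point --cacert at the profile's ca.crt instead):

	curl -k https://172.28.215.237:8443/healthz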
	I0501 02:13:59.579634    9736 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 02:13:59.583135    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:13:59.735841    9736 system_pods.go:59] 18 kube-system pods found
	I0501 02:13:59.735841    9736 system_pods.go:61] "coredns-7db6d8ff4d-4xjl8" [6f23e8cc-b5ce-4e18-a421-df93e04c2db5] Running
	I0501 02:13:59.735841    9736 system_pods.go:61] "csi-hostpath-attacher-0" [069fad50-0f17-4ead-a2ed-f386f8a83126] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0501 02:13:59.735841    9736 system_pods.go:61] "csi-hostpath-resizer-0" [8e81e43a-18c3-4966-a65f-5234153df19b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0501 02:13:59.735841    9736 system_pods.go:61] "csi-hostpathplugin-lbg9p" [dbf40b58-dabc-4f69-8e29-95870f32a4d2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0501 02:13:59.735841    9736 system_pods.go:61] "etcd-addons-286100" [083fbab3-a438-4acf-83c0-73060b5945f5] Running
	I0501 02:13:59.735841    9736 system_pods.go:61] "kube-apiserver-addons-286100" [f5bb274b-33d7-45dc-83a4-bdf0f3ac9a5b] Running
	I0501 02:13:59.735841    9736 system_pods.go:61] "kube-controller-manager-addons-286100" [25212c04-2439-4ecf-aaca-2938c8aab387] Running
	I0501 02:13:59.735841    9736 system_pods.go:61] "kube-ingress-dns-minikube" [af01edf4-6f54-4b4f-a1ea-8f7bf2d73615] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0501 02:13:59.735841    9736 system_pods.go:61] "kube-proxy-r9kpc" [469fe87b-8af8-4ebe-b55b-fa462f30fdfa] Running
	I0501 02:13:59.735841    9736 system_pods.go:61] "kube-scheduler-addons-286100" [9d88e85e-929d-4e82-8dc7-b592cb63caa2] Running
	I0501 02:13:59.735841    9736 system_pods.go:61] "metrics-server-c59844bb4-vzwxj" [d2949e26-7e88-45f4-a7c2-c5aaffe4beb8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 02:13:59.735841    9736 system_pods.go:61] "nvidia-device-plugin-daemonset-t2wt6" [8d3cb0b0-5f2a-433f-a53b-56986c0857e6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0501 02:13:59.735841    9736 system_pods.go:61] "registry-56vl8" [8f7e03d5-5db3-4ed8-95e9-8472acc1061c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0501 02:13:59.735841    9736 system_pods.go:61] "registry-proxy-jnrbf" [eb5876d4-d74b-4b9a-a081-bf0997fc06b4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0501 02:13:59.735841    9736 system_pods.go:61] "snapshot-controller-745499f584-5f6m2" [6a9210b4-9b95-40e6-bd66-37bfeb04df1f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0501 02:13:59.735841    9736 system_pods.go:61] "snapshot-controller-745499f584-zgd57" [feee1c49-f48f-460e-a645-9cf7f093d8f4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0501 02:13:59.735841    9736 system_pods.go:61] "storage-provisioner" [b1352364-acf5-4c2f-8d5b-bc9596cc43cf] Running
	I0501 02:13:59.735841    9736 system_pods.go:61] "tiller-deploy-6677d64bcd-jlfgj" [0cfc799e-c246-4fd2-adca-3f30f53bb411] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0501 02:13:59.735841    9736 system_pods.go:74] duration metric: took 156.2062ms to wait for pod list to return data ...
	I0501 02:13:59.735841    9736 default_sa.go:34] waiting for default service account to be created ...
	I0501 02:13:59.919522    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:13:59.922012    9736 default_sa.go:45] found service account: "default"
	I0501 02:13:59.922084    9736 default_sa.go:55] duration metric: took 186.2415ms for default service account to be created ...
	I0501 02:13:59.922084    9736 system_pods.go:116] waiting for k8s-apps to be running ...
	I0501 02:13:59.947522    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:00.046524    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:00.089861    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:00.133855    9736 system_pods.go:86] 18 kube-system pods found
	I0501 02:14:00.134061    9736 system_pods.go:89] "coredns-7db6d8ff4d-4xjl8" [6f23e8cc-b5ce-4e18-a421-df93e04c2db5] Running
	I0501 02:14:00.134061    9736 system_pods.go:89] "csi-hostpath-attacher-0" [069fad50-0f17-4ead-a2ed-f386f8a83126] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0501 02:14:00.134061    9736 system_pods.go:89] "csi-hostpath-resizer-0" [8e81e43a-18c3-4966-a65f-5234153df19b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0501 02:14:00.134061    9736 system_pods.go:89] "csi-hostpathplugin-lbg9p" [dbf40b58-dabc-4f69-8e29-95870f32a4d2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0501 02:14:00.134061    9736 system_pods.go:89] "etcd-addons-286100" [083fbab3-a438-4acf-83c0-73060b5945f5] Running
	I0501 02:14:00.134131    9736 system_pods.go:89] "kube-apiserver-addons-286100" [f5bb274b-33d7-45dc-83a4-bdf0f3ac9a5b] Running
	I0501 02:14:00.134131    9736 system_pods.go:89] "kube-controller-manager-addons-286100" [25212c04-2439-4ecf-aaca-2938c8aab387] Running
	I0501 02:14:00.134131    9736 system_pods.go:89] "kube-ingress-dns-minikube" [af01edf4-6f54-4b4f-a1ea-8f7bf2d73615] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0501 02:14:00.134131    9736 system_pods.go:89] "kube-proxy-r9kpc" [469fe87b-8af8-4ebe-b55b-fa462f30fdfa] Running
	I0501 02:14:00.134131    9736 system_pods.go:89] "kube-scheduler-addons-286100" [9d88e85e-929d-4e82-8dc7-b592cb63caa2] Running
	I0501 02:14:00.134131    9736 system_pods.go:89] "metrics-server-c59844bb4-vzwxj" [d2949e26-7e88-45f4-a7c2-c5aaffe4beb8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0501 02:14:00.134131    9736 system_pods.go:89] "nvidia-device-plugin-daemonset-t2wt6" [8d3cb0b0-5f2a-433f-a53b-56986c0857e6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0501 02:14:00.134228    9736 system_pods.go:89] "registry-56vl8" [8f7e03d5-5db3-4ed8-95e9-8472acc1061c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0501 02:14:00.134228    9736 system_pods.go:89] "registry-proxy-jnrbf" [eb5876d4-d74b-4b9a-a081-bf0997fc06b4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0501 02:14:00.134228    9736 system_pods.go:89] "snapshot-controller-745499f584-5f6m2" [6a9210b4-9b95-40e6-bd66-37bfeb04df1f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0501 02:14:00.134228    9736 system_pods.go:89] "snapshot-controller-745499f584-zgd57" [feee1c49-f48f-460e-a645-9cf7f093d8f4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0501 02:14:00.134295    9736 system_pods.go:89] "storage-provisioner" [b1352364-acf5-4c2f-8d5b-bc9596cc43cf] Running
	I0501 02:14:00.134295    9736 system_pods.go:89] "tiller-deploy-6677d64bcd-jlfgj" [0cfc799e-c246-4fd2-adca-3f30f53bb411] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0501 02:14:00.134295    9736 system_pods.go:126] duration metric: took 212.2093ms to wait for k8s-apps to be running ...
	I0501 02:14:00.134295    9736 system_svc.go:44] waiting for kubelet service to be running ....
	I0501 02:14:00.146978    9736 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:14:00.182390    9736 system_svc.go:56] duration metric: took 48.0948ms WaitForService to wait for kubelet
	I0501 02:14:00.182513    9736 kubeadm.go:576] duration metric: took 54.898278s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 02:14:00.182513    9736 node_conditions.go:102] verifying NodePressure condition ...
	I0501 02:14:00.327415    9736 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 02:14:00.327524    9736 node_conditions.go:123] node cpu capacity is 2
	I0501 02:14:00.327590    9736 node_conditions.go:105] duration metric: took 145.0762ms to run NodePressure ...
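The NodePressure step reads the ephemeral-storage and cpu figures straight from the Node status; the same numbers are visible with kubectl (a sketch, assuming the node carries the profile name as is usual for single-node minikube):

	kubectl --context addons-286100 get node addons-286100 -o jsonpath='{.status.capacity}'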
	I0501 02:14:00.327590    9736 start.go:240] waiting for startup goroutines ...
	I0501 02:14:00.423914    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:00.453000    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:00.534180    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:00.742929    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:01.021473    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:01.022509    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:01.035011    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:01.095775    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:01.422006    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:01.462795    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:01.550228    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:01.607544    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:01.919091    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:01.950195    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:02.050724    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:02.106396    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:02.420909    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:02.458044    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:02.546315    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:02.584846    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:02.920093    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:02.950677    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:03.034751    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:03.092506    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:03.411474    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:03.457068    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:03.537674    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:03.582774    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:03.918239    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:03.947834    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:04.044691    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:04.087894    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:04.422441    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:04.450627    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:04.534125    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:04.594222    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:04.911930    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:04.958452    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:05.040840    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:05.086849    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:05.421929    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:05.452524    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:05.534622    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:05.596116    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:05.914435    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:05.961009    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:06.042863    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:06.094520    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:06.421120    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:06.452200    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:06.534542    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:06.592997    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:06.916212    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:06.959989    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:07.040688    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:07.089776    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:07.420486    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:07.450820    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:07.546629    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:07.590568    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:07.925554    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:07.957192    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:08.038546    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:08.097016    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:08.419273    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:08.448705    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:08.542360    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:08.586752    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:08.919581    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:08.950287    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:09.047630    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:09.092145    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:09.424211    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:09.464848    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:09.536192    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:09.595371    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:09.916346    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:09.946502    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:10.042501    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:10.087792    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:10.418481    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:10.446664    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:10.542236    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:10.587409    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:10.921505    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:10.950788    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:11.049286    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:11.093465    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:11.430432    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:11.454630    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:11.533868    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:11.593306    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:11.911565    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:11.957517    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:12.040600    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:12.098286    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:12.418737    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:12.448505    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:12.541058    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:12.586085    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:12.921525    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:12.949884    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:13.048867    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:13.091265    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:13.419025    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:13.445997    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:13.550424    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:13.587100    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:13.921742    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:13.953726    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:14.033204    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:14.095908    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:14.412311    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:14.459898    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:14.540437    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:14.583419    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:14.918133    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:14.949661    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:15.047852    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:15.094897    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:15.423836    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:15.453399    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:15.535183    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:15.596048    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:15.916286    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:15.960900    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:16.043888    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:16.086870    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:16.422457    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:16.452836    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:16.534659    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:16.594348    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:16.913191    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:16.961906    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:17.045626    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:17.088718    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:17.421940    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:17.451539    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:17.546955    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:17.748998    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:18.098435    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:18.098435    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:18.105283    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:18.107739    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:18.421362    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:18.450000    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:18.546832    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:18.592463    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:18.926183    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:18.957720    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:19.039374    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:19.087725    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:19.415668    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:19.469675    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:19.562684    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:19.590467    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:19.921890    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:19.952349    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:20.047067    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:20.090001    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:20.426200    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:20.453203    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:20.535435    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:20.594734    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:20.914938    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:20.961018    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:21.042127    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:21.087279    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:21.418697    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:21.448021    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:21.544233    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:21.587206    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:22.244769    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:22.246845    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:22.247072    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:22.252369    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:22.421945    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:22.449057    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:22.544824    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:22.586882    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:22.925590    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:22.945763    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:23.043996    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:23.086205    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:23.420495    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:23.448763    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:23.541941    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:23.604984    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:23.920082    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:23.948924    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:24.048473    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:24.090481    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:24.425039    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:24.462649    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:24.543583    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:24.591174    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:24.927880    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:24.970576    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:25.047041    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:25.097458    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:25.418480    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:25.467932    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:25.553174    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:25.585446    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:25.926722    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:25.950886    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:26.046825    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:26.089867    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:26.425041    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:26.454784    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:26.543990    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:26.595157    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:26.917547    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:26.985856    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:27.043431    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:27.087715    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:27.425656    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:27.453862    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:27.533189    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:27.592651    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:27.917728    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:27.954323    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:28.044410    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:28.087868    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:28.422570    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:28.454013    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:28.538520    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:28.595714    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:28.912628    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:28.959174    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:29.040538    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:29.085459    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:29.421098    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:29.448710    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:29.546238    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:29.590539    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:29.926205    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:29.953409    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:30.034606    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:30.094165    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:30.412855    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:30.459558    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:30.538929    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:30.597076    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:30.919083    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:30.947331    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:31.044050    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:31.087422    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:31.422900    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:31.452949    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:31.555483    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:32.100654    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:32.100654    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:32.101681    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:32.102655    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:32.475366    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:32.476100    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:32.476813    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:32.548912    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:32.706620    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:32.926788    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:32.953665    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:33.037155    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:33.095790    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:33.416698    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:33.460923    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:33.546695    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:33.585258    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:33.920249    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:33.949220    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:34.047223    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:34.090240    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:34.429259    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:34.456294    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:34.538310    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:34.597131    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:34.917905    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:34.948631    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:35.046660    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:35.089055    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:35.438588    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:35.456054    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:35.536671    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:35.594682    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:35.930228    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:35.955652    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:36.037699    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:36.096215    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:36.459471    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:36.470802    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:36.548137    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:36.583721    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:36.918204    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:36.950293    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:37.047774    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:37.091248    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:37.427306    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:37.455431    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:37.536759    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:37.596172    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:37.915218    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:37.947840    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:38.044316    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:38.089375    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:38.425011    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:38.452009    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:38.534008    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:38.593562    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:38.913363    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:38.959265    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:39.039378    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:39.098391    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:39.421586    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:39.455858    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:39.557514    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:39.591329    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:39.933343    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:39.955318    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:40.035265    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:40.095073    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:40.423137    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:40.458422    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:40.534588    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:40.708438    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:40.955806    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:40.957791    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:41.055040    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:41.089342    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:41.428333    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:41.487548    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:41.535408    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:41.603942    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:41.927796    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:41.973308    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:42.088552    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:42.117557    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:42.419533    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:42.458151    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:42.550920    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:42.602653    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:42.919303    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:42.948809    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:43.043729    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:43.087207    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:43.427680    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:43.455290    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:43.541527    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:43.597682    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:43.918750    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:43.949506    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:44.045313    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:44.092986    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:44.422337    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:44.450341    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:44.533390    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:44.591661    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:44.927261    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:44.956351    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:45.031215    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:45.083226    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:45.421884    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:45.452956    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:45.546837    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:45.591360    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:45.923767    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:45.954666    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:46.036676    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:46.095368    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:46.414473    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:46.460362    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:46.544818    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:46.587942    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:46.923565    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:46.953154    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:47.046720    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:47.088405    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:47.422077    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:47.452860    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:47.534526    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:47.593636    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:47.918893    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:47.962203    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:48.040011    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:48.087158    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:48.416712    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:48.446792    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:48.544288    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:48.589359    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:48.927056    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:48.954334    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:49.035568    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:49.096941    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:49.418029    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:49.448922    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:49.544026    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:49.587448    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:49.924102    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:49.953761    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:50.035319    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:50.092230    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:50.430087    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:50.457576    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:50.540246    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:50.597169    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:50.925617    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:50.956246    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:51.037778    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:51.193865    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:51.545134    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:51.549631    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:51.554624    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:51.583566    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:51.918520    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:51.949919    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:52.044613    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:52.090897    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:52.424655    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:52.451839    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:52.550383    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:52.592235    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:52.912432    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:52.960717    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:53.044447    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:53.086009    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:53.416267    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:53.462954    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:53.546815    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:53.593180    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:54.193169    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:54.198801    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:54.201223    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:54.201854    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:54.502574    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:54.502684    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:54.538648    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:55.305036    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:55.307544    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:55.311591    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:55.313202    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:55.446510    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:55.449047    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:55.451789    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:55.751134    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:55.752322    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:55.922227    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:55.952257    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:56.036068    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:56.093241    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:56.434741    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0501 02:14:56.455160    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:56.537912    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:56.597416    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:56.928719    9736 kapi.go:107] duration metric: took 1m26.0239347s to wait for kubernetes.io/minikube-addons=registry ...
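The kapi.go:96 lines above and below are minikube's addon readiness polling: each label selector (registry, ingress-nginx, csi-hostpath-driver, gcp-auth) is re-checked on an interval until its pod leaves Pending, and kapi.go:107 then logs the total wait as a duration metric, as it just did here for the registry selector after 1m26s. A minimal stdlib-only Go sketch of that poll-with-deadline shape (hypothetical names and intervals for illustration; not minikube's actual kapi code):

package main

import (
	"fmt"
	"log"
	"time"
)

// waitForSelector polls check every interval until it reports ready or the
// deadline expires, logging the current state on each pass -- the same shape
// as the kapi.go:96 "waiting for pod" lines in this log.
func waitForSelector(selector string, interval, timeout time.Duration,
	check func() (state string, ready bool)) error {
	start := time.Now()
	deadline := start.Add(timeout)
	for {
		state, ready := check()
		if ready {
			// Mirrors the kapi.go:107 "duration metric" line.
			log.Printf("duration metric: took %s to wait for %s ...",
				time.Since(start), selector)
			return nil
		}
		log.Printf("waiting for pod %q, current state: %s", selector, state)
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", selector)
		}
		time.Sleep(interval)
	}
}

func main() {
	n := 0
	err := waitForSelector("kubernetes.io/minikube-addons=registry",
		500*time.Millisecond, 5*time.Second,
		func() (string, bool) {
			n++
			if n > 3 { // pretend the pod turns Running on the 4th check
				return "Running", true
			}
			return "Pending: [<nil>]", false
		})
	if err != nil {
		log.Fatal(err)
	}
}

The three remaining selectors keep emitting "waiting for pod" lines below because their checks are still returning Pending, so their loops continue until their own kapi.go:107 duration metrics (or a timeout) appear.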
	I0501 02:14:56.960945    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:57.042390    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:57.096946    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:57.479548    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:57.546092    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:57.585592    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:57.950873    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:58.046813    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:58.091771    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:58.453264    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:58.547531    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:58.591212    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:58.957200    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:59.038768    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:59.086239    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:59.462016    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:14:59.541377    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:14:59.583570    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:14:59.962260    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:00.044627    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:00.090872    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:00.460745    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:00.559645    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:00.592443    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:00.956492    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:01.039107    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:01.096854    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:01.448766    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:01.544362    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:01.588319    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:01.953015    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:02.049851    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:02.092993    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:02.458770    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:02.537840    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:02.598531    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:02.950659    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:03.046489    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:03.088209    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:03.456235    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:03.536502    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:03.594189    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:03.956958    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:04.036338    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:04.095691    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:04.447019    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:04.543441    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:04.586332    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:05.548846    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:05.557098    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:05.566577    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:05.570445    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:05.574717    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:05.893682    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:06.424373    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:06.424846    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:06.428693    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:06.454092    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:06.546118    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:06.590257    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:06.956271    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:07.037117    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:07.094707    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:07.461562    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:07.543219    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:07.585746    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:07.952690    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:08.048909    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:08.091409    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:08.458334    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:08.538204    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:08.597582    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:08.947220    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:09.055526    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:09.086006    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:09.470561    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:09.568113    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:09.601674    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:09.954851    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:10.038421    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:10.098399    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:10.461853    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:10.539851    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:10.582413    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:10.949427    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:11.046015    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:11.091042    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:11.452869    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:11.547318    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:11.590431    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:11.958449    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:12.037936    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:12.096772    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:12.461594    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:12.543073    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:12.587382    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:12.955162    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:13.048540    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:13.102635    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:13.457282    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:13.541543    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:13.596424    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:13.963109    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:14.052220    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:14.403497    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:14.452467    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:14.548094    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:14.593565    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:14.957443    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:15.037025    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:15.096094    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:15.463377    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:15.540310    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:15.597907    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:15.962178    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:16.042093    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:16.087353    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:16.449930    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:16.545856    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:16.590715    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:16.952477    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:17.031969    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:17.090931    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:17.458899    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:17.538294    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:17.582318    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:17.964363    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:18.045236    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:18.303567    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:18.891751    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:18.892991    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:18.893049    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:18.958442    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:19.047682    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:19.810542    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:19.816178    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:19.817668    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:19.821389    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:19.957001    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:20.040395    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:20.099885    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:20.450102    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:20.554316    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:20.591479    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:20.960515    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:21.049118    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:21.094396    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:21.458361    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:21.547464    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:21.598098    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:21.946914    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:22.043150    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:22.089805    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:22.454762    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:22.539675    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:22.595889    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:22.958535    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:23.040950    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:23.096271    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:23.448200    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:23.544875    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:23.589975    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:23.958842    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:24.039373    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:24.096244    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:24.470894    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:24.541202    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:24.583518    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:24.954285    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:25.033874    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:25.092870    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:25.468585    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:25.554146    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:25.583232    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:26.745271    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:26.837008    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:26.839009    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:26.839009    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:26.847565    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:26.849141    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:26.958330    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:27.036372    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:27.099746    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:27.466880    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:27.544304    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:27.587301    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:27.983551    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:28.094092    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:28.099025    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:28.454119    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:28.540610    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:28.595657    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:28.961769    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:29.039190    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:29.084932    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:29.451283    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:29.546949    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:29.587121    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:30.036733    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:30.046377    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:30.101027    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:30.456016    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:30.539208    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:30.596114    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:30.961689    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:31.041995    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:31.085664    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:31.449492    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:31.546397    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:31.590595    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:31.954109    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:32.036273    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:32.094727    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:32.457948    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:32.540176    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:32.582671    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:32.947529    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:33.043691    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:33.087590    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:33.451097    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:33.539179    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:33.810519    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:33.958741    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:34.042317    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:34.096447    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:34.460205    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:34.539658    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:34.585021    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:34.953966    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:35.104358    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:35.178519    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:35.455996    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:35.538660    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:35.597121    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:35.948526    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:36.044047    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:36.087781    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:36.450545    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:36.545953    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:36.589780    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:36.970431    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:37.039818    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:37.100172    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:37.464381    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:37.548340    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:37.589250    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:37.954133    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:38.049501    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:38.090727    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:38.457289    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:38.544535    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:38.595788    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:38.947449    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:39.403066    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:39.403066    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:39.454271    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:39.556904    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:39.591561    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:39.960596    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:40.036274    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:40.106089    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:40.464286    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:40.544172    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:40.589910    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:40.947958    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:41.043882    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:41.086569    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:41.453901    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:41.534715    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:41.600718    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:41.959067    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:42.047283    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:42.087227    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:42.457100    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:42.540472    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:42.597240    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:42.959739    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:43.051881    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:43.107892    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:43.466279    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:43.545538    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:43.588704    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:43.954536    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:44.035130    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:44.092416    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:44.462999    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:44.535445    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:44.598186    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:44.948944    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:45.046350    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:45.091564    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:45.456320    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:45.539586    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:45.597343    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:45.948724    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:46.037251    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:46.095710    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:46.598683    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:46.600188    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:46.610196    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:46.962363    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:47.049375    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:47.097943    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:47.454459    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:47.539344    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:47.594853    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:47.968419    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:48.034043    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:48.094985    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:48.450366    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:48.547164    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:48.590288    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:48.954416    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:49.033712    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:49.092976    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:49.459145    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:49.537261    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:49.597561    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:49.960332    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:50.038534    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:50.098437    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:50.462272    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:50.542956    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:50.586409    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:50.953567    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:51.043944    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:51.094190    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:51.457500    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:51.541526    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:51.597786    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:51.957245    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:52.044146    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:52.092065    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:52.449703    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:52.547793    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:52.595318    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:52.960593    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:53.040219    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:53.097728    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:53.448427    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:53.555205    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:53.586932    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:53.953241    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:54.033887    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:54.092733    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:54.458379    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:54.548035    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:54.598466    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:54.950661    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:55.051867    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:55.098005    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:55.453104    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:55.534555    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:55.597976    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:55.957348    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:56.039673    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:56.090611    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:56.450696    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:56.545801    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:56.588808    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:56.958830    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:57.057851    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:57.104634    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:57.463404    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:57.552144    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:57.590447    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:57.951791    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:58.050628    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:58.090917    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:58.453572    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:58.535280    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:58.593913    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:58.958325    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:59.041652    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:59.096204    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:59.448121    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:15:59.542004    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:15:59.589180    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:15:59.952620    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:16:00.048390    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:00.091272    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:16:00.457297    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:16:00.543514    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:00.586026    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:16:00.950606    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:16:01.046683    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:01.095459    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:16:01.457526    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:16:01.539619    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:01.597526    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:16:01.961109    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:16:02.043283    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:02.085240    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:16:02.459409    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:16:02.571559    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:02.598261    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:16:02.954765    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:16:03.033935    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:03.102614    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:16:03.484617    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:16:03.555810    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:03.584921    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:16:03.949209    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:16:04.047186    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:04.088355    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:16:04.459435    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:16:04.537834    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:04.596629    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:16:04.961882    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:16:05.043567    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:05.087808    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:16:05.453162    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:16:05.548750    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:05.596681    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:16:05.956825    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:16:06.039450    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:06.097701    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:16:06.451877    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:16:06.547181    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:06.591077    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:16:07.346381    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:16:07.349160    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:16:07.352803    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:07.739241    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:16:07.742342    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:07.744134    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:16:08.107851    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:16:08.109053    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:16:08.115416    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:08.456413    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:16:08.540279    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:08.582951    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:16:08.950486    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:16:09.044849    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:09.091693    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:16:09.457716    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:16:09.534337    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:09.595332    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:16:09.966060    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:16:10.046064    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:10.090058    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:16:10.450059    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:16:10.548657    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:10.594143    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:16:10.961470    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:16:11.041210    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:11.086212    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:16:11.455958    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:16:11.549939    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:11.593242    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:16:11.955925    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:16:12.037389    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:12.095636    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:16:12.460227    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:16:12.541634    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:12.586095    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:16:12.951692    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:16:13.049425    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:13.092557    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:16:13.456734    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:16:13.537403    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:13.596621    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:16:13.961679    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:16:14.044125    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:14.086745    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:16:14.457013    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:16:14.533557    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:14.594777    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:16:14.959570    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:16:15.040280    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:15.086854    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:16:15.451146    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:16:15.545673    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:15.588987    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:16:15.956626    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:16:16.035633    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:16.094032    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:16:16.462568    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:16:16.553270    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:17.247165    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:16:17.252962    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:16:17.255399    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:17.257397    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:16:18.444512    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:16:18.445100    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:18.447005    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:16:18.547526    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:16:18.584624    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:16:18.588656    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:18.589613    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:16:18.598123    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:18.949015    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:16:19.048161    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:19.092203    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:16:19.454106    9736 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0501 02:16:19.555069    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:19.593449    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:16:19.956712    9736 kapi.go:107] duration metric: took 2m39.5161083s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0501 02:16:20.037229    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:20.096471    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:16:20.542480    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:20.602746    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:16:21.047746    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:21.123266    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:16:21.548757    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:21.592134    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:16:22.037564    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:22.099177    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:16:22.542520    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:22.588174    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:16:23.049387    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:23.092883    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:16:23.554303    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:23.990568    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:16:24.042643    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:24.085071    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:16:24.548456    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:24.591608    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0501 02:16:25.040748    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:25.111766    9736 kapi.go:107] duration metric: took 2m39.5335531s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0501 02:16:25.115791    9736 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-286100 cluster.
	I0501 02:16:25.119757    9736 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0501 02:16:25.123763    9736 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0501 02:16:25.545485    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:26.033681    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:26.539003    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:27.050867    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:27.540687    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:28.059228    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:28.540226    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:29.042137    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:30.041235    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:30.069377    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:30.541514    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:31.045009    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:31.537848    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:32.047171    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:32.534629    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:33.038917    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:33.549450    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:34.038573    9736 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0501 02:16:34.540363    9736 kapi.go:107] duration metric: took 2m51.5171615s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0501 02:16:34.551632    9736 out.go:177] * Enabled addons: nvidia-device-plugin, ingress-dns, helm-tiller, cloud-spanner, storage-provisioner, yakd, inspektor-gadget, metrics-server, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0501 02:16:34.561312    9736 addons.go:505] duration metric: took 3m29.2761482s for enable addons: enabled=[nvidia-device-plugin ingress-dns helm-tiller cloud-spanner storage-provisioner yakd inspektor-gadget metrics-server default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0501 02:16:34.561443    9736 start.go:245] waiting for cluster config update ...
	I0501 02:16:34.561443    9736 start.go:254] writing updated cluster config ...
	I0501 02:16:34.578121    9736 ssh_runner.go:195] Run: rm -f paused
	I0501 02:16:34.868251    9736 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0501 02:16:34.872287    9736 out.go:177] * Done! kubectl is now configured to use "addons-286100" cluster and "default" namespace by default
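
The repeated kapi.go:96 "waiting for pod" entries above come from a simple label-selector polling loop: minikube lists the pods matching each selector, logs the observed phase, sleeps, and retries until every pod is Running or the timeout expires, at which point kapi.go:107 prints the duration metric. Below is a minimal Go sketch of that pattern, assuming client-go; waitForPods and allRunning are illustrative names, not minikube's actual kapi.go functions.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	// waitForPods polls pods matching selector in ns until all are Running or
	// the timeout elapses, mirroring the log cadence in the output above.
	func waitForPods(ctx context.Context, c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		start := time.Now()
		for time.Since(start) < timeout {
			pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 && allRunning(pods.Items) {
				fmt.Printf("duration metric: took %s to wait for %s ...\n", time.Since(start), selector)
				return nil
			}
			fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s after %s", selector, timeout)
	}

	// allRunning reports whether every pod in the list has reached phase Running.
	func allRunning(pods []corev1.Pod) bool {
		for _, p := range pods {
			if p.Status.Phase != corev1.PodRunning {
				return false
			}
		}
		return true
	}

	func main() {
		cfg, err := rest.InClusterConfig() // assumes the sketch runs inside the cluster
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitForPods(context.Background(), client, "kube-system",
			"kubernetes.io/minikube-addons=csi-hostpath-driver", 6*time.Minute); err != nil {
			panic(err)
		}
	}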
	
	
	==> Docker <==
	May 01 02:17:18 addons-286100 dockerd[1332]: time="2024-05-01T02:17:18.047884582Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 02:17:18 addons-286100 dockerd[1326]: time="2024-05-01T02:17:18.098209153Z" level=warning msg="failed to close stdin: task b9f05ae6a64c6bba4ef997e01b9b60b9faaf5c5b31ca4ef734840fd659ba9642 not found: not found"
	May 01 02:17:19 addons-286100 dockerd[1326]: time="2024-05-01T02:17:19.713364343Z" level=info msg="ignoring event" container=f628e774743f7409db361d4a3d8ff6dbb7fce50d5e65eb5d262978f6497ffac0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 02:17:19 addons-286100 dockerd[1332]: time="2024-05-01T02:17:19.714051448Z" level=info msg="shim disconnected" id=f628e774743f7409db361d4a3d8ff6dbb7fce50d5e65eb5d262978f6497ffac0 namespace=moby
	May 01 02:17:19 addons-286100 dockerd[1332]: time="2024-05-01T02:17:19.714153948Z" level=warning msg="cleaning up after shim disconnected" id=f628e774743f7409db361d4a3d8ff6dbb7fce50d5e65eb5d262978f6497ffac0 namespace=moby
	May 01 02:17:19 addons-286100 dockerd[1332]: time="2024-05-01T02:17:19.714562951Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 02:17:19 addons-286100 dockerd[1332]: time="2024-05-01T02:17:19.740795544Z" level=warning msg="cleanup warnings time=\"2024-05-01T02:17:19Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	May 01 02:17:20 addons-286100 dockerd[1326]: time="2024-05-01T02:17:20.854089112Z" level=info msg="ignoring event" container=539df955e17d2b96dd460646ba31fcbfadb4635db0f4e4a27b6510309fd046f0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 02:17:20 addons-286100 dockerd[1332]: time="2024-05-01T02:17:20.856616627Z" level=info msg="shim disconnected" id=539df955e17d2b96dd460646ba31fcbfadb4635db0f4e4a27b6510309fd046f0 namespace=moby
	May 01 02:17:20 addons-286100 dockerd[1332]: time="2024-05-01T02:17:20.856735928Z" level=warning msg="cleaning up after shim disconnected" id=539df955e17d2b96dd460646ba31fcbfadb4635db0f4e4a27b6510309fd046f0 namespace=moby
	May 01 02:17:20 addons-286100 dockerd[1332]: time="2024-05-01T02:17:20.856760928Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 02:17:21 addons-286100 dockerd[1326]: time="2024-05-01T02:17:21.123860740Z" level=info msg="ignoring event" container=ebd2bd62d57d0edd4972b70eb3ceb8bf76d6c7c5b75631c0d14419a80f411b85 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 02:17:21 addons-286100 dockerd[1332]: time="2024-05-01T02:17:21.124362043Z" level=info msg="shim disconnected" id=ebd2bd62d57d0edd4972b70eb3ceb8bf76d6c7c5b75631c0d14419a80f411b85 namespace=moby
	May 01 02:17:21 addons-286100 dockerd[1332]: time="2024-05-01T02:17:21.124441543Z" level=warning msg="cleaning up after shim disconnected" id=ebd2bd62d57d0edd4972b70eb3ceb8bf76d6c7c5b75631c0d14419a80f411b85 namespace=moby
	May 01 02:17:21 addons-286100 dockerd[1332]: time="2024-05-01T02:17:21.124458043Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 02:17:29 addons-286100 dockerd[1332]: time="2024-05-01T02:17:29.161975552Z" level=info msg="shim disconnected" id=d698a4be8c4dfd6140f3c3f82ba598089149f0343c372c60fa05669ef5459db2 namespace=moby
	May 01 02:17:29 addons-286100 dockerd[1332]: time="2024-05-01T02:17:29.162096552Z" level=warning msg="cleaning up after shim disconnected" id=d698a4be8c4dfd6140f3c3f82ba598089149f0343c372c60fa05669ef5459db2 namespace=moby
	May 01 02:17:29 addons-286100 dockerd[1332]: time="2024-05-01T02:17:29.162120052Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 02:17:29 addons-286100 dockerd[1326]: time="2024-05-01T02:17:29.164630453Z" level=info msg="ignoring event" container=d698a4be8c4dfd6140f3c3f82ba598089149f0343c372c60fa05669ef5459db2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 02:17:29 addons-286100 dockerd[1332]: time="2024-05-01T02:17:29.192215263Z" level=warning msg="cleanup warnings time=\"2024-05-01T02:17:29Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	May 01 02:17:29 addons-286100 dockerd[1326]: time="2024-05-01T02:17:29.427916647Z" level=info msg="ignoring event" container=5df06ca640f4cd2aa6d1ca20ede13bfd5598cf0655c5fb9116b78ea3b01b80c7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 02:17:29 addons-286100 dockerd[1332]: time="2024-05-01T02:17:29.428834447Z" level=info msg="shim disconnected" id=5df06ca640f4cd2aa6d1ca20ede13bfd5598cf0655c5fb9116b78ea3b01b80c7 namespace=moby
	May 01 02:17:29 addons-286100 dockerd[1332]: time="2024-05-01T02:17:29.429028147Z" level=warning msg="cleaning up after shim disconnected" id=5df06ca640f4cd2aa6d1ca20ede13bfd5598cf0655c5fb9116b78ea3b01b80c7 namespace=moby
	May 01 02:17:29 addons-286100 dockerd[1332]: time="2024-05-01T02:17:29.429425847Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 02:17:29 addons-286100 dockerd[1332]: time="2024-05-01T02:17:29.456652057Z" level=warning msg="cleanup warnings time=\"2024-05-01T02:17:29Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	b9f05ae6a64c6       alpine/helm@sha256:9d9fab00e0680f1328924429925595dfe96a68531c8a9c1518d05ee2ad45c36f                                                          15 seconds ago       Exited              helm-test                                0                   f628e774743f7       helm-test
	48d6f50e61efc       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:abef4926f3e6f0aa50c968aa954f990a6b0178e04a955293a49d96810c43d0e1                            19 seconds ago       Exited              gadget                                   3                   cccfb5003d97c       gadget-xh7x6
	faa8750eb71ac       a416a98b71e22                                                                                                                                30 seconds ago       Exited              helper-pod                               0                   00dfb2db8504d       helper-pod-delete-pvc-0464956f-1861-4caa-83a8-1de4d13a8aba
	bf186f31b7109       busybox@sha256:6776a33c72b3af7582a5b301e3a08186f2c21a3409f0d2b52dfddbdbe24a5b04                                                              46 seconds ago       Exited              busybox                                  0                   228e09c16ebf9       test-local-path
	217d62944611b       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          58 seconds ago       Running             csi-snapshotter                          0                   7c35bfc1a1ca5       csi-hostpathplugin-lbg9p
	aeba92ad08e8b       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          About a minute ago   Running             csi-provisioner                          0                   7c35bfc1a1ca5       csi-hostpathplugin-lbg9p
	8ea83e5f3790a       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            About a minute ago   Running             liveness-probe                           0                   7c35bfc1a1ca5       csi-hostpathplugin-lbg9p
	59077be624be8       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 About a minute ago   Running             gcp-auth                                 0                   b4719ad526c29       gcp-auth-5db96cd9b4-jfq5b
	70ae362f6ba4f       registry.k8s.io/ingress-nginx/controller@sha256:e24f39d3eed6bcc239a56f20098878845f62baa34b9f2be2fd2c38ce9fb0f29e                             About a minute ago   Running             controller                               0                   975a981530ab3       ingress-nginx-controller-768f948f8f-shsqs
	7a3a183f9de8b       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           About a minute ago   Running             hostpath                                 0                   7c35bfc1a1ca5       csi-hostpathplugin-lbg9p
	f7c4738ed21f1       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                About a minute ago   Running             node-driver-registrar                    0                   7c35bfc1a1ca5       csi-hostpathplugin-lbg9p
	eada26138cd44       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             About a minute ago   Running             csi-attacher                             0                   1918806c69377       csi-hostpath-attacher-0
	0f7dc1d341144       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   About a minute ago   Running             csi-external-health-monitor-controller   0                   7c35bfc1a1ca5       csi-hostpathplugin-lbg9p
	4c3c20a2727e3       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              About a minute ago   Running             csi-resizer                              0                   508c249e6cb39       csi-hostpath-resizer-0
	3999a0ba298f4       684c5ea3b61b2                                                                                                                                About a minute ago   Exited              patch                                    1                   8038436949f95       ingress-nginx-admission-patch-82sx4
	2b558a5352849       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366                   About a minute ago   Exited              create                                   0                   8452f0e7d6f3f       ingress-nginx-admission-create-qh886
	4c0c4afaf21d0       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      About a minute ago   Running             volume-snapshot-controller               0                   9795bf95daee0       snapshot-controller-745499f584-5f6m2
	40e83acbd1966       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      About a minute ago   Running             volume-snapshot-controller               0                   1c19e0f60c570       snapshot-controller-745499f584-zgd57
	0b78406afb999       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       2 minutes ago        Running             local-path-provisioner                   0                   9f1596ddf75d2       local-path-provisioner-8d985888d-7n9mn
	897f474970e4f       marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                                                        2 minutes ago        Running             yakd                                     0                   7ad4d292685c9       yakd-dashboard-5ddbf7d777-5g6gc
	05bd8e42f5965       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                                  2 minutes ago        Running             tiller                                   0                   13a433ae3662e       tiller-deploy-6677d64bcd-jlfgj
	9dd1e612b8a6f       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f                             3 minutes ago        Running             minikube-ingress-dns                     0                   dc47b31fbbb92       kube-ingress-dns-minikube
	b4a74a51506c5       nvcr.io/nvidia/k8s-device-plugin@sha256:1aff0e9f0759758f87cb158d78241472af3a76cdc631f01ab395f997fa80f707                                     3 minutes ago        Running             nvidia-device-plugin-ctr                 0                   8027c04c6d175       nvidia-device-plugin-daemonset-t2wt6
	3c7a52baeb737       6e38f40d628db                                                                                                                                3 minutes ago        Running             storage-provisioner                      0                   1b33bec42b509       storage-provisioner
	8aa55d048df72       cbb01a7bd410d                                                                                                                                4 minutes ago        Running             coredns                                  0                   f8b5fa6d18235       coredns-7db6d8ff4d-4xjl8
	dc2c7dc9750de       a0bf559e280cf                                                                                                                                4 minutes ago        Running             kube-proxy                               0                   d1c4b91e174e9       kube-proxy-r9kpc
	a17a9d527e7e7       259c8277fcbbc                                                                                                                                4 minutes ago        Running             kube-scheduler                           0                   4781192aad7ff       kube-scheduler-addons-286100
	951ee2cfdf9f5       c7aad43836fa5                                                                                                                                4 minutes ago        Running             kube-controller-manager                  0                   6e932402b3039       kube-controller-manager-addons-286100
	af67cb32e24c0       3861cfcd7c04c                                                                                                                                4 minutes ago        Running             etcd                                     0                   969b75fa9b7c3       etcd-addons-286100
	41d81602a0f01       c42f13656d0b2                                                                                                                                4 minutes ago        Running             kube-apiserver                           0                   35add0da8a51c       kube-apiserver-addons-286100
	
	
	==> controller_ingress [70ae362f6ba4] <==
	W0501 02:16:18.876983       6 client_config.go:618] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0501 02:16:18.877398       6 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0501 02:16:18.891346       6 main.go:248] "Running in Kubernetes cluster" major="1" minor="30" git="v1.30.0" state="clean" commit="7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a" platform="linux/amd64"
	I0501 02:16:19.232586       6 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0501 02:16:19.263440       6 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0501 02:16:19.281284       6 nginx.go:264] "Starting NGINX Ingress controller"
	I0501 02:16:19.305885       6 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"88307df4-d349-49cf-9952-c82007d7fc80", APIVersion:"v1", ResourceVersion:"684", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0501 02:16:19.306004       6 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"2d878dbc-1df5-46b5-8e21-b1abbd56326a", APIVersion:"v1", ResourceVersion:"685", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0501 02:16:19.306025       6 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"f1f2e3cd-1e48-465d-ab1c-5eb39a1b87dc", APIVersion:"v1", ResourceVersion:"686", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0501 02:16:20.482576       6 nginx.go:307] "Starting NGINX process"
	I0501 02:16:20.482938       6 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0501 02:16:20.483872       6 nginx.go:327] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0501 02:16:20.486403       6 controller.go:190] "Configuration changes detected, backend reload required"
	I0501 02:16:20.509283       6 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0501 02:16:20.509337       6 status.go:84] "New leader elected" identity="ingress-nginx-controller-768f948f8f-shsqs"
	I0501 02:16:20.515901       6 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-768f948f8f-shsqs" node="addons-286100"
	I0501 02:16:20.564372       6 controller.go:210] "Backend successfully reloaded"
	I0501 02:16:20.564881       6 controller.go:221] "Initial sync, sleeping for 1 second"
	I0501 02:16:20.564931       6 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-768f948f8f-shsqs", UID:"41955350-a8a5-48e6-8a9c-42d754e8dfc6", APIVersion:"v1", ResourceVersion:"726", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	  Build:         4fb5aac1dd3669daa3a14d9de3e3cdb371b4c518
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.3
	
	-------------------------------------------------------------------------------
	
	
	
	==> coredns [8aa55d048df7] <==
	[INFO] 10.244.0.8:35054 - 36907 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000141s
	[INFO] 10.244.0.8:60021 - 47632 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0001104s
	[INFO] 10.244.0.8:60021 - 5399 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0001976s
	[INFO] 10.244.0.8:45225 - 8973 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0000874s
	[INFO] 10.244.0.8:45225 - 36107 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0002569s
	[INFO] 10.244.0.8:48782 - 7027 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000266s
	[INFO] 10.244.0.8:48782 - 45425 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000227s
	[INFO] 10.244.0.8:38338 - 64655 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000341001s
	[INFO] 10.244.0.8:38338 - 39306 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.0002571s
	[INFO] 10.244.0.8:36291 - 3876 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0000778s
	[INFO] 10.244.0.8:36291 - 44601 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0001901s
	[INFO] 10.244.0.8:38343 - 54636 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0001256s
	[INFO] 10.244.0.8:38343 - 46703 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000308901s
	[INFO] 10.244.0.8:33341 - 53284 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000051301s
	[INFO] 10.244.0.8:33341 - 6433 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.0000398s
	[INFO] 10.244.0.22:42642 - 16886 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000353002s
	[INFO] 10.244.0.22:36179 - 58811 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000449103s
	[INFO] 10.244.0.22:49030 - 15286 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000213201s
	[INFO] 10.244.0.22:42969 - 33425 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.001024707s
	[INFO] 10.244.0.22:58118 - 4273 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000275302s
	[INFO] 10.244.0.22:55442 - 18741 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.001065807s
	[INFO] 10.244.0.22:41900 - 55635 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 192 0.003237822s
	[INFO] 10.244.0.22:40579 - 23431 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 240 0.004511831s
	[INFO] 10.244.0.25:47677 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000359096s
	[INFO] 10.244.0.25:32917 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000448295s
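
The NXDOMAIN/NOERROR pairs above are the normal Kubernetes DNS search-path expansion, not lookup failures: with ndots:5, a name such as registry.kube-system.svc.cluster.local is first tried with each search domain appended (producing the .kube-system.svc.cluster.local, .svc.cluster.local and .cluster.local NXDOMAIN answers) before the bare name resolves NOERROR. The /etc/resolv.conf that kubelet typically renders for a ClusterFirst pod in the kube-system namespace, and that yields exactly this query sequence, looks like the sketch below; the nameserver address is the conventional kube-dns ClusterIP and is an assumption here, not a value read from this cluster.

	# sketch of a pod's /etc/resolv.conf under ClusterFirst DNS policy
	search kube-system.svc.cluster.local svc.cluster.local cluster.local
	nameserver 10.96.0.10
	options ndots:5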
	
	
	==> describe nodes <==
	Name:               addons-286100
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-286100
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	                    minikube.k8s.io/name=addons-286100
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_01T02_12_52_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-286100
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-286100"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 May 2024 02:12:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-286100
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 May 2024 02:17:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 May 2024 02:17:29 +0000   Wed, 01 May 2024 02:12:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 May 2024 02:17:29 +0000   Wed, 01 May 2024 02:12:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 May 2024 02:17:29 +0000   Wed, 01 May 2024 02:12:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 May 2024 02:17:29 +0000   Wed, 01 May 2024 02:12:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.215.237
	  Hostname:    addons-286100
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	System Info:
	  Machine ID:                 b28ec8b61dba44909384bfe13ca935cb
	  System UUID:                83a8c858-6be3-5547-afe5-6c9226718dca
	  Boot ID:                    0428ce7f-215a-45ae-a5d6-d5d4dace7f09
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  gadget                      gadget-xh7x6                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  gcp-auth                    gcp-auth-5db96cd9b4-jfq5b                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	  ingress-nginx               ingress-nginx-controller-768f948f8f-shsqs    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         3m51s
	  kube-system                 coredns-7db6d8ff4d-4xjl8                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m26s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 csi-hostpathplugin-lbg9p                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 etcd-addons-286100                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m39s
	  kube-system                 kube-apiserver-addons-286100                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 kube-controller-manager-addons-286100        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-proxy-r9kpc                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 kube-scheduler-addons-286100                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 nvidia-device-plugin-daemonset-t2wt6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 snapshot-controller-745499f584-5f6m2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 snapshot-controller-745499f584-zgd57         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 tiller-deploy-6677d64bcd-jlfgj               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	  local-path-storage          local-path-provisioner-8d985888d-7n9mn       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-5g6gc              0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     3m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             388Mi (10%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m12s  kube-proxy       
	  Normal  Starting                 4m40s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m40s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m39s  kubelet          Node addons-286100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m39s  kubelet          Node addons-286100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m39s  kubelet          Node addons-286100 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m35s  kubelet          Node addons-286100 status is now: NodeReady
	  Normal  RegisteredNode           4m27s  node-controller  Node addons-286100 event: Registered Node addons-286100 in Controller
	
	
	==> dmesg <==
	[  +0.166566] kauditd_printk_skb: 62 callbacks suppressed
	[May 1 02:13] systemd-fstab-generator[2320]: Ignoring "noauto" option for root device
	[  +1.141100] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.924041] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.381944] kauditd_printk_skb: 23 callbacks suppressed
	[ +10.738124] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.012006] kauditd_printk_skb: 35 callbacks suppressed
	[  +5.861158] kauditd_printk_skb: 59 callbacks suppressed
	[  +5.081004] kauditd_printk_skb: 96 callbacks suppressed
	[ +14.275933] kauditd_printk_skb: 49 callbacks suppressed
	[May 1 02:14] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.480315] hrtimer: interrupt took 2697105 ns
	[May 1 02:15] kauditd_printk_skb: 24 callbacks suppressed
	[  +8.545416] kauditd_printk_skb: 6 callbacks suppressed
	[ +12.919657] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.471216] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.151579] kauditd_printk_skb: 54 callbacks suppressed
	[May 1 02:16] kauditd_printk_skb: 22 callbacks suppressed
	[ +26.663928] kauditd_printk_skb: 36 callbacks suppressed
	[ +13.213385] kauditd_printk_skb: 19 callbacks suppressed
	[  +6.095111] kauditd_printk_skb: 33 callbacks suppressed
	[May 1 02:17] kauditd_printk_skb: 22 callbacks suppressed
	[  +7.666099] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.086247] kauditd_printk_skb: 52 callbacks suppressed
	[ +15.016809] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [af67cb32e24c] <==
	{"level":"info","ts":"2024-05-01T02:16:30.031424Z","caller":"traceutil/trace.go:171","msg":"trace[489084909] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1277; }","duration":"380.902701ms","start":"2024-05-01T02:16:29.650508Z","end":"2024-05-01T02:16:30.031411Z","steps":["trace[489084909] 'range keys from in-memory index tree'  (duration: 379.879394ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T02:16:30.031611Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-01T02:16:29.650492Z","time spent":"381.091503ms","remote":"127.0.0.1:57496","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":1,"response size":521,"request content":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" "}
	{"level":"info","ts":"2024-05-01T02:16:58.676527Z","caller":"traceutil/trace.go:171","msg":"trace[1831388181] transaction","detail":"{read_only:false; response_revision:1419; number_of_response:1; }","duration":"174.956106ms","start":"2024-05-01T02:16:58.501543Z","end":"2024-05-01T02:16:58.676499Z","steps":["trace[1831388181] 'process raft request'  (duration: 174.785906ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-01T02:17:09.53153Z","caller":"traceutil/trace.go:171","msg":"trace[1418568335] linearizableReadLoop","detail":"{readStateIndex:1545; appliedIndex:1544; }","duration":"333.098904ms","start":"2024-05-01T02:17:09.198407Z","end":"2024-05-01T02:17:09.531505Z","steps":["trace[1418568335] 'read index received'  (duration: 332.864303ms)","trace[1418568335] 'applied index is now lower than readState.Index'  (duration: 231.301µs)"],"step_count":2}
	{"level":"info","ts":"2024-05-01T02:17:09.531955Z","caller":"traceutil/trace.go:171","msg":"trace[262784656] transaction","detail":"{read_only:false; response_revision:1471; number_of_response:1; }","duration":"335.478007ms","start":"2024-05-01T02:17:09.196466Z","end":"2024-05-01T02:17:09.531944Z","steps":["trace[262784656] 'process raft request'  (duration: 334.853706ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T02:17:09.532093Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-01T02:17:09.196443Z","time spent":"335.567808ms","remote":"127.0.0.1:57496","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":538,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1447 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:451 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"warn","ts":"2024-05-01T02:17:09.532807Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"334.391106ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-05-01T02:17:09.5329Z","caller":"traceutil/trace.go:171","msg":"trace[303801650] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1471; }","duration":"334.478406ms","start":"2024-05-01T02:17:09.198383Z","end":"2024-05-01T02:17:09.532861Z","steps":["trace[303801650] 'agreement among raft nodes before linearized reading'  (duration: 334.015305ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T02:17:09.532936Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-01T02:17:09.198374Z","time spent":"334.553506ms","remote":"127.0.0.1:57410","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1136,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"warn","ts":"2024-05-01T02:17:09.541516Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"216.638227ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:497"}
	{"level":"info","ts":"2024-05-01T02:17:09.541894Z","caller":"traceutil/trace.go:171","msg":"trace[792231403] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1471; }","duration":"217.686229ms","start":"2024-05-01T02:17:09.324191Z","end":"2024-05-01T02:17:09.541878Z","steps":["trace[792231403] 'agreement among raft nodes before linearized reading'  (duration: 208.947216ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-01T02:17:10.089513Z","caller":"traceutil/trace.go:171","msg":"trace[14842249] linearizableReadLoop","detail":"{readStateIndex:1546; appliedIndex:1545; }","duration":"350.85423ms","start":"2024-05-01T02:17:09.73864Z","end":"2024-05-01T02:17:10.089494Z","steps":["trace[14842249] 'read index received'  (duration: 350.70123ms)","trace[14842249] 'applied index is now lower than readState.Index'  (duration: 152.3µs)"],"step_count":2}
	{"level":"info","ts":"2024-05-01T02:17:10.089724Z","caller":"traceutil/trace.go:171","msg":"trace[1814816802] transaction","detail":"{read_only:false; response_revision:1472; number_of_response:1; }","duration":"547.830128ms","start":"2024-05-01T02:17:09.541884Z","end":"2024-05-01T02:17:10.089714Z","steps":["trace[1814816802] 'process raft request'  (duration: 547.510928ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T02:17:10.089818Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-01T02:17:09.541862Z","time spent":"547.891328ms","remote":"127.0.0.1:57410","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1458 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-05-01T02:17:10.089938Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"351.290731ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:6515"}
	{"level":"info","ts":"2024-05-01T02:17:10.089977Z","caller":"traceutil/trace.go:171","msg":"trace[25272240] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1472; }","duration":"351.470331ms","start":"2024-05-01T02:17:09.738497Z","end":"2024-05-01T02:17:10.089968Z","steps":["trace[25272240] 'agreement among raft nodes before linearized reading'  (duration: 351.06733ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T02:17:10.090017Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-01T02:17:09.738482Z","time spent":"351.529531ms","remote":"127.0.0.1:57420","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":2,"response size":6538,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
	{"level":"warn","ts":"2024-05-01T02:17:10.381554Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"150.308127ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:497"}
	{"level":"info","ts":"2024-05-01T02:17:10.381794Z","caller":"traceutil/trace.go:171","msg":"trace[1116286394] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1472; }","duration":"150.526327ms","start":"2024-05-01T02:17:10.231185Z","end":"2024-05-01T02:17:10.381711Z","steps":["trace[1116286394] 'range keys from in-memory index tree'  (duration: 150.134827ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T02:17:17.451532Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"368.991124ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-01T02:17:17.451597Z","caller":"traceutil/trace.go:171","msg":"trace[491941146] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1500; }","duration":"369.095624ms","start":"2024-05-01T02:17:17.082487Z","end":"2024-05-01T02:17:17.451583Z","steps":["trace[491941146] 'range keys from in-memory index tree'  (duration: 368.620321ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T02:17:17.451627Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-01T02:17:17.082468Z","time spent":"369.152524ms","remote":"127.0.0.1:57256","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-05-01T02:17:17.451941Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"355.801426ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:6591"}
	{"level":"info","ts":"2024-05-01T02:17:17.451988Z","caller":"traceutil/trace.go:171","msg":"trace[1451238918] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1500; }","duration":"355.896326ms","start":"2024-05-01T02:17:17.096081Z","end":"2024-05-01T02:17:17.451977Z","steps":["trace[1451238918] 'range keys from in-memory index tree'  (duration: 355.411523ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T02:17:17.452017Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-01T02:17:17.096051Z","time spent":"355.959427ms","remote":"127.0.0.1:57420","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":2,"response size":6614,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
	
	
	==> gcp-auth [59077be624be] <==
	2024/05/01 02:16:24 GCP Auth Webhook started!
	2024/05/01 02:16:35 Ready to marshal response ...
	2024/05/01 02:16:35 Ready to write response ...
	2024/05/01 02:16:35 Ready to marshal response ...
	2024/05/01 02:16:35 Ready to write response ...
	2024/05/01 02:16:45 Ready to marshal response ...
	2024/05/01 02:16:45 Ready to write response ...
	2024/05/01 02:16:55 Ready to marshal response ...
	2024/05/01 02:16:55 Ready to write response ...
	2024/05/01 02:16:59 Ready to marshal response ...
	2024/05/01 02:16:59 Ready to write response ...
	2024/05/01 02:17:02 Ready to marshal response ...
	2024/05/01 02:17:02 Ready to write response ...
	
	
	==> kernel <==
	 02:17:31 up 6 min,  0 users,  load average: 3.25, 2.84, 1.34
	Linux addons-286100 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [41d81602a0f0] <==
	W0501 02:15:33.829544       1 handler_proxy.go:93] no RequestInfo found in the context
	E0501 02:15:33.829611       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0501 02:15:33.904519       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0501 02:16:17.241035       1 trace.go:236] Trace[2133548783]: "List" accept:application/json, */*,audit-id:aaa7e724-4f56-4403-8114-117e846b729b,client:172.28.208.1,api-group:,api-version:v1,name:,subresource:,namespace:gcp-auth,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/gcp-auth/pods,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:LIST (01-May-2024 02:16:16.578) (total time: 662ms):
	Trace[2133548783]: ["List(recursive=true) etcd3" audit-id:aaa7e724-4f56-4403-8114-117e846b729b,key:/pods/gcp-auth,resourceVersion:,resourceVersionMatch:,limit:0,continue: 662ms (02:16:16.578)]
	Trace[2133548783]: [662.977731ms] [662.977731ms] END
	I0501 02:16:17.244643       1 trace.go:236] Trace[716972342]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:d36ceece-9423-4af2-a325-1e244b272aec,client:172.28.215.237,api-group:coordination.k8s.io,api-version:v1,name:addons-286100,subresource:,namespace:kube-node-lease,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/addons-286100,user-agent:kubelet/v1.30.0 (linux/amd64) kubernetes/7c48c2b,verb:PUT (01-May-2024 02:16:16.646) (total time: 598ms):
	Trace[716972342]: ["GuaranteedUpdate etcd3" audit-id:d36ceece-9423-4af2-a325-1e244b272aec,key:/leases/kube-node-lease/addons-286100,type:*coordination.Lease,resource:leases.coordination.k8s.io 598ms (02:16:16.646)
	Trace[716972342]:  ---"Txn call completed" 594ms (02:16:17.242)]
	Trace[716972342]: [598.313679ms] [598.313679ms] END
	I0501 02:16:18.438446       1 trace.go:236] Trace[521406011]: "List" accept:application/json, */*,audit-id:1ce72c6d-8e77-44f2-9424-c1279a187196,client:172.28.208.1,api-group:,api-version:v1,name:,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/kube-system/pods,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:LIST (01-May-2024 02:16:17.537) (total time: 901ms):
	Trace[521406011]: ["List(recursive=true) etcd3" audit-id:1ce72c6d-8e77-44f2-9424-c1279a187196,key:/pods/kube-system,resourceVersion:,resourceVersionMatch:,limit:0,continue: 901ms (02:16:17.537)]
	Trace[521406011]: [901.318283ms] [901.318283ms] END
	I0501 02:16:18.440031       1 trace.go:236] Trace[28252321]: "List" accept:application/json, */*,audit-id:278e4f6f-1a99-49e7-971d-7a71cc63d848,client:172.28.208.1,api-group:,api-version:v1,name:,subresource:,namespace:gcp-auth,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/gcp-auth/pods,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:LIST (01-May-2024 02:16:17.583) (total time: 856ms):
	Trace[28252321]: ["List(recursive=true) etcd3" audit-id:278e4f6f-1a99-49e7-971d-7a71cc63d848,key:/pods/gcp-auth,resourceVersion:,resourceVersionMatch:,limit:0,continue: 856ms (02:16:17.583)]
	Trace[28252321]: [856.914874ms] [856.914874ms] END
	I0501 02:16:18.442179       1 trace.go:236] Trace[574484107]: "List" accept:application/json, */*,audit-id:3d024abe-bf71-423a-a7c4-7b61ceed2658,client:172.28.208.1,api-group:,api-version:v1,name:,subresource:,namespace:ingress-nginx,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/ingress-nginx/pods,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:LIST (01-May-2024 02:16:17.443) (total time: 995ms):
	Trace[574484107]: ["List(recursive=true) etcd3" audit-id:3d024abe-bf71-423a-a7c4-7b61ceed2658,key:/pods/ingress-nginx,resourceVersion:,resourceVersionMatch:,limit:0,continue: 994ms (02:16:17.444)]
	Trace[574484107]: [995.153338ms] [995.153338ms] END
	I0501 02:17:10.091143       1 trace.go:236] Trace[754207307]: "Update" accept:application/json, */*,audit-id:f85e7f39-06e1-4f0d-a127-7ca789a5995b,client:172.28.215.237,api-group:,api-version:v1,name:k8s.io-minikube-hostpath,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (01-May-2024 02:17:09.537) (total time: 553ms):
	Trace[754207307]: ["GuaranteedUpdate etcd3" audit-id:f85e7f39-06e1-4f0d-a127-7ca789a5995b,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 552ms (02:17:09.538)
	Trace[754207307]:  ---"Txn call completed" 551ms (02:17:10.090)]
	Trace[754207307]: [553.155236ms] [553.155236ms] END
	I0501 02:17:19.318812       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [951ee2cfdf9f] <==
	I0501 02:15:52.192500       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0501 02:15:53.432287       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0501 02:15:54.008559       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0501 02:15:54.539945       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0501 02:15:54.616702       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0501 02:15:55.042305       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0501 02:15:55.074487       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0501 02:15:55.092563       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0501 02:15:55.096736       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0501 02:15:55.550592       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0501 02:15:55.577631       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0501 02:15:55.583849       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0501 02:15:55.713463       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0501 02:16:19.705711       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="97.601µs"
	I0501 02:16:24.880080       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-5db96cd9b4" duration="37.396257ms"
	I0501 02:16:24.881162       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-5db96cd9b4" duration="994.206µs"
	I0501 02:16:25.027001       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0501 02:16:25.034112       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0501 02:16:25.130585       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0501 02:16:25.156675       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0501 02:16:36.989216       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="32.619219ms"
	I0501 02:16:36.997933       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="126.001µs"
	I0501 02:16:57.320450       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="8.3µs"
	I0501 02:17:08.866160       1 replica_set.go:676] "Finished syncing" logger="replicationcontroller-controller" kind="ReplicationController" key="kube-system/registry" duration="11.2µs"
	I0501 02:17:29.044831       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cloud-spanner-emulator-6dc8d859f6" duration="5.5µs"
	
	
	==> kube-proxy [dc2c7dc9750d] <==
	I0501 02:13:18.859909       1 server_linux.go:69] "Using iptables proxy"
	I0501 02:13:18.993024       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.28.215.237"]
	I0501 02:13:19.248593       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0501 02:13:19.249458       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0501 02:13:19.250746       1 server_linux.go:165] "Using iptables Proxier"
	I0501 02:13:19.269598       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0501 02:13:19.270801       1 server.go:872] "Version info" version="v1.30.0"
	I0501 02:13:19.271188       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 02:13:19.290397       1 config.go:192] "Starting service config controller"
	I0501 02:13:19.291124       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0501 02:13:19.291443       1 config.go:101] "Starting endpoint slice config controller"
	I0501 02:13:19.291820       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0501 02:13:19.293225       1 config.go:319] "Starting node config controller"
	I0501 02:13:19.293549       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0501 02:13:19.393338       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0501 02:13:19.393917       1 shared_informer.go:320] Caches are synced for service config
	I0501 02:13:19.399566       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a17a9d527e7e] <==
	W0501 02:12:49.244493       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0501 02:12:49.244997       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0501 02:12:49.323310       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0501 02:12:49.323380       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0501 02:12:49.344421       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0501 02:12:49.344568       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0501 02:12:49.380853       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0501 02:12:49.381229       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0501 02:12:49.439657       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0501 02:12:49.439789       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0501 02:12:49.585960       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0501 02:12:49.586182       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0501 02:12:49.605837       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0501 02:12:49.606347       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0501 02:12:49.622073       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0501 02:12:49.622496       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0501 02:12:49.630813       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0501 02:12:49.630870       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0501 02:12:49.700523       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0501 02:12:49.700583       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0501 02:12:49.744404       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0501 02:12:49.744808       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0501 02:12:49.777328       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0501 02:12:49.777376       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0501 02:12:52.898714       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 01 02:17:21 addons-286100 kubelet[2115]: I0501 02:17:21.393648    2115 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v72sc\" (UniqueName: \"kubernetes.io/projected/6e2c6d2e-b904-409d-96c3-f878597af2bd-kube-api-access-v72sc\") pod \"6e2c6d2e-b904-409d-96c3-f878597af2bd\" (UID: \"6e2c6d2e-b904-409d-96c3-f878597af2bd\") "
	May 01 02:17:21 addons-286100 kubelet[2115]: I0501 02:17:21.393879    2115 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e2c6d2e-b904-409d-96c3-f878597af2bd-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "6e2c6d2e-b904-409d-96c3-f878597af2bd" (UID: "6e2c6d2e-b904-409d-96c3-f878597af2bd"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	May 01 02:17:21 addons-286100 kubelet[2115]: I0501 02:17:21.404552    2115 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^e0d4f3e3-0760-11ef-aace-dea836eabba0" (OuterVolumeSpecName: "task-pv-storage") pod "6e2c6d2e-b904-409d-96c3-f878597af2bd" (UID: "6e2c6d2e-b904-409d-96c3-f878597af2bd"). InnerVolumeSpecName "pvc-3897e494-c7ae-4c7d-a4d0-0e0598aa87df". PluginName "kubernetes.io/csi", VolumeGidValue ""
	May 01 02:17:21 addons-286100 kubelet[2115]: I0501 02:17:21.406886    2115 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e2c6d2e-b904-409d-96c3-f878597af2bd-kube-api-access-v72sc" (OuterVolumeSpecName: "kube-api-access-v72sc") pod "6e2c6d2e-b904-409d-96c3-f878597af2bd" (UID: "6e2c6d2e-b904-409d-96c3-f878597af2bd"). InnerVolumeSpecName "kube-api-access-v72sc". PluginName "kubernetes.io/projected", VolumeGidValue ""
	May 01 02:17:21 addons-286100 kubelet[2115]: I0501 02:17:21.494702    2115 reconciler_common.go:282] "operationExecutor.UnmountDevice started for volume \"pvc-3897e494-c7ae-4c7d-a4d0-0e0598aa87df\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^e0d4f3e3-0760-11ef-aace-dea836eabba0\") on node \"addons-286100\" "
	May 01 02:17:21 addons-286100 kubelet[2115]: I0501 02:17:21.494842    2115 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-v72sc\" (UniqueName: \"kubernetes.io/projected/6e2c6d2e-b904-409d-96c3-f878597af2bd-kube-api-access-v72sc\") on node \"addons-286100\" DevicePath \"\""
	May 01 02:17:21 addons-286100 kubelet[2115]: I0501 02:17:21.494865    2115 reconciler_common.go:289] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/6e2c6d2e-b904-409d-96c3-f878597af2bd-gcp-creds\") on node \"addons-286100\" DevicePath \"\""
	May 01 02:17:21 addons-286100 kubelet[2115]: I0501 02:17:21.506390    2115 operation_generator.go:1001] UnmountDevice succeeded for volume "pvc-3897e494-c7ae-4c7d-a4d0-0e0598aa87df" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^e0d4f3e3-0760-11ef-aace-dea836eabba0") on node "addons-286100"
	May 01 02:17:21 addons-286100 kubelet[2115]: I0501 02:17:21.597812    2115 reconciler_common.go:289] "Volume detached for volume \"pvc-3897e494-c7ae-4c7d-a4d0-0e0598aa87df\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^e0d4f3e3-0760-11ef-aace-dea836eabba0\") on node \"addons-286100\" DevicePath \"\""
	May 01 02:17:21 addons-286100 kubelet[2115]: I0501 02:17:21.614997    2115 scope.go:117] "RemoveContainer" containerID="539df955e17d2b96dd460646ba31fcbfadb4635db0f4e4a27b6510309fd046f0"
	May 01 02:17:21 addons-286100 kubelet[2115]: I0501 02:17:21.705498    2115 scope.go:117] "RemoveContainer" containerID="539df955e17d2b96dd460646ba31fcbfadb4635db0f4e4a27b6510309fd046f0"
	May 01 02:17:21 addons-286100 kubelet[2115]: E0501 02:17:21.708197    2115 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 539df955e17d2b96dd460646ba31fcbfadb4635db0f4e4a27b6510309fd046f0" containerID="539df955e17d2b96dd460646ba31fcbfadb4635db0f4e4a27b6510309fd046f0"
	May 01 02:17:21 addons-286100 kubelet[2115]: I0501 02:17:21.708435    2115 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"539df955e17d2b96dd460646ba31fcbfadb4635db0f4e4a27b6510309fd046f0"} err="failed to get container status \"539df955e17d2b96dd460646ba31fcbfadb4635db0f4e4a27b6510309fd046f0\": rpc error: code = Unknown desc = Error response from daemon: No such container: 539df955e17d2b96dd460646ba31fcbfadb4635db0f4e4a27b6510309fd046f0"
	May 01 02:17:21 addons-286100 kubelet[2115]: I0501 02:17:21.970108    2115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e2c6d2e-b904-409d-96c3-f878597af2bd" path="/var/lib/kubelet/pods/6e2c6d2e-b904-409d-96c3-f878597af2bd/volumes"
	May 01 02:17:21 addons-286100 kubelet[2115]: I0501 02:17:21.972641    2115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d81619f5-3c40-477f-981c-a510fdbc6d2c" path="/var/lib/kubelet/pods/d81619f5-3c40-477f-981c-a510fdbc6d2c/volumes"
	May 01 02:17:29 addons-286100 kubelet[2115]: I0501 02:17:29.692046    2115 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggssf\" (UniqueName: \"kubernetes.io/projected/57b9402e-c4da-463c-bfd2-aed7d4fe5cdb-kube-api-access-ggssf\") pod \"57b9402e-c4da-463c-bfd2-aed7d4fe5cdb\" (UID: \"57b9402e-c4da-463c-bfd2-aed7d4fe5cdb\") "
	May 01 02:17:29 addons-286100 kubelet[2115]: I0501 02:17:29.694610    2115 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57b9402e-c4da-463c-bfd2-aed7d4fe5cdb-kube-api-access-ggssf" (OuterVolumeSpecName: "kube-api-access-ggssf") pod "57b9402e-c4da-463c-bfd2-aed7d4fe5cdb" (UID: "57b9402e-c4da-463c-bfd2-aed7d4fe5cdb"). InnerVolumeSpecName "kube-api-access-ggssf". PluginName "kubernetes.io/projected", VolumeGidValue ""
	May 01 02:17:29 addons-286100 kubelet[2115]: I0501 02:17:29.793132    2115 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-ggssf\" (UniqueName: \"kubernetes.io/projected/57b9402e-c4da-463c-bfd2-aed7d4fe5cdb-kube-api-access-ggssf\") on node \"addons-286100\" DevicePath \"\""
	May 01 02:17:29 addons-286100 kubelet[2115]: I0501 02:17:29.977577    2115 scope.go:117] "RemoveContainer" containerID="d698a4be8c4dfd6140f3c3f82ba598089149f0343c372c60fa05669ef5459db2"
	May 01 02:17:30 addons-286100 kubelet[2115]: I0501 02:17:30.032080    2115 scope.go:117] "RemoveContainer" containerID="d698a4be8c4dfd6140f3c3f82ba598089149f0343c372c60fa05669ef5459db2"
	May 01 02:17:30 addons-286100 kubelet[2115]: E0501 02:17:30.034406    2115 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: d698a4be8c4dfd6140f3c3f82ba598089149f0343c372c60fa05669ef5459db2" containerID="d698a4be8c4dfd6140f3c3f82ba598089149f0343c372c60fa05669ef5459db2"
	May 01 02:17:30 addons-286100 kubelet[2115]: I0501 02:17:30.034553    2115 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"d698a4be8c4dfd6140f3c3f82ba598089149f0343c372c60fa05669ef5459db2"} err="failed to get container status \"d698a4be8c4dfd6140f3c3f82ba598089149f0343c372c60fa05669ef5459db2\": rpc error: code = Unknown desc = Error response from daemon: No such container: d698a4be8c4dfd6140f3c3f82ba598089149f0343c372c60fa05669ef5459db2"
	May 01 02:17:31 addons-286100 kubelet[2115]: I0501 02:17:31.913659    2115 scope.go:117] "RemoveContainer" containerID="48d6f50e61efc3a66d15c7f7f9a04ef79e2c913bcac75ed85f773486a7b995a5"
	May 01 02:17:31 addons-286100 kubelet[2115]: E0501 02:17:31.915292    2115 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 40s restarting failed container=gadget pod=gadget-xh7x6_gadget(a8136f47-e4b0-4e6b-9c96-9caaae6baebd)\"" pod="gadget/gadget-xh7x6" podUID="a8136f47-e4b0-4e6b-9c96-9caaae6baebd"
	May 01 02:17:31 addons-286100 kubelet[2115]: I0501 02:17:31.938541    2115 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57b9402e-c4da-463c-bfd2-aed7d4fe5cdb" path="/var/lib/kubelet/pods/57b9402e-c4da-463c-bfd2-aed7d4fe5cdb/volumes"
	
	
	==> storage-provisioner [3c7a52baeb73] <==
	I0501 02:13:40.425414       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0501 02:13:40.556526       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0501 02:13:40.556719       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0501 02:13:40.624645       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0501 02:13:40.624820       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-286100_0f658667-6b5f-46ad-b85c-0f3ff8503789!
	I0501 02:13:40.653711       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bec504d7-a666-43c6-b601-a95380dcee03", APIVersion:"v1", ResourceVersion:"732", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-286100_0f658667-6b5f-46ad-b85c-0f3ff8503789 became leader
	I0501 02:13:40.725026       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-286100_0f658667-6b5f-46ad-b85c-0f3ff8503789!
	

-- /stdout --
** stderr ** 
	W0501 02:17:22.497749    3260 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-286100 -n addons-286100
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-286100 -n addons-286100: (13.703056s)
helpers_test.go:261: (dbg) Run:  kubectl --context addons-286100 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-qh886 ingress-nginx-admission-patch-82sx4
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-286100 describe pod ingress-nginx-admission-create-qh886 ingress-nginx-admission-patch-82sx4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-286100 describe pod ingress-nginx-admission-create-qh886 ingress-nginx-admission-patch-82sx4: exit status 1 (196.4627ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-qh886" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-82sx4" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-286100 describe pod ingress-nginx-admission-create-qh886 ingress-nginx-admission-patch-82sx4: exit status 1
--- FAIL: TestAddons/parallel/Registry (72.85s)
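Editor's note: the Registry failure above reduces to the empty-stderr assertion at addons_test.go:364 tripping on a benign Docker CLI context warning (the same warning recurs in every stderr block in this report). As a minimal sketch, and not the harness's actual code, a test could strip an allowlist of known-benign stderr lines before asserting emptiness; the knownBenign list and filterStderr helper below are hypothetical names:

	package stderrfilter

	import "strings"

	// knownBenign is an assumed allowlist; its one entry matches the
	// warning seen throughout this report.
	var knownBenign = []string{
		`Unable to resolve the current Docker CLI context "default"`,
	}

	// filterStderr drops blank lines and lines containing any known-benign
	// pattern, returning only what a test would still treat as unexpected.
	func filterStderr(stderr string) string {
		var kept []string
		for _, line := range strings.Split(stderr, "\n") {
			benign := false
			for _, pattern := range knownBenign {
				if strings.Contains(line, pattern) {
					benign = true
					break
				}
			}
			if !benign && strings.TrimSpace(line) != "" {
				kept = append(kept, line)
			}
		}
		return strings.Join(kept, "\n")
	}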

TestErrorSpam/setup (202.47s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-085300 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-085300 --driver=hyperv
E0501 02:21:34.952150   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt: The system cannot find the path specified.
E0501 02:21:34.967600   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt: The system cannot find the path specified.
E0501 02:21:34.983502   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt: The system cannot find the path specified.
E0501 02:21:35.015101   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt: The system cannot find the path specified.
E0501 02:21:35.061069   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt: The system cannot find the path specified.
E0501 02:21:35.156489   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt: The system cannot find the path specified.
E0501 02:21:35.331494   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt: The system cannot find the path specified.
E0501 02:21:35.663523   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt: The system cannot find the path specified.
E0501 02:21:36.318861   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt: The system cannot find the path specified.
E0501 02:21:37.600105   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt: The system cannot find the path specified.
E0501 02:21:40.172567   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt: The system cannot find the path specified.
E0501 02:21:45.307540   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt: The system cannot find the path specified.
E0501 02:21:55.561371   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt: The system cannot find the path specified.
E0501 02:22:16.052437   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt: The system cannot find the path specified.
E0501 02:22:57.019689   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt: The system cannot find the path specified.
E0501 02:24:18.949496   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt: The system cannot find the path specified.
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-085300 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-085300 --driver=hyperv: (3m22.473162s)
error_spam_test.go:96: unexpected stderr: "W0501 02:21:16.068877   12952 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."
error_spam_test.go:110: minikube stdout:
* [nospam-085300] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
- KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
- MINIKUBE_LOCATION=18779
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the hyperv driver based on user configuration
* Starting "nospam-085300" primary control-plane node in "nospam-085300" cluster
* Creating hyperv VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-085300" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
W0501 02:21:16.068877   12952 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
--- FAIL: TestErrorSpam/setup (202.47s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (34.41s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:731: link out/minikube-windows-amd64.exe out\kubectl.exe: Cannot create a file when that file already exists.
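Editor's note: "Cannot create a file when that file already exists" is the Windows error for hard-linking onto an existing path, here a stale out\kubectl.exe left by an earlier run. A minimal sketch, assuming the shim is created with os.Link (linkFresh is a hypothetical helper, not minikube code): remove any stale target first so the step is idempotent across reruns.

	package linkshim

	import (
		"errors"
		"io/fs"
		"os"
	)

	// linkFresh deletes any stale dst before hard-linking src to it, so a
	// leftover shim from a previous run cannot fail the step.
	func linkFresh(src, dst string) error {
		if err := os.Remove(dst); err != nil && !errors.Is(err, fs.ErrNotExist) {
			return err
		}
		return os.Link(src, dst)
	}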
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-869300 -n functional-869300
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-869300 -n functional-869300: (12.2853667s)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-869300 logs -n 25: (8.7275626s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                             |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| pause   | nospam-085300 --log_dir                                     | nospam-085300     | minikube6\jenkins | v1.33.0 | 01 May 24 02:25 UTC | 01 May 24 02:25 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-085300 |                   |                   |         |                     |                     |
	|         | pause                                                       |                   |                   |         |                     |                     |
	| unpause | nospam-085300 --log_dir                                     | nospam-085300     | minikube6\jenkins | v1.33.0 | 01 May 24 02:25 UTC | 01 May 24 02:26 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-085300 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-085300 --log_dir                                     | nospam-085300     | minikube6\jenkins | v1.33.0 | 01 May 24 02:26 UTC | 01 May 24 02:26 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-085300 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-085300 --log_dir                                     | nospam-085300     | minikube6\jenkins | v1.33.0 | 01 May 24 02:26 UTC | 01 May 24 02:26 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-085300 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-085300 --log_dir                                     | nospam-085300     | minikube6\jenkins | v1.33.0 | 01 May 24 02:26 UTC | 01 May 24 02:26 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-085300 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-085300 --log_dir                                     | nospam-085300     | minikube6\jenkins | v1.33.0 | 01 May 24 02:26 UTC | 01 May 24 02:27 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-085300 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-085300 --log_dir                                     | nospam-085300     | minikube6\jenkins | v1.33.0 | 01 May 24 02:27 UTC | 01 May 24 02:27 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-085300 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| delete  | -p nospam-085300                                            | nospam-085300     | minikube6\jenkins | v1.33.0 | 01 May 24 02:27 UTC | 01 May 24 02:27 UTC |
	| start   | -p functional-869300                                        | functional-869300 | minikube6\jenkins | v1.33.0 | 01 May 24 02:27 UTC | 01 May 24 02:31 UTC |
	|         | --memory=4000                                               |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                       |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                  |                   |                   |         |                     |                     |
	| start   | -p functional-869300                                        | functional-869300 | minikube6\jenkins | v1.33.0 | 01 May 24 02:31 UTC | 01 May 24 02:33 UTC |
	|         | --alsologtostderr -v=8                                      |                   |                   |         |                     |                     |
	| cache   | functional-869300 cache add                                 | functional-869300 | minikube6\jenkins | v1.33.0 | 01 May 24 02:33 UTC | 01 May 24 02:33 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | functional-869300 cache add                                 | functional-869300 | minikube6\jenkins | v1.33.0 | 01 May 24 02:33 UTC | 01 May 24 02:34 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | functional-869300 cache add                                 | functional-869300 | minikube6\jenkins | v1.33.0 | 01 May 24 02:34 UTC | 01 May 24 02:34 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-869300 cache add                                 | functional-869300 | minikube6\jenkins | v1.33.0 | 01 May 24 02:34 UTC | 01 May 24 02:34 UTC |
	|         | minikube-local-cache-test:functional-869300                 |                   |                   |         |                     |                     |
	| cache   | functional-869300 cache delete                              | functional-869300 | minikube6\jenkins | v1.33.0 | 01 May 24 02:34 UTC | 01 May 24 02:34 UTC |
	|         | minikube-local-cache-test:functional-869300                 |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.33.0 | 01 May 24 02:34 UTC | 01 May 24 02:34 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | list                                                        | minikube          | minikube6\jenkins | v1.33.0 | 01 May 24 02:34 UTC | 01 May 24 02:34 UTC |
	| ssh     | functional-869300 ssh sudo                                  | functional-869300 | minikube6\jenkins | v1.33.0 | 01 May 24 02:34 UTC | 01 May 24 02:34 UTC |
	|         | crictl images                                               |                   |                   |         |                     |                     |
	| ssh     | functional-869300                                           | functional-869300 | minikube6\jenkins | v1.33.0 | 01 May 24 02:34 UTC | 01 May 24 02:34 UTC |
	|         | ssh sudo docker rmi                                         |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| ssh     | functional-869300 ssh                                       | functional-869300 | minikube6\jenkins | v1.33.0 | 01 May 24 02:34 UTC |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-869300 cache reload                              | functional-869300 | minikube6\jenkins | v1.33.0 | 01 May 24 02:34 UTC | 01 May 24 02:35 UTC |
	| ssh     | functional-869300 ssh                                       | functional-869300 | minikube6\jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| kubectl | functional-869300 kubectl --                                | functional-869300 | minikube6\jenkins | v1.33.0 | 01 May 24 02:35 UTC | 01 May 24 02:35 UTC |
	|         | --context functional-869300                                 |                   |                   |         |                     |                     |
	|         | get pods                                                    |                   |                   |         |                     |                     |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/01 02:31:39
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0501 02:31:39.782247    3644 out.go:291] Setting OutFile to fd 316 ...
	I0501 02:31:39.783247    3644 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:31:39.783247    3644 out.go:304] Setting ErrFile to fd 304...
	I0501 02:31:39.783247    3644 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:31:39.812768    3644 out.go:298] Setting JSON to false
	I0501 02:31:39.816754    3644 start.go:129] hostinfo: {"hostname":"minikube6","uptime":103754,"bootTime":1714426945,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0501 02:31:39.817333    3644 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0501 02:31:39.823343    3644 out.go:177] * [functional-869300] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0501 02:31:39.825737    3644 notify.go:220] Checking for updates...
	I0501 02:31:39.827700    3644 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0501 02:31:39.831708    3644 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0501 02:31:39.834779    3644 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0501 02:31:39.837701    3644 out.go:177]   - MINIKUBE_LOCATION=18779
	I0501 02:31:39.839700    3644 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0501 02:31:39.843703    3644 config.go:182] Loaded profile config "functional-869300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:31:39.843703    3644 driver.go:392] Setting default libvirt URI to qemu:///system
	I0501 02:31:45.314642    3644 out.go:177] * Using the hyperv driver based on existing profile
	I0501 02:31:45.318559    3644 start.go:297] selected driver: hyperv
	I0501 02:31:45.318559    3644 start.go:901] validating driver "hyperv" against &{Name:functional-869300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-869300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.218.182 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:31:45.318720    3644 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0501 02:31:45.375810    3644 cni.go:84] Creating CNI manager for ""
	I0501 02:31:45.375810    3644 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0501 02:31:45.376164    3644 start.go:340] cluster config:
	{Name:functional-869300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-869300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.218.182 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:31:45.376602    3644 iso.go:125] acquiring lock: {Name:mkc5178610d1c169635b8b232f2713c359020679 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 02:31:45.380301    3644 out.go:177] * Starting "functional-869300" primary control-plane node in "functional-869300" cluster
	I0501 02:31:45.382958    3644 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0501 02:31:45.383518    3644 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0501 02:31:45.383518    3644 cache.go:56] Caching tarball of preloaded images
	I0501 02:31:45.383727    3644 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0501 02:31:45.383727    3644 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
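
The cache check above skips the download because the per-version preload tarball is already on disk. A minimal sketch of that check; the v18 manifest version and naming scheme are read off the logged path, and the helper name is an assumption:

    package sketch

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // preloadTarball builds the cache path checked above and reports whether
    // the tarball already exists, in which case the download is skipped.
    func preloadTarball(minikubeHome, k8sVersion, runtime string) (string, bool) {
        name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-amd64.tar.lz4", k8sVersion, runtime)
        p := filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
        _, err := os.Stat(p)
        return p, err == nil
    }
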
	I0501 02:31:45.383727    3644 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\config.json ...
	I0501 02:31:45.386140    3644 start.go:360] acquireMachinesLock for functional-869300: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 02:31:45.386140    3644 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-869300"
	I0501 02:31:45.386140    3644 start.go:96] Skipping create...Using existing machine configuration
	I0501 02:31:45.386140    3644 fix.go:54] fixHost starting: 
	I0501 02:31:45.387222    3644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-869300 ).state
	I0501 02:31:48.180345    3644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:31:48.180345    3644 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:31:48.180345    3644 fix.go:112] recreateIfNeeded on functional-869300: state=Running err=<nil>
	W0501 02:31:48.180466    3644 fix.go:138] unexpected machine state, will restart: <nil>
	I0501 02:31:48.184110    3644 out.go:177] * Updating the running hyperv "functional-869300" VM ...
	I0501 02:31:48.186491    3644 machine.go:94] provisionDockerMachine start ...
	I0501 02:31:48.186572    3644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-869300 ).state
	I0501 02:31:50.359166    3644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:31:50.359634    3644 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:31:50.359634    3644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-869300 ).networkadapters[0]).ipaddresses[0]
	I0501 02:31:52.996426    3644 main.go:141] libmachine: [stdout =====>] : 172.28.218.182
	
	I0501 02:31:52.997300    3644 main.go:141] libmachine: [stderr =====>] : 
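
The pair of PowerShell probes above (VM state, then first adapter IP) is the Hyper-V driver's basic polling unit and recurs throughout this log. A minimal Go sketch of that pattern, assuming powershell.exe is on PATH and hard-coding the VM name for illustration:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // ps runs a single PowerShell expression, as in the [executing ==>] lines,
    // and returns its trimmed combined output.
    func ps(expr string) (string, error) {
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", expr).CombinedOutput()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        state, err := ps(`( Hyper-V\Get-VM functional-869300 ).state`)
        if err != nil {
            panic(err)
        }
        ip, err := ps(`(( Hyper-V\Get-VM functional-869300 ).networkadapters[0]).ipaddresses[0]`)
        if err != nil {
            panic(err)
        }
        fmt.Println(state, ip) // e.g. "Running 172.28.218.182"
    }
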
	I0501 02:31:53.002579    3644 main.go:141] libmachine: Using SSH client type: native
	I0501 02:31:53.003152    3644 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.218.182 22 <nil> <nil>}
	I0501 02:31:53.003152    3644 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 02:31:53.141148    3644 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-869300
	
	I0501 02:31:53.141212    3644 buildroot.go:166] provisioning hostname "functional-869300"
	I0501 02:31:53.141272    3644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-869300 ).state
	I0501 02:31:55.264202    3644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:31:55.264878    3644 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:31:55.264989    3644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-869300 ).networkadapters[0]).ipaddresses[0]
	I0501 02:31:57.872900    3644 main.go:141] libmachine: [stdout =====>] : 172.28.218.182
	
	I0501 02:31:57.872900    3644 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:31:57.879196    3644 main.go:141] libmachine: Using SSH client type: native
	I0501 02:31:57.880019    3644 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.218.182 22 <nil> <nil>}
	I0501 02:31:57.880019    3644 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-869300 && echo "functional-869300" | sudo tee /etc/hostname
	I0501 02:31:58.035815    3644 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-869300
	
	I0501 02:31:58.035815    3644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-869300 ).state
	I0501 02:32:00.164389    3644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:32:00.164389    3644 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:32:00.164648    3644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-869300 ).networkadapters[0]).ipaddresses[0]
	I0501 02:32:02.757332    3644 main.go:141] libmachine: [stdout =====>] : 172.28.218.182
	
	I0501 02:32:02.757332    3644 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:32:02.764109    3644 main.go:141] libmachine: Using SSH client type: native
	I0501 02:32:02.764894    3644 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.218.182 22 <nil> <nil>}
	I0501 02:32:02.764894    3644 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-869300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-869300/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-869300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 02:32:02.898890    3644 main.go:141] libmachine: SSH cmd err, output: <nil>: 
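
The empty output above means the guard tripped: /etc/hosts already named the host, so nothing was rewritten. A sketch of how such an idempotent command could be assembled (hostsCmd is a hypothetical helper; minikube builds the real command internally):

    package sketch

    import "fmt"

    // hostsCmd returns the /etc/hosts fixup shown above: if no line already
    // ends with the hostname, rewrite the 127.0.1.1 entry or append one.
    func hostsCmd(name string) string {
        return fmt.Sprintf(`
            if ! grep -xq '.*\s%[1]s' /etc/hosts; then
                if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
                else
                    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
                fi
            fi`, name)
    }
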
	I0501 02:32:02.898890    3644 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0501 02:32:02.898890    3644 buildroot.go:174] setting up certificates
	I0501 02:32:02.898890    3644 provision.go:84] configureAuth start
	I0501 02:32:02.898890    3644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-869300 ).state
	I0501 02:32:05.034002    3644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:32:05.034814    3644 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:32:05.034814    3644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-869300 ).networkadapters[0]).ipaddresses[0]
	I0501 02:32:07.650249    3644 main.go:141] libmachine: [stdout =====>] : 172.28.218.182
	
	I0501 02:32:07.650249    3644 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:32:07.650249    3644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-869300 ).state
	I0501 02:32:09.836380    3644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:32:09.836593    3644 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:32:09.836593    3644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-869300 ).networkadapters[0]).ipaddresses[0]
	I0501 02:32:12.413959    3644 main.go:141] libmachine: [stdout =====>] : 172.28.218.182
	
	I0501 02:32:12.414082    3644 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:32:12.414082    3644 provision.go:143] copyHostCerts
	I0501 02:32:12.414082    3644 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0501 02:32:12.414617    3644 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0501 02:32:12.414617    3644 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0501 02:32:12.415172    3644 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0501 02:32:12.416398    3644 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0501 02:32:12.416666    3644 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0501 02:32:12.416666    3644 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0501 02:32:12.417117    3644 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0501 02:32:12.417960    3644 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0501 02:32:12.418127    3644 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0501 02:32:12.418127    3644 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0501 02:32:12.418811    3644 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
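
Each copyHostCerts entry above follows a remove-then-copy pattern ("found ..., removing ..." then "cp: ..."), which keeps reruns against an existing .minikube tree idempotent. A minimal sketch of that pattern (copyFresh is a hypothetical name):

    package sketch

    import (
        "io"
        "os"
    )

    // copyFresh deletes any stale target before copying the cert back in,
    // mirroring the found/removing/cp sequence in the log.
    func copyFresh(src, dst string) error {
        if _, err := os.Stat(dst); err == nil {
            if err := os.Remove(dst); err != nil { // found existing file, removing ...
                return err
            }
        }
        in, err := os.Open(src)
        if err != nil {
            return err
        }
        defer in.Close()
        out, err := os.Create(dst)
        if err != nil {
            return err
        }
        defer out.Close()
        _, err = io.Copy(out, in)
        return err
    }
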
	I0501 02:32:12.419424    3644 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-869300 san=[127.0.0.1 172.28.218.182 functional-869300 localhost minikube]
	I0501 02:32:12.765045    3644 provision.go:177] copyRemoteCerts
	I0501 02:32:12.779322    3644 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 02:32:12.779772    3644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-869300 ).state
	I0501 02:32:14.923456    3644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:32:14.923456    3644 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:32:14.923456    3644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-869300 ).networkadapters[0]).ipaddresses[0]
	I0501 02:32:17.561249    3644 main.go:141] libmachine: [stdout =====>] : 172.28.218.182
	
	I0501 02:32:17.562073    3644 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:32:17.562209    3644 sshutil.go:53] new ssh client: &{IP:172.28.218.182 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-869300\id_rsa Username:docker}
	I0501 02:32:17.669399    3644 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8900409s)
	I0501 02:32:17.669399    3644 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0501 02:32:17.669399    3644 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 02:32:17.738054    3644 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0501 02:32:17.739041    3644 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0501 02:32:17.789054    3644 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0501 02:32:17.789054    3644 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0501 02:32:17.842886    3644 provision.go:87] duration metric: took 14.9438227s to configureAuth
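
configureAuth regenerates the server certificate with the SAN list logged above (two IPs, three DNS names) and signs it with the profile CA. A simplified standard-library sketch; the key size, serial number, and helper name are assumptions, and the expiry matches the CertExpiration value in the cluster config:

    package sketch

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // issueServerCert creates a CA-signed server certificate carrying the
    // SANs from the log: 127.0.0.1, 172.28.218.182, functional-869300,
    // localhost, minikube.
    func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.functional-869300"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration above
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"functional-869300", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.28.218.182")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        return der, key, err
    }
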
	I0501 02:32:17.842886    3644 buildroot.go:189] setting minikube options for container-runtime
	I0501 02:32:17.843242    3644 config.go:182] Loaded profile config "functional-869300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:32:17.843242    3644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-869300 ).state
	I0501 02:32:20.022439    3644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:32:20.022439    3644 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:32:20.022439    3644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-869300 ).networkadapters[0]).ipaddresses[0]
	I0501 02:32:22.606341    3644 main.go:141] libmachine: [stdout =====>] : 172.28.218.182
	
	I0501 02:32:22.606341    3644 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:32:22.612922    3644 main.go:141] libmachine: Using SSH client type: native
	I0501 02:32:22.613545    3644 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.218.182 22 <nil> <nil>}
	I0501 02:32:22.614079    3644 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0501 02:32:22.746654    3644 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0501 02:32:22.746732    3644 buildroot.go:70] root file system type: tmpfs
	I0501 02:32:22.746905    3644 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0501 02:32:22.746905    3644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-869300 ).state
	I0501 02:32:24.876926    3644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:32:24.877225    3644 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:32:24.877324    3644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-869300 ).networkadapters[0]).ipaddresses[0]
	I0501 02:32:27.497088    3644 main.go:141] libmachine: [stdout =====>] : 172.28.218.182
	
	I0501 02:32:27.497186    3644 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:32:27.503074    3644 main.go:141] libmachine: Using SSH client type: native
	I0501 02:32:27.503777    3644 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.218.182 22 <nil> <nil>}
	I0501 02:32:27.503777    3644 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0501 02:32:27.652731    3644 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0501 02:32:27.652731    3644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-869300 ).state
	I0501 02:32:29.809970    3644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:32:29.809970    3644 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:32:29.810075    3644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-869300 ).networkadapters[0]).ipaddresses[0]
	I0501 02:32:32.396084    3644 main.go:141] libmachine: [stdout =====>] : 172.28.218.182
	
	I0501 02:32:32.396084    3644 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:32:32.403702    3644 main.go:141] libmachine: Using SSH client type: native
	I0501 02:32:32.404429    3644 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.218.182 22 <nil> <nil>}
	I0501 02:32:32.404429    3644 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0501 02:32:32.554084    3644 main.go:141] libmachine: SSH cmd err, output: <nil>: 
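
The empty output above means `diff -u` found no difference, so the freshly written unit was left in place unused and Docker was not restarted. The one-liner works because diff exits non-zero exactly when the files differ, making `||` the apply-on-change branch. A sketch reconstructing the logged command (the service-name parameter is for illustration):

    package sketch

    import "fmt"

    // applyUnitCmd returns the change-detection one-liner from the log: only
    // when the new unit differs is it moved into place, followed by
    // daemon-reload, enable, and restart.
    func applyUnitCmd(svc string) string {
        return fmt.Sprintf(
            "sudo diff -u /lib/systemd/system/%[1]s.service /lib/systemd/system/%[1]s.service.new || "+
                "{ sudo mv /lib/systemd/system/%[1]s.service.new /lib/systemd/system/%[1]s.service; "+
                "sudo systemctl -f daemon-reload && sudo systemctl -f enable %[1]s && sudo systemctl -f restart %[1]s; }",
            svc)
    }
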
	I0501 02:32:32.554084    3644 machine.go:97] duration metric: took 44.3672686s to provisionDockerMachine
	I0501 02:32:32.554084    3644 start.go:293] postStartSetup for "functional-869300" (driver="hyperv")
	I0501 02:32:32.554084    3644 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 02:32:32.570045    3644 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 02:32:32.570045    3644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-869300 ).state
	I0501 02:32:34.679097    3644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:32:34.679097    3644 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:32:34.679969    3644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-869300 ).networkadapters[0]).ipaddresses[0]
	I0501 02:32:37.224418    3644 main.go:141] libmachine: [stdout =====>] : 172.28.218.182
	
	I0501 02:32:37.224665    3644 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:32:37.225088    3644 sshutil.go:53] new ssh client: &{IP:172.28.218.182 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-869300\id_rsa Username:docker}
	I0501 02:32:37.335548    3644 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7654683s)
	I0501 02:32:37.349960    3644 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 02:32:37.357693    3644 command_runner.go:130] > NAME=Buildroot
	I0501 02:32:37.357693    3644 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0501 02:32:37.357693    3644 command_runner.go:130] > ID=buildroot
	I0501 02:32:37.357693    3644 command_runner.go:130] > VERSION_ID=2023.02.9
	I0501 02:32:37.357693    3644 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0501 02:32:37.357693    3644 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 02:32:37.357693    3644 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0501 02:32:37.357693    3644 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0501 02:32:37.359185    3644 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> 142882.pem in /etc/ssl/certs
	I0501 02:32:37.359185    3644 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /etc/ssl/certs/142882.pem
	I0501 02:32:37.360132    3644 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\14288\hosts -> hosts in /etc/test/nested/copy/14288
	I0501 02:32:37.360212    3644 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\14288\hosts -> /etc/test/nested/copy/14288/hosts
	I0501 02:32:37.373750    3644 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/14288
	I0501 02:32:37.394176    3644 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /etc/ssl/certs/142882.pem (1708 bytes)
	I0501 02:32:37.444521    3644 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\14288\hosts --> /etc/test/nested/copy/14288/hosts (40 bytes)
	I0501 02:32:37.496180    3644 start.go:296] duration metric: took 4.9420607s for postStartSetup
	I0501 02:32:37.496315    3644 fix.go:56] duration metric: took 52.1097537s for fixHost
	I0501 02:32:37.496363    3644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-869300 ).state
	I0501 02:32:39.632950    3644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:32:39.633853    3644 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:32:39.633853    3644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-869300 ).networkadapters[0]).ipaddresses[0]
	I0501 02:32:42.225708    3644 main.go:141] libmachine: [stdout =====>] : 172.28.218.182
	
	I0501 02:32:42.225708    3644 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:32:42.236019    3644 main.go:141] libmachine: Using SSH client type: native
	I0501 02:32:42.236895    3644 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.218.182 22 <nil> <nil>}
	I0501 02:32:42.236895    3644 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0501 02:32:42.373227    3644 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714530762.375201390
	
	I0501 02:32:42.373316    3644 fix.go:216] guest clock: 1714530762.375201390
	I0501 02:32:42.373380    3644 fix.go:229] Guest: 2024-05-01 02:32:42.37520139 +0000 UTC Remote: 2024-05-01 02:32:37.496315 +0000 UTC m=+57.903574801 (delta=4.87888639s)
	I0501 02:32:42.373474    3644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-869300 ).state
	I0501 02:32:44.488663    3644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:32:44.488663    3644 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:32:44.489187    3644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-869300 ).networkadapters[0]).ipaddresses[0]
	I0501 02:32:47.086335    3644 main.go:141] libmachine: [stdout =====>] : 172.28.218.182
	
	I0501 02:32:47.086487    3644 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:32:47.092817    3644 main.go:141] libmachine: Using SSH client type: native
	I0501 02:32:47.092941    3644 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.218.182 22 <nil> <nil>}
	I0501 02:32:47.092941    3644 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714530762
	I0501 02:32:47.238494    3644 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed May  1 02:32:42 UTC 2024
	
	I0501 02:32:47.238494    3644 fix.go:236] clock set: Wed May  1 02:32:42 UTC 2024
	 (err=<nil>)
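
The clock fix above compares the guest's `date +%s.%N` output against the host clock and, seeing a roughly 4.9s delta, pushes the host time into the guest with `sudo date -s @<epoch>`. A sketch of that decision; the drift threshold and helper name are assumptions, not minikube's exact values:

    package sketch

    import (
        "fmt"
        "time"
    )

    // fixClockCmd returns the command to reset the guest clock when the
    // absolute guest/host drift exceeds max, else reports nothing to do.
    func fixClockCmd(guest, host time.Time, max time.Duration) (string, bool) {
        drift := guest.Sub(host)
        if drift < 0 {
            drift = -drift
        }
        if drift <= max {
            return "", false // close enough, leave the guest clock alone
        }
        return fmt.Sprintf("sudo date -s @%d", host.Unix()), true
    }
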
	I0501 02:32:47.238494    3644 start.go:83] releasing machines lock for "functional-869300", held for 1m1.8519018s
	I0501 02:32:47.239037    3644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-869300 ).state
	I0501 02:32:49.408696    3644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:32:49.409025    3644 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:32:49.409106    3644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-869300 ).networkadapters[0]).ipaddresses[0]
	I0501 02:32:52.035473    3644 main.go:141] libmachine: [stdout =====>] : 172.28.218.182
	
	I0501 02:32:52.035473    3644 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:32:52.039913    3644 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 02:32:52.039913    3644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-869300 ).state
	I0501 02:32:52.051932    3644 ssh_runner.go:195] Run: cat /version.json
	I0501 02:32:52.051932    3644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-869300 ).state
	I0501 02:32:54.294411    3644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:32:54.294649    3644 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:32:54.294759    3644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-869300 ).networkadapters[0]).ipaddresses[0]
	I0501 02:32:54.295009    3644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:32:54.295056    3644 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:32:54.295056    3644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-869300 ).networkadapters[0]).ipaddresses[0]
	I0501 02:32:57.023596    3644 main.go:141] libmachine: [stdout =====>] : 172.28.218.182
	
	I0501 02:32:57.024263    3644 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:32:57.025030    3644 sshutil.go:53] new ssh client: &{IP:172.28.218.182 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-869300\id_rsa Username:docker}
	I0501 02:32:57.054741    3644 main.go:141] libmachine: [stdout =====>] : 172.28.218.182
	
	I0501 02:32:57.054741    3644 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:32:57.054741    3644 sshutil.go:53] new ssh client: &{IP:172.28.218.182 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-869300\id_rsa Username:docker}
	I0501 02:32:57.121967    3644 command_runner.go:130] > {"iso_version": "v1.33.0-1714498396-18779", "kicbase_version": "v0.0.43-1714386659-18769", "minikube_version": "v1.33.0", "commit": "0c7995ab2d4914d5c74027eee5f5d102e19316f2"}
	I0501 02:32:57.122047    3644 ssh_runner.go:235] Completed: cat /version.json: (5.0700777s)
	I0501 02:32:57.136167    3644 ssh_runner.go:195] Run: systemctl --version
	I0501 02:32:57.199853    3644 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0501 02:32:57.199853    3644 command_runner.go:130] > systemd 252 (252)
	I0501 02:32:57.199853    3644 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1599017s)
	I0501 02:32:57.199969    3644 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0501 02:32:57.214386    3644 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0501 02:32:57.224415    3644 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0501 02:32:57.225719    3644 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 02:32:57.240504    3644 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 02:32:57.260999    3644 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0501 02:32:57.260999    3644 start.go:494] detecting cgroup driver to use...
	I0501 02:32:57.260999    3644 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 02:32:57.303805    3644 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0501 02:32:57.318474    3644 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0501 02:32:57.356432    3644 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0501 02:32:57.381068    3644 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0501 02:32:57.394936    3644 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0501 02:32:57.439372    3644 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 02:32:57.474736    3644 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0501 02:32:57.510101    3644 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 02:32:57.549048    3644 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 02:32:57.588466    3644 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0501 02:32:57.630322    3644 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0501 02:32:57.673134    3644 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0501 02:32:57.708986    3644 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 02:32:57.730715    3644 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0501 02:32:57.743749    3644 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 02:32:57.780244    3644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:32:58.105557    3644 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0501 02:32:58.149548    3644 start.go:494] detecting cgroup driver to use...
	I0501 02:32:58.162265    3644 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0501 02:32:58.188269    3644 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0501 02:32:58.189370    3644 command_runner.go:130] > [Unit]
	I0501 02:32:58.189447    3644 command_runner.go:130] > Description=Docker Application Container Engine
	I0501 02:32:58.189447    3644 command_runner.go:130] > Documentation=https://docs.docker.com
	I0501 02:32:58.189447    3644 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0501 02:32:58.189507    3644 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0501 02:32:58.189507    3644 command_runner.go:130] > StartLimitBurst=3
	I0501 02:32:58.189507    3644 command_runner.go:130] > StartLimitIntervalSec=60
	I0501 02:32:58.189507    3644 command_runner.go:130] > [Service]
	I0501 02:32:58.189507    3644 command_runner.go:130] > Type=notify
	I0501 02:32:58.189507    3644 command_runner.go:130] > Restart=on-failure
	I0501 02:32:58.189566    3644 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0501 02:32:58.189586    3644 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0501 02:32:58.189586    3644 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0501 02:32:58.189586    3644 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0501 02:32:58.189586    3644 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0501 02:32:58.189647    3644 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0501 02:32:58.189647    3644 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0501 02:32:58.189725    3644 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0501 02:32:58.189725    3644 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0501 02:32:58.189725    3644 command_runner.go:130] > ExecStart=
	I0501 02:32:58.189725    3644 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0501 02:32:58.189792    3644 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0501 02:32:58.189792    3644 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0501 02:32:58.189822    3644 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0501 02:32:58.189822    3644 command_runner.go:130] > LimitNOFILE=infinity
	I0501 02:32:58.189822    3644 command_runner.go:130] > LimitNPROC=infinity
	I0501 02:32:58.189822    3644 command_runner.go:130] > LimitCORE=infinity
	I0501 02:32:58.189822    3644 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0501 02:32:58.189822    3644 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0501 02:32:58.189893    3644 command_runner.go:130] > TasksMax=infinity
	I0501 02:32:58.189893    3644 command_runner.go:130] > TimeoutStartSec=0
	I0501 02:32:58.189893    3644 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0501 02:32:58.189893    3644 command_runner.go:130] > Delegate=yes
	I0501 02:32:58.189893    3644 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0501 02:32:58.189958    3644 command_runner.go:130] > KillMode=process
	I0501 02:32:58.189958    3644 command_runner.go:130] > [Install]
	I0501 02:32:58.189985    3644 command_runner.go:130] > WantedBy=multi-user.target
	I0501 02:32:58.204025    3644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 02:32:58.244821    3644 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 02:32:58.300638    3644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 02:32:58.353323    3644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 02:32:58.381992    3644 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 02:32:58.422810    3644 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0501 02:32:58.437773    3644 ssh_runner.go:195] Run: which cri-dockerd
	I0501 02:32:58.444297    3644 command_runner.go:130] > /usr/bin/cri-dockerd
	I0501 02:32:58.457208    3644 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0501 02:32:58.481247    3644 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0501 02:32:58.534818    3644 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0501 02:32:58.836149    3644 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0501 02:32:59.134537    3644 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0501 02:32:59.134832    3644 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0501 02:32:59.194427    3644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:32:59.506023    3644 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0501 02:33:12.457594    3644 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.951383s)
	I0501 02:33:12.472876    3644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0501 02:33:12.525040    3644 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0501 02:33:12.592519    3644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0501 02:33:12.643560    3644 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0501 02:33:12.896342    3644 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0501 02:33:13.157269    3644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:33:13.429512    3644 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0501 02:33:13.479235    3644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0501 02:33:13.517116    3644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:33:13.766081    3644 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
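
The systemctl sequence above is order-sensitive: the socket is stopped before the service, unmasked and re-enabled, and both are restarted only after daemon-reload has picked up the new 10-cni.conf drop-in. The ordering, condensed into the command list it amounts to (a simplification; each step actually runs through minikube's ssh_runner, with is-active checks in between):

    package sketch

    // criDockerRestart is the essential order of the restart sequence above.
    var criDockerRestart = []string{
        "sudo systemctl stop cri-docker.socket",
        "sudo systemctl unmask cri-docker.socket",
        "sudo systemctl enable cri-docker.socket",
        "sudo systemctl daemon-reload",
        "sudo systemctl restart cri-docker.socket",
        "sudo systemctl restart cri-docker.service",
    }
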
	I0501 02:33:13.905650    3644 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0501 02:33:13.921117    3644 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0501 02:33:13.930474    3644 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0501 02:33:13.930623    3644 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0501 02:33:13.930710    3644 command_runner.go:130] > Device: 0,22	Inode: 1511        Links: 1
	I0501 02:33:13.930710    3644 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0501 02:33:13.930710    3644 command_runner.go:130] > Access: 2024-05-01 02:33:13.900331437 +0000
	I0501 02:33:13.930710    3644 command_runner.go:130] > Modify: 2024-05-01 02:33:13.799325015 +0000
	I0501 02:33:13.930710    3644 command_runner.go:130] > Change: 2024-05-01 02:33:13.804325333 +0000
	I0501 02:33:13.930710    3644 command_runner.go:130] >  Birth: -
	I0501 02:33:13.930710    3644 start.go:562] Will wait 60s for crictl version
	I0501 02:33:13.947040    3644 ssh_runner.go:195] Run: which crictl
	I0501 02:33:13.953611    3644 command_runner.go:130] > /usr/bin/crictl
	I0501 02:33:13.967871    3644 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 02:33:14.027011    3644 command_runner.go:130] > Version:  0.1.0
	I0501 02:33:14.027011    3644 command_runner.go:130] > RuntimeName:  docker
	I0501 02:33:14.027011    3644 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0501 02:33:14.027011    3644 command_runner.go:130] > RuntimeApiVersion:  v1
	I0501 02:33:14.027011    3644 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0501 02:33:14.038245    3644 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0501 02:33:14.073309    3644 command_runner.go:130] > 26.0.2
	I0501 02:33:14.087579    3644 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0501 02:33:14.119425    3644 command_runner.go:130] > 26.0.2
	I0501 02:33:14.125508    3644 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0501 02:33:14.125622    3644 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0501 02:33:14.129979    3644 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0501 02:33:14.129979    3644 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0501 02:33:14.129979    3644 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0501 02:33:14.129979    3644 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:88:d7:f1 Flags:up|broadcast|multicast|running}
	I0501 02:33:14.133668    3644 ip.go:210] interface addr: fe80::916c:67e8:6e10:a19b/64
	I0501 02:33:14.133726    3644 ip.go:210] interface addr: 172.28.208.1/20
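
The ip.go lines above scan the host's network interfaces for the first one whose name matches the "vEthernet (Default Switch)" prefix, then read its addresses. A minimal sketch of that search; picking the IPv4 address (172.28.208.1/20 above) out of the result is left out:

    package sketch

    import (
        "fmt"
        "net"
        "strings"
    )

    // findInterface returns the first interface whose name starts with the
    // wanted prefix, mirroring the getIPForInterface search in the log.
    func findInterface(prefix string) (*net.Interface, error) {
        ifaces, err := net.Interfaces()
        if err != nil {
            return nil, err
        }
        for i := range ifaces {
            if strings.HasPrefix(ifaces[i].Name, prefix) {
                return &ifaces[i], nil
            }
        }
        return nil, fmt.Errorf("no interface matching prefix %q", prefix)
    }
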
	I0501 02:33:14.149721    3644 ssh_runner.go:195] Run: grep 172.28.208.1	host.minikube.internal$ /etc/hosts
	I0501 02:33:14.156756    3644 command_runner.go:130] > 172.28.208.1	host.minikube.internal
	I0501 02:33:14.157143    3644 kubeadm.go:877] updating cluster {Name:functional-869300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-869300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.218.182 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0501 02:33:14.157527    3644 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0501 02:33:14.169980    3644 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0501 02:33:14.196696    3644 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0501 02:33:14.196696    3644 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0501 02:33:14.196696    3644 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0501 02:33:14.196797    3644 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0501 02:33:14.196797    3644 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0501 02:33:14.196797    3644 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0501 02:33:14.196797    3644 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0501 02:33:14.196797    3644 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 02:33:14.196872    3644 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0501 02:33:14.196872    3644 docker.go:615] Images already preloaded, skipping extraction
	I0501 02:33:14.208516    3644 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0501 02:33:14.234438    3644 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0501 02:33:14.234438    3644 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0501 02:33:14.234438    3644 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0501 02:33:14.234438    3644 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0501 02:33:14.234438    3644 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0501 02:33:14.234438    3644 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0501 02:33:14.234583    3644 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0501 02:33:14.234583    3644 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 02:33:14.234583    3644 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0501 02:33:14.234671    3644 cache_images.go:84] Images are preloaded, skipping loading
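
Both `docker images` listings above return the full expected set, so extraction from the preload tarball is skipped. The check reduces to set containment, sketched here (function name assumed):

    package sketch

    // preloaded reports whether every required image is already in the list
    // returned by `docker images --format {{.Repository}}:{{.Tag}}`, which is
    // why the log can say "Images are preloaded, skipping loading".
    func preloaded(want, got []string) bool {
        have := make(map[string]bool, len(got))
        for _, img := range got {
            have[img] = true
        }
        for _, img := range want {
            if !have[img] {
                return false
            }
        }
        return true
    }
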
	I0501 02:33:14.234671    3644 kubeadm.go:928] updating node { 172.28.218.182 8441 v1.30.0 docker true true} ...
	I0501 02:33:14.234863    3644 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-869300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.218.182
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:functional-869300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 02:33:14.245871    3644 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0501 02:33:14.283323    3644 command_runner.go:130] > cgroupfs
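
The `cgroupfs` answer above comes from asking the Docker daemon directly; the same value must later appear as cgroupDriver in the kubelet configuration, or kubelet and the runtime would disagree about cgroup ownership. A sketch of the probe, assuming a docker CLI on PATH:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same probe as the logged command: docker info --format {{.CgroupDriver}}
        out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
        if err != nil {
            panic(err)
        }
        fmt.Println(strings.TrimSpace(string(out))) // e.g. "cgroupfs"
    }
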
	I0501 02:33:14.284885    3644 cni.go:84] Creating CNI manager for ""
	I0501 02:33:14.284885    3644 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0501 02:33:14.284965    3644 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0501 02:33:14.284965    3644 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.28.218.182 APIServerPort:8441 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-869300 NodeName:functional-869300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.28.218.182"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.28.218.182 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0501 02:33:14.285285    3644 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.28.218.182
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-869300"
	  kubeletExtraArgs:
	    node-ip: 172.28.218.182
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.28.218.182"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
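The generated kubeadm config above is one YAML stream holding four documents separated by `---`: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A hedged sketch of sanity-checking such a stream before copying it to the node, using the third-party gopkg.in/yaml.v3 package (the file name and the check itself are assumptions for illustration):

    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("kubeadm.yaml") // path is an assumption for the sketch
        if err != nil {
            panic(err)
        }
        defer f.Close()

        // yaml.NewDecoder walks a multi-document stream one document at a time.
        dec := yaml.NewDecoder(f)
        for i := 1; ; i++ {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                panic(fmt.Sprintf("document %d is not valid YAML: %v", i, err))
            }
            if doc.Kind == "" || doc.APIVersion == "" {
                fmt.Printf("document %d is missing kind/apiVersion\n", i)
                continue
            }
            fmt.Printf("document %d: %s/%s\n", i, doc.APIVersion, doc.Kind)
        }
    }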
	I0501 02:33:14.299147    3644 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 02:33:14.326881    3644 command_runner.go:130] > kubeadm
	I0501 02:33:14.326881    3644 command_runner.go:130] > kubectl
	I0501 02:33:14.326881    3644 command_runner.go:130] > kubelet
	I0501 02:33:14.326881    3644 binaries.go:44] Found k8s binaries, skipping transfer
	I0501 02:33:14.341388    3644 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0501 02:33:14.368559    3644 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0501 02:33:14.407273    3644 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 02:33:14.444422    3644 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
	I0501 02:33:14.499329    3644 ssh_runner.go:195] Run: grep 172.28.218.182	control-plane.minikube.internal$ /etc/hosts
	I0501 02:33:14.505576    3644 command_runner.go:130] > 172.28.218.182	control-plane.minikube.internal
	I0501 02:33:14.520614    3644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:33:14.793544    3644 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 02:33:14.864910    3644 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300 for IP: 172.28.218.182
	I0501 02:33:14.864910    3644 certs.go:194] generating shared ca certs ...
	I0501 02:33:14.865171    3644 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:33:14.866019    3644 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0501 02:33:14.866019    3644 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0501 02:33:14.866668    3644 certs.go:256] generating profile certs ...
	I0501 02:33:14.867401    3644 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\client.key
	I0501 02:33:14.867973    3644 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\apiserver.key.0a0afa87
	I0501 02:33:14.867973    3644 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\proxy-client.key
	I0501 02:33:14.867973    3644 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0501 02:33:14.868440    3644 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0501 02:33:14.868440    3644 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0501 02:33:14.868440    3644 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0501 02:33:14.868440    3644 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0501 02:33:14.868440    3644 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0501 02:33:14.869438    3644 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0501 02:33:14.869438    3644 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0501 02:33:14.869438    3644 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem (1338 bytes)
	W0501 02:33:14.869438    3644 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288_empty.pem, impossibly tiny 0 bytes
	I0501 02:33:14.870444    3644 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0501 02:33:14.870444    3644 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0501 02:33:14.870444    3644 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0501 02:33:14.870444    3644 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0501 02:33:14.871454    3644 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem (1708 bytes)
	I0501 02:33:14.871454    3644 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /usr/share/ca-certificates/142882.pem
	I0501 02:33:14.871454    3644 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:33:14.871454    3644 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem -> /usr/share/ca-certificates/14288.pem
	I0501 02:33:14.872432    3644 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 02:33:14.943186    3644 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 02:33:14.998477    3644 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 02:33:15.065534    3644 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0501 02:33:15.129541    3644 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0501 02:33:15.187199    3644 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0501 02:33:15.253242    3644 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 02:33:15.320437    3644 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0501 02:33:15.381858    3644 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /usr/share/ca-certificates/142882.pem (1708 bytes)
	I0501 02:33:15.466895    3644 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 02:33:15.541144    3644 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem --> /usr/share/ca-certificates/14288.pem (1338 bytes)
	I0501 02:33:15.618732    3644 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0501 02:33:15.685222    3644 ssh_runner.go:195] Run: openssl version
	I0501 02:33:15.696187    3644 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0501 02:33:15.710582    3644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142882.pem && ln -fs /usr/share/ca-certificates/142882.pem /etc/ssl/certs/142882.pem"
	I0501 02:33:15.749413    3644 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142882.pem
	I0501 02:33:15.762479    3644 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May  1 02:27 /usr/share/ca-certificates/142882.pem
	I0501 02:33:15.762575    3644 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:27 /usr/share/ca-certificates/142882.pem
	I0501 02:33:15.777338    3644 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142882.pem
	I0501 02:33:15.803569    3644 command_runner.go:130] > 3ec20f2e
	I0501 02:33:15.817736    3644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142882.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 02:33:15.902133    3644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 02:33:15.940479    3644 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:33:15.949657    3644 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May  1 02:12 /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:33:15.949657    3644 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:12 /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:33:15.969380    3644 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:33:16.004016    3644 command_runner.go:130] > b5213941
	I0501 02:33:16.019313    3644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 02:33:16.067066    3644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14288.pem && ln -fs /usr/share/ca-certificates/14288.pem /etc/ssl/certs/14288.pem"
	I0501 02:33:16.112325    3644 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14288.pem
	I0501 02:33:16.122033    3644 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May  1 02:27 /usr/share/ca-certificates/14288.pem
	I0501 02:33:16.122033    3644 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:27 /usr/share/ca-certificates/14288.pem
	I0501 02:33:16.135023    3644 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14288.pem
	I0501 02:33:16.145318    3644 command_runner.go:130] > 51391683
	I0501 02:33:16.158936    3644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14288.pem /etc/ssl/certs/51391683.0"
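The hash-then-symlink pairs above implement OpenSSL's trust-store layout: `openssl x509 -hash -noout` prints the certificate's subject-name hash, and OpenSSL locates trust anchors via `/etc/ssl/certs/<hash>.0` symlinks. A sketch of one such installation step in Go (the function name is ours and paths are illustrative; run as root on the target machine):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCA creates the <subject-hash>.0 symlink OpenSSL uses to find a
    // trust anchor, mirroring the `openssl x509 -hash` + `ln -fs` pair above.
    func installCA(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // -f semantics: replace an existing link
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }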
	I0501 02:33:16.224175    3644 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 02:33:16.234223    3644 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 02:33:16.234223    3644 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0501 02:33:16.234223    3644 command_runner.go:130] > Device: 8,1	Inode: 4196178     Links: 1
	I0501 02:33:16.234223    3644 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0501 02:33:16.234223    3644 command_runner.go:130] > Access: 2024-05-01 02:30:30.496925525 +0000
	I0501 02:33:16.234223    3644 command_runner.go:130] > Modify: 2024-05-01 02:30:30.496925525 +0000
	I0501 02:33:16.234223    3644 command_runner.go:130] > Change: 2024-05-01 02:30:30.496925525 +0000
	I0501 02:33:16.234223    3644 command_runner.go:130] >  Birth: 2024-05-01 02:30:30.496925525 +0000
	I0501 02:33:16.247189    3644 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0501 02:33:16.257079    3644 command_runner.go:130] > Certificate will not expire
	I0501 02:33:16.271539    3644 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0501 02:33:16.295682    3644 command_runner.go:130] > Certificate will not expire
	I0501 02:33:16.312944    3644 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0501 02:33:16.325164    3644 command_runner.go:130] > Certificate will not expire
	I0501 02:33:16.338891    3644 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0501 02:33:16.352277    3644 command_runner.go:130] > Certificate will not expire
	I0501 02:33:16.366054    3644 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0501 02:33:16.382555    3644 command_runner.go:130] > Certificate will not expire
	I0501 02:33:16.396023    3644 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0501 02:33:16.405923    3644 command_runner.go:130] > Certificate will not expire
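`openssl x509 -checkend 86400` exits non-zero when the certificate expires within the next 86400 seconds (24 hours), which is how each "Certificate will not expire" line above is produced. The equivalent check can be done natively with crypto/x509; a minimal sketch (the file path is copied from the log, the helper is ours):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // willExpireWithin reports whether the PEM certificate at path expires
    // within d, mirroring `openssl x509 -checkend <seconds>`.
    func willExpireWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        expiring, err := willExpireWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            panic(err)
        }
        if expiring {
            fmt.Println("certificate will expire within 24h")
        } else {
            fmt.Println("certificate will not expire")
        }
    }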
	I0501 02:33:16.407215    3644 kubeadm.go:391] StartCluster: {Name:functional-869300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-869300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.218.182 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:33:16.418102    3644 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0501 02:33:16.468847    3644 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0501 02:33:16.491433    3644 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0501 02:33:16.491433    3644 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0501 02:33:16.491433    3644 command_runner.go:130] > /var/lib/minikube/etcd:
	I0501 02:33:16.491433    3644 command_runner.go:130] > member
	W0501 02:33:16.491433    3644 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0501 02:33:16.491433    3644 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0501 02:33:16.491433    3644 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0501 02:33:16.504952    3644 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0501 02:33:16.523947    3644 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0501 02:33:16.526157    3644 kubeconfig.go:125] found "functional-869300" server: "https://172.28.218.182:8441"
	I0501 02:33:16.527657    3644 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0501 02:33:16.528451    3644 kapi.go:59] client config for functional-869300: &rest.Config{Host:"https://172.28.218.182:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-869300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-869300\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1b95ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0501 02:33:16.529928    3644 cert_rotation.go:137] Starting client certificate rotation controller
	I0501 02:33:16.542701    3644 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0501 02:33:16.567273    3644 kubeadm.go:624] The running cluster does not require reconfiguration: 172.28.218.182
	I0501 02:33:16.567483    3644 kubeadm.go:1154] stopping kube-system containers ...
	I0501 02:33:16.578706    3644 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0501 02:33:16.644758    3644 command_runner.go:130] > fc83dd83e08d
	I0501 02:33:16.644828    3644 command_runner.go:130] > afe40a950042
	I0501 02:33:16.644828    3644 command_runner.go:130] > 0c0b917d01a4
	I0501 02:33:16.644828    3644 command_runner.go:130] > 1bb8467492fe
	I0501 02:33:16.644828    3644 command_runner.go:130] > 31d057ffba21
	I0501 02:33:16.644890    3644 command_runner.go:130] > 1fd4d4d2e46c
	I0501 02:33:16.644890    3644 command_runner.go:130] > 6e9245fa440a
	I0501 02:33:16.644890    3644 command_runner.go:130] > 7799ac956b9b
	I0501 02:33:16.644890    3644 command_runner.go:130] > 66881136335a
	I0501 02:33:16.644890    3644 command_runner.go:130] > a66ca1e37bf9
	I0501 02:33:16.644890    3644 command_runner.go:130] > de12b941ee69
	I0501 02:33:16.644890    3644 command_runner.go:130] > bd94a4514432
	I0501 02:33:16.644950    3644 command_runner.go:130] > e62935991875
	I0501 02:33:16.644977    3644 command_runner.go:130] > b9fc9d4d708d
	I0501 02:33:16.644977    3644 command_runner.go:130] > 4efd0272fab8
	I0501 02:33:16.644977    3644 command_runner.go:130] > 1781e8a704cb
	I0501 02:33:16.644977    3644 command_runner.go:130] > dba9dd497a4e
	I0501 02:33:16.644977    3644 command_runner.go:130] > d2bb49b6f51f
	I0501 02:33:16.644977    3644 command_runner.go:130] > cf4c49299be7
	I0501 02:33:16.644977    3644 command_runner.go:130] > cbf619f6f456
	I0501 02:33:16.645045    3644 command_runner.go:130] > e4d90b1aa9d8
	I0501 02:33:16.645045    3644 command_runner.go:130] > 57ebc687c08b
	I0501 02:33:16.645045    3644 command_runner.go:130] > 06f71551ab53
	I0501 02:33:16.645045    3644 command_runner.go:130] > 45cac4a19fe3
	I0501 02:33:16.645045    3644 command_runner.go:130] > f9394332aa0b
	I0501 02:33:16.645045    3644 command_runner.go:130] > 5cf934eabf27
	I0501 02:33:16.645106    3644 command_runner.go:130] > 582abc4c7a70
	I0501 02:33:16.645106    3644 command_runner.go:130] > f33828e81c54
	I0501 02:33:16.648238    3644 docker.go:483] Stopping containers: [fc83dd83e08d afe40a950042 0c0b917d01a4 1bb8467492fe 31d057ffba21 1fd4d4d2e46c 6e9245fa440a 7799ac956b9b 66881136335a a66ca1e37bf9 de12b941ee69 bd94a4514432 e62935991875 b9fc9d4d708d 4efd0272fab8 1781e8a704cb dba9dd497a4e d2bb49b6f51f cf4c49299be7 cbf619f6f456 e4d90b1aa9d8 57ebc687c08b 06f71551ab53 45cac4a19fe3 f9394332aa0b 5cf934eabf27 582abc4c7a70 f33828e81c54]
	I0501 02:33:16.659133    3644 ssh_runner.go:195] Run: docker stop fc83dd83e08d afe40a950042 0c0b917d01a4 1bb8467492fe 31d057ffba21 1fd4d4d2e46c 6e9245fa440a 7799ac956b9b 66881136335a a66ca1e37bf9 de12b941ee69 bd94a4514432 e62935991875 b9fc9d4d708d 4efd0272fab8 1781e8a704cb dba9dd497a4e d2bb49b6f51f cf4c49299be7 cbf619f6f456 e4d90b1aa9d8 57ebc687c08b 06f71551ab53 45cac4a19fe3 f9394332aa0b 5cf934eabf27 582abc4c7a70 f33828e81c54
	I0501 02:33:17.743584    3644 command_runner.go:130] > fc83dd83e08d
	I0501 02:33:17.743584    3644 command_runner.go:130] > afe40a950042
	I0501 02:33:17.743584    3644 command_runner.go:130] > 0c0b917d01a4
	I0501 02:33:17.743584    3644 command_runner.go:130] > 1bb8467492fe
	I0501 02:33:17.743584    3644 command_runner.go:130] > 31d057ffba21
	I0501 02:33:17.743584    3644 command_runner.go:130] > 1fd4d4d2e46c
	I0501 02:33:17.743584    3644 command_runner.go:130] > 6e9245fa440a
	I0501 02:33:17.743584    3644 command_runner.go:130] > 7799ac956b9b
	I0501 02:33:17.743584    3644 command_runner.go:130] > 66881136335a
	I0501 02:33:17.743584    3644 command_runner.go:130] > a66ca1e37bf9
	I0501 02:33:17.743584    3644 command_runner.go:130] > de12b941ee69
	I0501 02:33:17.743584    3644 command_runner.go:130] > bd94a4514432
	I0501 02:33:17.743584    3644 command_runner.go:130] > e62935991875
	I0501 02:33:17.743584    3644 command_runner.go:130] > b9fc9d4d708d
	I0501 02:33:17.743584    3644 command_runner.go:130] > 4efd0272fab8
	I0501 02:33:17.743584    3644 command_runner.go:130] > 1781e8a704cb
	I0501 02:33:17.743584    3644 command_runner.go:130] > dba9dd497a4e
	I0501 02:33:17.743584    3644 command_runner.go:130] > d2bb49b6f51f
	I0501 02:33:17.743584    3644 command_runner.go:130] > cf4c49299be7
	I0501 02:33:17.743584    3644 command_runner.go:130] > cbf619f6f456
	I0501 02:33:17.743584    3644 command_runner.go:130] > e4d90b1aa9d8
	I0501 02:33:17.743584    3644 command_runner.go:130] > 57ebc687c08b
	I0501 02:33:17.743584    3644 command_runner.go:130] > 06f71551ab53
	I0501 02:33:17.743584    3644 command_runner.go:130] > 45cac4a19fe3
	I0501 02:33:17.743584    3644 command_runner.go:130] > f9394332aa0b
	I0501 02:33:17.743584    3644 command_runner.go:130] > 5cf934eabf27
	I0501 02:33:17.743584    3644 command_runner.go:130] > 582abc4c7a70
	I0501 02:33:17.743584    3644 command_runner.go:130] > f33828e81c54
	I0501 02:33:17.743584    3644 ssh_runner.go:235] Completed: docker stop fc83dd83e08d afe40a950042 0c0b917d01a4 1bb8467492fe 31d057ffba21 1fd4d4d2e46c 6e9245fa440a 7799ac956b9b 66881136335a a66ca1e37bf9 de12b941ee69 bd94a4514432 e62935991875 b9fc9d4d708d 4efd0272fab8 1781e8a704cb dba9dd497a4e d2bb49b6f51f cf4c49299be7 cbf619f6f456 e4d90b1aa9d8 57ebc687c08b 06f71551ab53 45cac4a19fe3 f9394332aa0b 5cf934eabf27 582abc4c7a70 f33828e81c54: (1.0844431s)
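Stopping the control plane above happens in two steps: collect the IDs of every container matching the kube-system name filter, then pass the whole list to a single `docker stop`, which is why one command takes about a second and echoes all 28 IDs. A sketch of the same two-step pattern (the filter string is copied from the log; the program is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Step 1: list IDs of all containers (running or not) whose names
        // match the kube-system convention used by cri-dockerd.
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_.*_(kube-system)_",
            "--format", "{{.ID}}").Output()
        if err != nil {
            panic(err)
        }
        ids := strings.Fields(string(out))
        if len(ids) == 0 {
            fmt.Println("no kube-system containers to stop")
            return
        }
        // Step 2: stop them all with one invocation, as the log above does.
        args := append([]string{"stop"}, ids...)
        if err := exec.Command("docker", args...).Run(); err != nil {
            panic(err)
        }
        fmt.Printf("stopped %d containers\n", len(ids))
    }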
	I0501 02:33:17.756597    3644 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0501 02:33:17.863886    3644 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 02:33:17.902895    3644 command_runner.go:130] > -rw------- 1 root root 5647 May  1 02:30 /etc/kubernetes/admin.conf
	I0501 02:33:17.902895    3644 command_runner.go:130] > -rw------- 1 root root 5654 May  1 02:30 /etc/kubernetes/controller-manager.conf
	I0501 02:33:17.902895    3644 command_runner.go:130] > -rw------- 1 root root 2007 May  1 02:30 /etc/kubernetes/kubelet.conf
	I0501 02:33:17.903012    3644 command_runner.go:130] > -rw------- 1 root root 5606 May  1 02:30 /etc/kubernetes/scheduler.conf
	I0501 02:33:17.903779    3644 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5647 May  1 02:30 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5654 May  1 02:30 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 May  1 02:30 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 May  1 02:30 /etc/kubernetes/scheduler.conf
	
	I0501 02:33:17.919488    3644 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0501 02:33:17.950490    3644 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I0501 02:33:17.964479    3644 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0501 02:33:17.983503    3644 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I0501 02:33:17.996499    3644 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0501 02:33:18.023210    3644 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0501 02:33:18.036975    3644 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 02:33:18.081168    3644 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0501 02:33:18.102899    3644 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0501 02:33:18.119447    3644 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
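The grep-and-remove sequence above decides, per kubeconfig file, whether it already points at `https://control-plane.minikube.internal:8441`; files that instead carry a stale server address (controller-manager.conf and scheduler.conf here) are deleted so the next kubeadm phase regenerates them. A sketch of that keep-or-delete decision (paths and endpoint copied from the log):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8441"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil {
                fmt.Println("skipping:", err)
                continue
            }
            if strings.Contains(string(data), endpoint) {
                fmt.Println("keeping", f) // already points at the control-plane endpoint
                continue
            }
            // Stale server address: remove so kubeadm regenerates the file.
            fmt.Println("removing", f)
            if err := os.Remove(f); err != nil {
                fmt.Println("remove failed:", err)
            }
        }
    }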
	I0501 02:33:18.159440    3644 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 02:33:18.186178    3644 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 02:33:18.321278    3644 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0501 02:33:18.321356    3644 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0501 02:33:18.321356    3644 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0501 02:33:18.321356    3644 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0501 02:33:18.321356    3644 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0501 02:33:18.321356    3644 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0501 02:33:18.321356    3644 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0501 02:33:18.321356    3644 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0501 02:33:18.321435    3644 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0501 02:33:18.321482    3644 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0501 02:33:18.321482    3644 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0501 02:33:18.321482    3644 command_runner.go:130] > [certs] Using the existing "sa" key
	I0501 02:33:18.321694    3644 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 02:33:20.412685    3644 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0501 02:33:20.412753    3644 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I0501 02:33:20.412753    3644 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/super-admin.conf"
	I0501 02:33:20.412753    3644 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
	I0501 02:33:20.412753    3644 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0501 02:33:20.412753    3644 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0501 02:33:20.412753    3644 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.0910435s)
	I0501 02:33:20.412753    3644 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0501 02:33:20.530105    3644 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0501 02:33:20.530999    3644 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0501 02:33:20.531540    3644 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0501 02:33:20.811479    3644 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 02:33:20.917404    3644 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0501 02:33:20.917482    3644 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0501 02:33:20.917482    3644 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0501 02:33:20.917482    3644 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0501 02:33:20.917546    3644 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0501 02:33:21.044377    3644 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
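Rather than a full `kubeadm init`, the restart path replays individual init phases against the same config, in dependency order: certs, kubeconfig, kubelet-start, control-plane, etcd. A sketch of driving those phases locally (the phase list is copied from the log; PATH handling and error handling are simplified):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Phases replayed by the restart path, in the order the log runs them.
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, p := range phases {
            args := append([]string{"init", "phase"}, p...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            cmd := exec.Command("kubeadm", args...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            fmt.Println("running: kubeadm", args)
            if err := cmd.Run(); err != nil {
                panic(err) // abort on the first failed phase
            }
        }
    }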
	I0501 02:33:21.044912    3644 api_server.go:52] waiting for apiserver process to appear ...
	I0501 02:33:21.059858    3644 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:33:21.565144    3644 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:33:22.058563    3644 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:33:22.572002    3644 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:33:23.076091    3644 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:33:23.104067    3644 command_runner.go:130] > 6110
	I0501 02:33:23.104168    3644 api_server.go:72] duration metric: took 2.0591808s to wait for apiserver process to appear ...
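The wait for the apiserver process above is a poll of `pgrep -xnf kube-apiserver.*minikube.*` roughly every 500ms until a PID comes back (6110 here). A sketch of the loop (pattern and interval copied from the log; the timeout value is an assumption):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute) // timeout is illustrative
        for time.Now().Before(deadline) {
            // pgrep exits non-zero while nothing matches, so treat the error
            // as "not yet" and keep polling.
            out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
            if err == nil {
                fmt.Println("apiserver pid:", strings.TrimSpace(string(out)))
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for apiserver process")
    }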
	I0501 02:33:23.104168    3644 api_server.go:88] waiting for apiserver healthz status ...
	I0501 02:33:23.104243    3644 api_server.go:253] Checking apiserver healthz at https://172.28.218.182:8441/healthz ...
	I0501 02:33:25.681639    3644 api_server.go:279] https://172.28.218.182:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0501 02:33:25.682495    3644 api_server.go:103] status: https://172.28.218.182:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0501 02:33:25.682495    3644 api_server.go:253] Checking apiserver healthz at https://172.28.218.182:8441/healthz ...
	I0501 02:33:25.797702    3644 api_server.go:279] https://172.28.218.182:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 02:33:25.797788    3644 api_server.go:103] status: https://172.28.218.182:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 02:33:26.106347    3644 api_server.go:253] Checking apiserver healthz at https://172.28.218.182:8441/healthz ...
	I0501 02:33:26.117355    3644 api_server.go:279] https://172.28.218.182:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 02:33:26.117355    3644 api_server.go:103] status: https://172.28.218.182:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 02:33:26.612783    3644 api_server.go:253] Checking apiserver healthz at https://172.28.218.182:8441/healthz ...
	I0501 02:33:26.628572    3644 api_server.go:279] https://172.28.218.182:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 02:33:26.628572    3644 api_server.go:103] status: https://172.28.218.182:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 02:33:27.104594    3644 api_server.go:253] Checking apiserver healthz at https://172.28.218.182:8441/healthz ...
	I0501 02:33:27.129363    3644 api_server.go:279] https://172.28.218.182:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 02:33:27.129475    3644 api_server.go:103] status: https://172.28.218.182:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 02:33:27.611920    3644 api_server.go:253] Checking apiserver healthz at https://172.28.218.182:8441/healthz ...
	I0501 02:33:27.623118    3644 api_server.go:279] https://172.28.218.182:8441/healthz returned 200:
	ok
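The healthz wait above tolerates the transient answers an apiserver gives while starting: 403 for the anonymous probe before RBAC bootstrap roles exist, then 500 while post-start hooks (`rbac/bootstrap-roles`, `scheduling/bootstrap-system-priority-classes`) are still pending, and finally 200 with body `ok`. A minimal sketch of such a probe; it skips TLS verification for brevity, whereas the real client trusts the cluster CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Sketch only: skip verification instead of loading the cluster CA.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        url := "https://172.28.218.182:8441/healthz"
        deadline := time.Now().Add(4 * time.Minute) // timeout is illustrative
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("healthz:", string(body)) // "ok"
                    return
                }
                // 403 (anonymous forbidden) and 500 (hooks pending) are
                // expected while the apiserver is still coming up; retry.
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for healthz")
    }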
	I0501 02:33:27.623383    3644 round_trippers.go:463] GET https://172.28.218.182:8441/version
	I0501 02:33:27.623459    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:27.623459    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:27.623459    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:27.634938    3644 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0501 02:33:27.635926    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:27.635926    3644 round_trippers.go:580]     Content-Length: 263
	I0501 02:33:27.635926    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:27 GMT
	I0501 02:33:27.635926    3644 round_trippers.go:580]     Audit-Id: 3470d940-25a7-47fc-9341-fa11ea48816a
	I0501 02:33:27.635926    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:27.635926    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:27.635926    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:27.635926    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:27.635926    3644 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0501 02:33:27.635926    3644 api_server.go:141] control plane version: v1.30.0
	I0501 02:33:27.635926    3644 api_server.go:131] duration metric: took 4.5316885s to wait for apiserver health ...
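With healthz green, the control-plane version comes from GET /version, whose JSON body appears above; decoding it needs only a small struct. A sketch, with the same TLS caveat as the previous snippet:

    package main

    import (
        "crypto/tls"
        "encoding/json"
        "fmt"
        "net/http"
    )

    // versionInfo mirrors a few fields of the /version response shown in the log.
    type versionInfo struct {
        Major      string `json:"major"`
        Minor      string `json:"minor"`
        GitVersion string `json:"gitVersion"`
        Platform   string `json:"platform"`
    }

    func main() {
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
        }}
        resp, err := client.Get("https://172.28.218.182:8441/version")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        var v versionInfo
        if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
            panic(err)
        }
        fmt.Println("control plane version:", v.GitVersion) // v1.30.0
    }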
	I0501 02:33:27.635926    3644 cni.go:84] Creating CNI manager for ""
	I0501 02:33:27.635926    3644 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0501 02:33:27.639934    3644 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0501 02:33:27.658942    3644 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0501 02:33:27.680045    3644 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0501 02:33:27.718730    3644 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 02:33:27.720001    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/namespaces/kube-system/pods
	I0501 02:33:27.720001    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:27.720001    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:27.720001    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:27.740731    3644 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0501 02:33:27.741770    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:27.741770    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:27.741770    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:27.741770    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:27 GMT
	I0501 02:33:27.741770    3644 round_trippers.go:580]     Audit-Id: fa92e699-cdfd-4fd9-bd6f-4c6a96214c68
	I0501 02:33:27.741770    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:27.741770    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:27.743484    3644 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"598"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-grgws","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2cb6073b-581f-4a8c-ad4d-03ab7adb3bd7","resourceVersion":"551","creationTimestamp":"2024-05-01T02:30:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"cb5d9ecf-889a-47fa-9682-5b3b356aab5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T02:30:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cb5d9ecf-889a-47fa-9682-5b3b356aab5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 52568 chars]
	I0501 02:33:27.748279    3644 system_pods.go:59] 7 kube-system pods found
	I0501 02:33:27.749228    3644 system_pods.go:61] "coredns-7db6d8ff4d-grgws" [2cb6073b-581f-4a8c-ad4d-03ab7adb3bd7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0501 02:33:27.749228    3644 system_pods.go:61] "etcd-functional-869300" [92c3081c-f2d2-456b-b008-17e3a3fa0bca] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0501 02:33:27.749228    3644 system_pods.go:61] "kube-apiserver-functional-869300" [26b992bd-47b9-458e-a683-a136e4e028eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0501 02:33:27.749228    3644 system_pods.go:61] "kube-controller-manager-functional-869300" [a58b04e9-38b0-4af3-821a-2a04476a138a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0501 02:33:27.749228    3644 system_pods.go:61] "kube-proxy-nm4lg" [0488ff0b-d57b-4955-9562-06da35c1d8c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0501 02:33:27.749228    3644 system_pods.go:61] "kube-scheduler-functional-869300" [f14921a5-1739-4cf2-a4ef-e06560da308a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0501 02:33:27.749228    3644 system_pods.go:61] "storage-provisioner" [3400f4a7-b325-4236-a464-0c0c871fd3b7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0501 02:33:27.749228    3644 system_pods.go:74] duration metric: took 29.4283ms to wait for pod list to return data ...
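The pod wait lists everything in kube-system and derives the "Running / Ready:ContainersNotReady" summaries above from each pod's phase and conditions. A sketch of the same listing with client-go (the kubeconfig path is an assumption):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumption
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            ready := false
            for _, c := range p.Status.Conditions {
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    ready = true
                }
            }
            fmt.Printf("%q phase=%s ready=%v\n", p.Name, p.Status.Phase, ready)
        }
    }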
	I0501 02:33:27.749228    3644 node_conditions.go:102] verifying NodePressure condition ...
	I0501 02:33:27.749228    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/nodes
	I0501 02:33:27.749228    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:27.749228    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:27.749228    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:27.753233    3644 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:33:27.753233    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:27.753233    3644 round_trippers.go:580]     Audit-Id: 34ae59e2-76d3-4e8d-8d33-bbfe196e9d12
	I0501 02:33:27.753233    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:27.753233    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:27.753993    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:27.753993    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:27.753993    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:27 GMT
	I0501 02:33:27.756234    3644 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"599"},"items":[{"metadata":{"name":"functional-869300","uid":"72db872b-6e69-4996-8f34-60721369e151","resourceVersion":"542","creationTimestamp":"2024-05-01T02:30:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-869300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"functional-869300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T02_30_44_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedF
ields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4841 chars]
	I0501 02:33:27.758072    3644 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 02:33:27.758072    3644 node_conditions.go:123] node cpu capacity is 2
	I0501 02:33:27.758072    3644 node_conditions.go:105] duration metric: took 8.844ms to run NodePressure ...
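The NodePressure verification above reduces to one GET of the node list and a read of two capacity fields, the same values the log reports (ephemeral-storage 17734596Ki, cpu 2). A hedged sketch of that read; printNodeCapacity is a hypothetical helper name, and cs is a clientset built as in the previous sketch:

    // printNodeCapacity lists nodes and prints the capacity fields the
    // NodePressure check reads. Sketch only; not minikube's node_conditions.go.
    func printNodeCapacity(cs *kubernetes.Clientset) error {
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
        }
        return nil
    }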
	I0501 02:33:27.758072    3644 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 02:33:28.160987    3644 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0501 02:33:28.161569    3644 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
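After the API checks pass, minikube re-runs the addon phase of kubeadm inside the guest so the CoreDNS and kube-proxy manifests are re-applied. A sketch of the same invocation via os/exec, assuming the usual imports (fmt, log, os/exec); this is an illustrative local stand-in, since the real run goes through minikube's ssh_runner over SSH into the VM:

    // Run the addon phase the way the ssh_runner line above shows.
    // Local stand-in only; minikube executes this inside the guest.
    cmd := exec.Command("/bin/bash", "-c",
        `sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml`)
    out, err := cmd.CombinedOutput()
    if err != nil {
        log.Fatalf("kubeadm addon phase failed: %v\n%s", err, out)
    }
    fmt.Printf("%s", out) // expect "[addons] Applied essential addon: CoreDNS" and kube-proxy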
	I0501 02:33:28.161638    3644 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0501 02:33:28.161807    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0501 02:33:28.161878    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:28.161878    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:28.161878    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:28.170003    3644 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0501 02:33:28.170003    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:28.170003    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:28.170003    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:28 GMT
	I0501 02:33:28.170003    3644 round_trippers.go:580]     Audit-Id: e6791dcb-baa9-46b8-ad3c-9ead31924f7f
	I0501 02:33:28.170003    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:28.170003    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:28.170003    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:28.170923    3644 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"601"},"items":[{"metadata":{"name":"etcd-functional-869300","namespace":"kube-system","uid":"92c3081c-f2d2-456b-b008-17e3a3fa0bca","resourceVersion":"554","creationTimestamp":"2024-05-01T02:30:43Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.218.182:2379","kubernetes.io/config.hash":"5e8bc183cc5ce96979868056f3c9b727","kubernetes.io/config.mirror":"5e8bc183cc5ce96979868056f3c9b727","kubernetes.io/config.seen":"2024-05-01T02:30:43.476925196Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-869300","uid":"72db872b-6e69-4996-8f34-60721369e151","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T02:30:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f [truncated 31738 chars]
	I0501 02:33:28.171927    3644 kubeadm.go:733] kubelet initialised
	I0501 02:33:28.171927    3644 kubeadm.go:734] duration metric: took 10.2883ms waiting for restarted kubelet to initialise ...
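The restarted-kubelet check is a single list of the static control-plane pods; the labelSelector=tier%3Dcontrol-plane in the request URL is just the URL-encoded form of tier=control-plane. A client-go sketch of the same query (listControlPlanePods is a hypothetical function name; cs is built as in the first sketch):

    // listControlPlanePods mirrors the labelSelector query in the log:
    // only pods carrying the tier=control-plane label are returned.
    func listControlPlanePods(cs *kubernetes.Clientset) error {
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{
            LabelSelector: "tier=control-plane", // sent as tier%3Dcontrol-plane on the wire
        })
        if err != nil {
            return err
        }
        for _, p := range pods.Items {
            fmt.Println(p.Name) // etcd-..., kube-apiserver-..., kube-scheduler-..., etc.
        }
        return nil
    }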
	I0501 02:33:28.171927    3644 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 02:33:28.171927    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/namespaces/kube-system/pods
	I0501 02:33:28.172925    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:28.172925    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:28.172925    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:28.176957    3644 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:33:28.176957    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:28.176957    3644 round_trippers.go:580]     Audit-Id: 0cd489d7-5594-4874-94b7-7e0984e2d500
	I0501 02:33:28.176957    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:28.176957    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:28.176957    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:28.176957    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:28.176957    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:28 GMT
	I0501 02:33:28.177929    3644 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"601"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-grgws","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2cb6073b-581f-4a8c-ad4d-03ab7adb3bd7","resourceVersion":"551","creationTimestamp":"2024-05-01T02:30:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"cb5d9ecf-889a-47fa-9682-5b3b356aab5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T02:30:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cb5d9ecf-889a-47fa-9682-5b3b356aab5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 52568 chars]
	I0501 02:33:28.180924    3644 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-grgws" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:28.180924    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-grgws
	I0501 02:33:28.180924    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:28.180924    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:28.180924    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:28.198932    3644 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0501 02:33:28.199972    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:28.199972    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:28.199972    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:28.199972    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:28 GMT
	I0501 02:33:28.199972    3644 round_trippers.go:580]     Audit-Id: c36ef7d7-57a7-4df3-a6f4-cd3727801205
	I0501 02:33:28.199972    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:28.199972    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:28.200522    3644 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-grgws","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2cb6073b-581f-4a8c-ad4d-03ab7adb3bd7","resourceVersion":"551","creationTimestamp":"2024-05-01T02:30:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"cb5d9ecf-889a-47fa-9682-5b3b356aab5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T02:30:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cb5d9ecf-889a-47fa-9682-5b3b356aab5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6505 chars]
	I0501 02:33:28.201508    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/nodes/functional-869300
	I0501 02:33:28.201577    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:28.201577    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:28.201577    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:28.206004    3644 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:33:28.206004    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:28.206004    3644 round_trippers.go:580]     Audit-Id: 43cb4214-f40a-4f81-820a-b85da39464a7
	I0501 02:33:28.206004    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:28.206277    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:28.206277    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:28.206277    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:28.206277    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:28 GMT
	I0501 02:33:28.206669    3644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-869300","uid":"72db872b-6e69-4996-8f34-60721369e151","resourceVersion":"542","creationTimestamp":"2024-05-01T02:30:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-869300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"functional-869300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T02_30_44_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-01T02:30:40Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0501 02:33:28.683953    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-grgws
	I0501 02:33:28.683953    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:28.683953    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:28.683953    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:28.688537    3644 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:33:28.688667    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:28.688667    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:28.688667    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:28.688667    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:28.688667    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:28.688667    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:28 GMT
	I0501 02:33:28.688667    3644 round_trippers.go:580]     Audit-Id: c5fe5b73-7350-487e-b678-608bdbc0892e
	I0501 02:33:28.688667    3644 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-grgws","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2cb6073b-581f-4a8c-ad4d-03ab7adb3bd7","resourceVersion":"603","creationTimestamp":"2024-05-01T02:30:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"cb5d9ecf-889a-47fa-9682-5b3b356aab5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T02:30:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cb5d9ecf-889a-47fa-9682-5b3b356aab5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0501 02:33:28.689709    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/nodes/functional-869300
	I0501 02:33:28.689785    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:28.689785    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:28.689785    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:28.695247    3644 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:33:28.695247    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:28.695247    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:28.695247    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:28.695247    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:28.695247    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:28 GMT
	I0501 02:33:28.695247    3644 round_trippers.go:580]     Audit-Id: 771e3474-94da-490b-8cd0-bc13e15db54e
	I0501 02:33:28.695247    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:28.695795    3644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-869300","uid":"72db872b-6e69-4996-8f34-60721369e151","resourceVersion":"542","creationTimestamp":"2024-05-01T02:30:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-869300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"functional-869300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T02_30_44_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-01T02:30:40Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0501 02:33:29.182396    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-grgws
	I0501 02:33:29.182396    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:29.182474    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:29.182474    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:29.186743    3644 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:33:29.187256    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:29.187256    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:29.187256    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:29.187256    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:29.187256    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:29.187256    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:29 GMT
	I0501 02:33:29.187256    3644 round_trippers.go:580]     Audit-Id: 4fb675b8-6881-48c9-9be1-fcbdfe5aaabd
	I0501 02:33:29.187484    3644 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-grgws","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2cb6073b-581f-4a8c-ad4d-03ab7adb3bd7","resourceVersion":"603","creationTimestamp":"2024-05-01T02:30:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"cb5d9ecf-889a-47fa-9682-5b3b356aab5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T02:30:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cb5d9ecf-889a-47fa-9682-5b3b356aab5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0501 02:33:29.188267    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/nodes/functional-869300
	I0501 02:33:29.188350    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:29.188350    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:29.188350    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:29.191190    3644 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:33:29.191190    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:29.191190    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:29 GMT
	I0501 02:33:29.192092    3644 round_trippers.go:580]     Audit-Id: 0dc20b0e-732d-4b5f-94c5-40db8d301635
	I0501 02:33:29.192092    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:29.192092    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:29.192092    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:29.192092    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:29.192457    3644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-869300","uid":"72db872b-6e69-4996-8f34-60721369e151","resourceVersion":"542","creationTimestamp":"2024-05-01T02:30:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-869300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"functional-869300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T02_30_44_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-01T02:30:40Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0501 02:33:29.684246    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-grgws
	I0501 02:33:29.684341    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:29.684341    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:29.684341    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:29.692367    3644 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0501 02:33:29.692367    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:29.692367    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:29.692367    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:29.692367    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:29.692367    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:29 GMT
	I0501 02:33:29.692367    3644 round_trippers.go:580]     Audit-Id: ca2d1f38-f84c-49ca-bd4b-8577b8de467b
	I0501 02:33:29.692367    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:29.692367    3644 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-grgws","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2cb6073b-581f-4a8c-ad4d-03ab7adb3bd7","resourceVersion":"603","creationTimestamp":"2024-05-01T02:30:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"cb5d9ecf-889a-47fa-9682-5b3b356aab5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T02:30:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cb5d9ecf-889a-47fa-9682-5b3b356aab5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0501 02:33:29.693150    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/nodes/functional-869300
	I0501 02:33:29.693150    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:29.693150    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:29.693150    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:29.696079    3644 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:33:29.696079    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:29.696079    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:29.696079    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:29.696079    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:29 GMT
	I0501 02:33:29.696079    3644 round_trippers.go:580]     Audit-Id: c3e4e5bd-9857-4f70-b775-e0a882c22cbd
	I0501 02:33:29.696079    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:29.696079    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:29.697213    3644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-869300","uid":"72db872b-6e69-4996-8f34-60721369e151","resourceVersion":"542","creationTimestamp":"2024-05-01T02:30:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-869300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"functional-869300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T02_30_44_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-01T02:30:40Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0501 02:33:30.196329    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-grgws
	I0501 02:33:30.196633    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:30.196633    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:30.196633    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:30.201117    3644 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:33:30.201117    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:30.201117    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:30.201117    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:30.201117    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:30.201278    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:30.201278    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:30 GMT
	I0501 02:33:30.201278    3644 round_trippers.go:580]     Audit-Id: 389a59a1-37ef-4adc-96de-97761cb02dff
	I0501 02:33:30.201552    3644 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-grgws","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2cb6073b-581f-4a8c-ad4d-03ab7adb3bd7","resourceVersion":"603","creationTimestamp":"2024-05-01T02:30:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"cb5d9ecf-889a-47fa-9682-5b3b356aab5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T02:30:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cb5d9ecf-889a-47fa-9682-5b3b356aab5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0501 02:33:30.202581    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/nodes/functional-869300
	I0501 02:33:30.202637    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:30.202637    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:30.202637    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:30.205577    3644 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:33:30.205577    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:30.205577    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:30.205577    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:30.205577    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:30.205577    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:30 GMT
	I0501 02:33:30.205577    3644 round_trippers.go:580]     Audit-Id: a7b77135-819e-4405-b909-cd989cf000a6
	I0501 02:33:30.205577    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:30.206436    3644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-869300","uid":"72db872b-6e69-4996-8f34-60721369e151","resourceVersion":"542","creationTimestamp":"2024-05-01T02:30:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-869300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"functional-869300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T02_30_44_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-01T02:30:40Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0501 02:33:30.206911    3644 pod_ready.go:102] pod "coredns-7db6d8ff4d-grgws" in "kube-system" namespace has status "Ready":"False"
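From here the log settles into a poll: GET the coredns pod, GET its node, report Ready:False, sleep roughly half a second, and retry until the 4m0s budget expires. A minimal sketch of that loop under the same assumptions as the earlier snippets (waitPodReady is a hypothetical helper; this is not minikube's actual pod_ready.go):

    // waitPodReady polls the pod's Ready condition at roughly the 500ms
    // cadence visible in the log, up to the given timeout.
    func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return nil // pod reported Ready
                    }
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s/%s to be Ready", ns, name)
    }

Called here it would be waitPodReady(cs, "kube-system", "coredns-7db6d8ff4d-grgws", 4*time.Minute), matching the budget stated at pod_ready.go:78 above.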
	I0501 02:33:30.695146    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-grgws
	I0501 02:33:30.695146    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:30.695146    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:30.695146    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:30.699725    3644 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:33:30.699725    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:30.699725    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:30.699725    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:30.699725    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:30.699725    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:30 GMT
	I0501 02:33:30.700253    3644 round_trippers.go:580]     Audit-Id: 0cc8a9a3-0aba-4c2e-8d5f-c8ddc44b372f
	I0501 02:33:30.700253    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:30.700580    3644 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-grgws","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2cb6073b-581f-4a8c-ad4d-03ab7adb3bd7","resourceVersion":"603","creationTimestamp":"2024-05-01T02:30:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"cb5d9ecf-889a-47fa-9682-5b3b356aab5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T02:30:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cb5d9ecf-889a-47fa-9682-5b3b356aab5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0501 02:33:30.701371    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/nodes/functional-869300
	I0501 02:33:30.701371    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:30.701371    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:30.701430    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:30.703754    3644 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:33:30.703754    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:30.703754    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:30.704451    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:30 GMT
	I0501 02:33:30.704451    3644 round_trippers.go:580]     Audit-Id: 85a55127-6b37-4149-ad9d-175163371941
	I0501 02:33:30.704451    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:30.704451    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:30.704451    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:30.705008    3644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-869300","uid":"72db872b-6e69-4996-8f34-60721369e151","resourceVersion":"542","creationTimestamp":"2024-05-01T02:30:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-869300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"functional-869300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T02_30_44_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-01T02:30:40Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0501 02:33:31.197143    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-grgws
	I0501 02:33:31.197143    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:31.197205    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:31.197205    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:31.201694    3644 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:33:31.202156    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:31.202156    3644 round_trippers.go:580]     Audit-Id: 733c6d7b-6f6c-4391-84d0-c08bedda9f1c
	I0501 02:33:31.202156    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:31.202156    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:31.202156    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:31.202156    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:31.202156    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:31 GMT
	I0501 02:33:31.203176    3644 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-grgws","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2cb6073b-581f-4a8c-ad4d-03ab7adb3bd7","resourceVersion":"603","creationTimestamp":"2024-05-01T02:30:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"cb5d9ecf-889a-47fa-9682-5b3b356aab5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T02:30:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cb5d9ecf-889a-47fa-9682-5b3b356aab5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0501 02:33:31.204069    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/nodes/functional-869300
	I0501 02:33:31.204069    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:31.204069    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:31.204127    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:31.206699    3644 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:33:31.206699    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:31.207102    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:31.207102    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:31.207102    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:31 GMT
	I0501 02:33:31.207102    3644 round_trippers.go:580]     Audit-Id: c1c6cba3-2db5-4534-a36b-2abc2356cf93
	I0501 02:33:31.207102    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:31.207102    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:31.207632    3644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-869300","uid":"72db872b-6e69-4996-8f34-60721369e151","resourceVersion":"542","creationTimestamp":"2024-05-01T02:30:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-869300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"functional-869300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T02_30_44_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-01T02:30:40Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0501 02:33:31.683840    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-grgws
	I0501 02:33:31.683840    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:31.683932    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:31.683932    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:31.687731    3644 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:31.687731    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:31.687731    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:31.687731    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:31.687731    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:31.687731    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:31 GMT
	I0501 02:33:31.687731    3644 round_trippers.go:580]     Audit-Id: e6c2f89e-8ba5-4c7d-89d4-f3dd12de88c8
	I0501 02:33:31.687731    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:31.688503    3644 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-grgws","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2cb6073b-581f-4a8c-ad4d-03ab7adb3bd7","resourceVersion":"603","creationTimestamp":"2024-05-01T02:30:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"cb5d9ecf-889a-47fa-9682-5b3b356aab5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T02:30:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cb5d9ecf-889a-47fa-9682-5b3b356aab5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0501 02:33:31.689494    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/nodes/functional-869300
	I0501 02:33:31.689494    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:31.689494    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:31.689494    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:31.695155    3644 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:33:31.695155    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:31.695155    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:31.695155    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:31.695155    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:31.695155    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:31 GMT
	I0501 02:33:31.695155    3644 round_trippers.go:580]     Audit-Id: fc74a19c-97db-403a-a6a1-6a736f6a1bc0
	I0501 02:33:31.695155    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:31.695921    3644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-869300","uid":"72db872b-6e69-4996-8f34-60721369e151","resourceVersion":"542","creationTimestamp":"2024-05-01T02:30:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-869300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"functional-869300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T02_30_44_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-01T02:30:40Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0501 02:33:32.185125    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-grgws
	I0501 02:33:32.185224    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:32.185224    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:32.185224    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:32.188745    3644 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:32.189774    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:32.189774    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:32.189774    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:32 GMT
	I0501 02:33:32.189774    3644 round_trippers.go:580]     Audit-Id: 0321a170-7213-40cd-953e-cb12afe0703a
	I0501 02:33:32.189774    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:32.189774    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:32.189774    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:32.189941    3644 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-grgws","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2cb6073b-581f-4a8c-ad4d-03ab7adb3bd7","resourceVersion":"603","creationTimestamp":"2024-05-01T02:30:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"cb5d9ecf-889a-47fa-9682-5b3b356aab5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T02:30:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cb5d9ecf-889a-47fa-9682-5b3b356aab5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0501 02:33:32.190887    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/nodes/functional-869300
	I0501 02:33:32.190979    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:32.190979    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:32.190979    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:32.194358    3644 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:32.194358    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:32.194358    3644 round_trippers.go:580]     Audit-Id: 4075fea2-b6d2-46ec-abcf-4b354a556231
	I0501 02:33:32.194358    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:32.194358    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:32.194358    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:32.194358    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:32.194358    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:32 GMT
	I0501 02:33:32.195082    3644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-869300","uid":"72db872b-6e69-4996-8f34-60721369e151","resourceVersion":"542","creationTimestamp":"2024-05-01T02:30:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-869300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"functional-869300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T02_30_44_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-01T02:30:40Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0501 02:33:32.684242    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-grgws
	I0501 02:33:32.684242    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:32.684242    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:32.684242    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:32.687647    3644 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:32.688125    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:32.688125    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:32.688125    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:32 GMT
	I0501 02:33:32.688125    3644 round_trippers.go:580]     Audit-Id: bc49c5f9-fb7d-4592-a6c0-c98128cf2147
	I0501 02:33:32.688125    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:32.688125    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:32.688200    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:32.688499    3644 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-grgws","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2cb6073b-581f-4a8c-ad4d-03ab7adb3bd7","resourceVersion":"603","creationTimestamp":"2024-05-01T02:30:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"cb5d9ecf-889a-47fa-9682-5b3b356aab5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T02:30:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cb5d9ecf-889a-47fa-9682-5b3b356aab5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0501 02:33:32.689281    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/nodes/functional-869300
	I0501 02:33:32.689340    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:32.689340    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:32.689340    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:32.692777    3644 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:32.693809    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:32.693809    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:32 GMT
	I0501 02:33:32.693809    3644 round_trippers.go:580]     Audit-Id: 50e2638b-0bc8-4a90-969b-3695831adb08
	I0501 02:33:32.693809    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:32.693809    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:32.693809    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:32.693906    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:32.694328    3644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-869300","uid":"72db872b-6e69-4996-8f34-60721369e151","resourceVersion":"542","creationTimestamp":"2024-05-01T02:30:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-869300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"functional-869300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T02_30_44_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-01T02:30:40Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0501 02:33:32.694896    3644 pod_ready.go:102] pod "coredns-7db6d8ff4d-grgws" in "kube-system" namespace has status "Ready":"False"
	I0501 02:33:33.183208    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-grgws
	I0501 02:33:33.183208    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:33.183208    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:33.183208    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:33.187758    3644 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:33:33.188580    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:33.188790    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:33.188852    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:33.188852    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:33 GMT
	I0501 02:33:33.188852    3644 round_trippers.go:580]     Audit-Id: d862441f-c1e7-4f46-80be-b4e80320be7e
	I0501 02:33:33.188852    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:33.188852    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:33.188852    3644 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-grgws","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2cb6073b-581f-4a8c-ad4d-03ab7adb3bd7","resourceVersion":"603","creationTimestamp":"2024-05-01T02:30:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"cb5d9ecf-889a-47fa-9682-5b3b356aab5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T02:30:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cb5d9ecf-889a-47fa-9682-5b3b356aab5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0501 02:33:33.189552    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/nodes/functional-869300
	I0501 02:33:33.190094    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:33.190094    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:33.190094    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:33.192888    3644 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:33:33.193293    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:33.193293    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:33.193293    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:33.193293    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:33.193293    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:33 GMT
	I0501 02:33:33.193293    3644 round_trippers.go:580]     Audit-Id: 95380927-b61f-4caa-8f9a-ca02aa1b4a71
	I0501 02:33:33.193293    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:33.193699    3644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-869300","uid":"72db872b-6e69-4996-8f34-60721369e151","resourceVersion":"542","creationTimestamp":"2024-05-01T02:30:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-869300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"functional-869300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T02_30_44_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-01T02:30:40Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0501 02:33:33.682764    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-grgws
	I0501 02:33:33.682959    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:33.682959    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:33.682959    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:33.685818    3644 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:33:33.686762    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:33.686762    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:33.686762    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:33.686762    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:33.686762    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:33.686762    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:33 GMT
	I0501 02:33:33.686762    3644 round_trippers.go:580]     Audit-Id: af112147-c310-4e4e-b1bf-e310d865a459
	I0501 02:33:33.687013    3644 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-grgws","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2cb6073b-581f-4a8c-ad4d-03ab7adb3bd7","resourceVersion":"603","creationTimestamp":"2024-05-01T02:30:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"cb5d9ecf-889a-47fa-9682-5b3b356aab5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T02:30:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cb5d9ecf-889a-47fa-9682-5b3b356aab5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0501 02:33:33.688014    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/nodes/functional-869300
	I0501 02:33:33.688083    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:33.688083    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:33.688083    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:33.694281    3644 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:33:33.694281    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:33.694403    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:33.694403    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:33.694403    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:33.694403    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:33 GMT
	I0501 02:33:33.694403    3644 round_trippers.go:580]     Audit-Id: 6f980659-d0a6-495f-9380-a1be4d0bb575
	I0501 02:33:33.694403    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:33.694403    3644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-869300","uid":"72db872b-6e69-4996-8f34-60721369e151","resourceVersion":"542","creationTimestamp":"2024-05-01T02:30:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-869300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"functional-869300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T02_30_44_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-01T02:30:40Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0501 02:33:34.182035    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-grgws
	I0501 02:33:34.182113    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:34.182113    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:34.182113    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:34.186495    3644 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:33:34.186495    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:34.186495    3644 round_trippers.go:580]     Audit-Id: 30b4a78c-ba05-4b4e-8bf5-7ed8d6c8522c
	I0501 02:33:34.186495    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:34.186495    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:34.186495    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:34.186495    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:34.186495    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:34 GMT
	I0501 02:33:34.187846    3644 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-grgws","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2cb6073b-581f-4a8c-ad4d-03ab7adb3bd7","resourceVersion":"603","creationTimestamp":"2024-05-01T02:30:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"cb5d9ecf-889a-47fa-9682-5b3b356aab5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T02:30:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cb5d9ecf-889a-47fa-9682-5b3b356aab5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0501 02:33:34.188763    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/nodes/functional-869300
	I0501 02:33:34.188819    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:34.188819    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:34.188876    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:34.191637    3644 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:33:34.191637    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:34.191637    3644 round_trippers.go:580]     Audit-Id: e8624cbb-026d-47bd-a3cf-d5d06f94fab0
	I0501 02:33:34.191637    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:34.191637    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:34.191637    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:34.191637    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:34.191637    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:34 GMT
	I0501 02:33:34.191895    3644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-869300","uid":"72db872b-6e69-4996-8f34-60721369e151","resourceVersion":"542","creationTimestamp":"2024-05-01T02:30:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-869300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"functional-869300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T02_30_44_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-01T02:30:40Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0501 02:33:34.681493    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-grgws
	I0501 02:33:34.681821    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:34.681821    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:34.681821    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:34.686210    3644 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:33:34.686210    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:34.686210    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:34.686210    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:34.686210    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:34.686210    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:34.686210    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:34 GMT
	I0501 02:33:34.686210    3644 round_trippers.go:580]     Audit-Id: 22971157-ca73-4082-9c92-e2563643a776
	I0501 02:33:34.687494    3644 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-grgws","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2cb6073b-581f-4a8c-ad4d-03ab7adb3bd7","resourceVersion":"603","creationTimestamp":"2024-05-01T02:30:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"cb5d9ecf-889a-47fa-9682-5b3b356aab5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T02:30:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cb5d9ecf-889a-47fa-9682-5b3b356aab5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0501 02:33:34.688434    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/nodes/functional-869300
	I0501 02:33:34.688493    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:34.688493    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:34.688493    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:34.691427    3644 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:33:34.691806    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:34.691806    3644 round_trippers.go:580]     Audit-Id: e6d75ef9-8380-42bd-999f-6ca1ea7f6ddd
	I0501 02:33:34.691806    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:34.691806    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:34.691806    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:34.691806    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:34.691806    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:34 GMT
	I0501 02:33:34.692318    3644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-869300","uid":"72db872b-6e69-4996-8f34-60721369e151","resourceVersion":"542","creationTimestamp":"2024-05-01T02:30:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-869300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"functional-869300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T02_30_44_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-01T02:30:40Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0501 02:33:35.192807    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-grgws
	I0501 02:33:35.192807    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:35.192807    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:35.192807    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:35.200828    3644 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0501 02:33:35.200828    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:35.200828    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:35.200828    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:35.200915    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:35.200945    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:35.200945    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:35 GMT
	I0501 02:33:35.200987    3644 round_trippers.go:580]     Audit-Id: 986212ac-95d3-4950-ad8f-fc7c702f631d
	I0501 02:33:35.200987    3644 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-grgws","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2cb6073b-581f-4a8c-ad4d-03ab7adb3bd7","resourceVersion":"603","creationTimestamp":"2024-05-01T02:30:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"cb5d9ecf-889a-47fa-9682-5b3b356aab5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T02:30:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cb5d9ecf-889a-47fa-9682-5b3b356aab5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0501 02:33:35.202149    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/nodes/functional-869300
	I0501 02:33:35.202201    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:35.202201    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:35.202201    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:35.205249    3644 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:35.205249    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:35.205373    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:35.205373    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:35.205373    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:35.205373    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:35 GMT
	I0501 02:33:35.205373    3644 round_trippers.go:580]     Audit-Id: 10b57018-4464-4e67-9538-9f6946085f48
	I0501 02:33:35.205373    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:35.205510    3644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-869300","uid":"72db872b-6e69-4996-8f34-60721369e151","resourceVersion":"542","creationTimestamp":"2024-05-01T02:30:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-869300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"functional-869300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T02_30_44_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-01T02:30:40Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0501 02:33:35.206044    3644 pod_ready.go:102] pod "coredns-7db6d8ff4d-grgws" in "kube-system" namespace has status "Ready":"False"
	I0501 02:33:35.694425    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-grgws
	I0501 02:33:35.694489    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:35.694489    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:35.694489    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:35.697385    3644 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:33:35.697385    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:35.698137    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:35.698137    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:35 GMT
	I0501 02:33:35.698137    3644 round_trippers.go:580]     Audit-Id: 1e5280f8-b617-4136-bab6-3610827f9e7f
	I0501 02:33:35.698137    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:35.698137    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:35.698137    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:35.698527    3644 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-grgws","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2cb6073b-581f-4a8c-ad4d-03ab7adb3bd7","resourceVersion":"603","creationTimestamp":"2024-05-01T02:30:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"cb5d9ecf-889a-47fa-9682-5b3b356aab5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T02:30:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cb5d9ecf-889a-47fa-9682-5b3b356aab5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0501 02:33:35.699547    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/nodes/functional-869300
	I0501 02:33:35.699625    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:35.699625    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:35.699720    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:35.701385    3644 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0501 02:33:35.701385    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:35.701385    3644 round_trippers.go:580]     Audit-Id: 690950d6-eb10-4424-a344-ef413a672ec7
	I0501 02:33:35.701385    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:35.701385    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:35.701385    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:35.701385    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:35.701385    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:35 GMT
	I0501 02:33:35.703294    3644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-869300","uid":"72db872b-6e69-4996-8f34-60721369e151","resourceVersion":"542","creationTimestamp":"2024-05-01T02:30:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-869300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"functional-869300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T02_30_44_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-01T02:30:40Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0501 02:33:36.191884    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-grgws
	I0501 02:33:36.191948    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:36.191948    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:36.191948    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:36.198536    3644 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:33:36.198536    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:36.198536    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:36.198536    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:36.198536    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:36.198536    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:36 GMT
	I0501 02:33:36.198536    3644 round_trippers.go:580]     Audit-Id: ffcfdb05-b7ee-46c5-9949-d44cbcbf2647
	I0501 02:33:36.198536    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:36.198536    3644 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-grgws","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2cb6073b-581f-4a8c-ad4d-03ab7adb3bd7","resourceVersion":"605","creationTimestamp":"2024-05-01T02:30:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"cb5d9ecf-889a-47fa-9682-5b3b356aab5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T02:30:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cb5d9ecf-889a-47fa-9682-5b3b356aab5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6452 chars]
	I0501 02:33:36.199236    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/nodes/functional-869300
	I0501 02:33:36.199236    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:36.199236    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:36.200011    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:36.204107    3644 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:33:36.204188    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:36.204188    3644 round_trippers.go:580]     Audit-Id: ff496182-649b-42b0-a28d-cee14656756d
	I0501 02:33:36.204188    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:36.204188    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:36.204188    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:36.204188    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:36.204188    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:36 GMT
	I0501 02:33:36.204621    3644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-869300","uid":"72db872b-6e69-4996-8f34-60721369e151","resourceVersion":"542","creationTimestamp":"2024-05-01T02:30:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-869300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"functional-869300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T02_30_44_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-01T02:30:40Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0501 02:33:36.204949    3644 pod_ready.go:92] pod "coredns-7db6d8ff4d-grgws" in "kube-system" namespace has status "Ready":"True"
	I0501 02:33:36.204949    3644 pod_ready.go:81] duration metric: took 8.0239663s for pod "coredns-7db6d8ff4d-grgws" in "kube-system" namespace to be "Ready" ...
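
(The wait that just completed — pod_ready.go re-fetching the coredns pod roughly every 500ms until its Ready condition turned True, taking 8.02s in total — can be sketched with client-go as below. The kubeconfig path is a placeholder and the loop is an assumption drawn from the log's timestamps, not minikube's actual pod_ready.go source:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady mirrors the check behind the pod_ready.go verdict lines:
// the pod counts as "Ready" once its PodReady condition reports True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder path; minikube manages its own kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// 4m0s matches the per-pod timeout stated in the log.
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()

	// Poll roughly every 500ms, as the log timestamps suggest.
	for {
		pod, err := client.CoreV1().Pods("kube-system").
			Get(ctx, "coredns-7db6d8ff4d-grgws", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pod")
			return
		case <-time.After(500 * time.Millisecond):
		}
	}
}

)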
	I0501 02:33:36.204949    3644 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-869300" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:36.205235    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/namespaces/kube-system/pods/etcd-functional-869300
	I0501 02:33:36.205235    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:36.205235    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:36.205235    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:36.208857    3644 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:36.208857    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:36.208857    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:36.208857    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:36 GMT
	I0501 02:33:36.208857    3644 round_trippers.go:580]     Audit-Id: 3ec2b320-c959-4c45-b6bb-0743b3b5101c
	I0501 02:33:36.208857    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:36.209800    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:36.209800    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:36.209800    3644 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-869300","namespace":"kube-system","uid":"92c3081c-f2d2-456b-b008-17e3a3fa0bca","resourceVersion":"554","creationTimestamp":"2024-05-01T02:30:43Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.218.182:2379","kubernetes.io/config.hash":"5e8bc183cc5ce96979868056f3c9b727","kubernetes.io/config.mirror":"5e8bc183cc5ce96979868056f3c9b727","kubernetes.io/config.seen":"2024-05-01T02:30:43.476925196Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-869300","uid":"72db872b-6e69-4996-8f34-60721369e151","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T02:30:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6856 chars]
	I0501 02:33:36.210402    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/nodes/functional-869300
	I0501 02:33:36.210402    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:36.210402    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:36.210402    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:36.213088    3644 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:33:36.214077    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:36.214077    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:36 GMT
	I0501 02:33:36.214120    3644 round_trippers.go:580]     Audit-Id: 16618e85-0698-40e5-be38-2ca50deda23c
	I0501 02:33:36.214120    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:36.214120    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:36.214120    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:36.214120    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:36.214485    3644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-869300","uid":"72db872b-6e69-4996-8f34-60721369e151","resourceVersion":"542","creationTimestamp":"2024-05-01T02:30:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-869300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"functional-869300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T02_30_44_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-01T02:30:40Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0501 02:33:36.712026    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/namespaces/kube-system/pods/etcd-functional-869300
	I0501 02:33:36.712135    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:36.712135    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:36.712135    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:36.715909    3644 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:36.715909    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:36.716832    3644 round_trippers.go:580]     Audit-Id: f57adcdd-07cf-4a28-934b-f2f4bd16be09
	I0501 02:33:36.716832    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:36.716832    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:36.716832    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:36.716832    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:36.716832    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:36 GMT
	I0501 02:33:36.717342    3644 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-869300","namespace":"kube-system","uid":"92c3081c-f2d2-456b-b008-17e3a3fa0bca","resourceVersion":"611","creationTimestamp":"2024-05-01T02:30:43Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.218.182:2379","kubernetes.io/config.hash":"5e8bc183cc5ce96979868056f3c9b727","kubernetes.io/config.mirror":"5e8bc183cc5ce96979868056f3c9b727","kubernetes.io/config.seen":"2024-05-01T02:30:43.476925196Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-869300","uid":"72db872b-6e69-4996-8f34-60721369e151","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T02:30:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6632 chars]
	I0501 02:33:36.718090    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/nodes/functional-869300
	I0501 02:33:36.718090    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:36.718090    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:36.718090    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:36.732419    3644 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0501 02:33:36.732530    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:36.732530    3644 round_trippers.go:580]     Audit-Id: 7e0935a6-a394-4414-8752-e6b627a48ef5
	I0501 02:33:36.732530    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:36.732530    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:36.732530    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:36.732530    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:36.732530    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:36 GMT
	I0501 02:33:36.732530    3644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-869300","uid":"72db872b-6e69-4996-8f34-60721369e151","resourceVersion":"542","creationTimestamp":"2024-05-01T02:30:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-869300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"functional-869300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T02_30_44_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-01T02:30:40Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0501 02:33:36.733151    3644 pod_ready.go:92] pod "etcd-functional-869300" in "kube-system" namespace has status "Ready":"True"
	I0501 02:33:36.733151    3644 pod_ready.go:81] duration metric: took 528.1976ms for pod "etcd-functional-869300" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:36.733151    3644 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-869300" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:36.733151    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-869300
	I0501 02:33:36.733151    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:36.733151    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:36.733151    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:36.737016    3644 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:36.737016    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:36.737016    3644 round_trippers.go:580]     Audit-Id: 6b190a2d-5e40-437b-80ed-019c0e7727ff
	I0501 02:33:36.737016    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:36.737016    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:36.737236    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:36.737236    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:36.737236    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:36 GMT
	I0501 02:33:36.737444    3644 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-869300","namespace":"kube-system","uid":"26b992bd-47b9-458e-a683-a136e4e028eb","resourceVersion":"609","creationTimestamp":"2024-05-01T02:30:43Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.28.218.182:8441","kubernetes.io/config.hash":"27af19167b285ef6181e665baa905d37","kubernetes.io/config.mirror":"27af19167b285ef6181e665baa905d37","kubernetes.io/config.seen":"2024-05-01T02:30:43.476931096Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-869300","uid":"72db872b-6e69-4996-8f34-60721369e151","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T02:30:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.
kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernet [truncated 8164 chars]
	I0501 02:33:36.737587    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/nodes/functional-869300
	I0501 02:33:36.737587    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:36.737587    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:36.737587    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:36.741332    3644 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:36.741570    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:36.741570    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:36.741570    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:36.741570    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:36.741570    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:36 GMT
	I0501 02:33:36.741570    3644 round_trippers.go:580]     Audit-Id: 8ffce4dd-d474-48c1-b689-37559b1c6d3e
	I0501 02:33:36.741570    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:36.741570    3644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-869300","uid":"72db872b-6e69-4996-8f34-60721369e151","resourceVersion":"542","creationTimestamp":"2024-05-01T02:30:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-869300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"functional-869300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T02_30_44_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-01T02:30:40Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0501 02:33:36.742295    3644 pod_ready.go:92] pod "kube-apiserver-functional-869300" in "kube-system" namespace has status "Ready":"True"
	I0501 02:33:36.742295    3644 pod_ready.go:81] duration metric: took 9.1446ms for pod "kube-apiserver-functional-869300" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:36.742361    3644 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-869300" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:36.742423    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-869300
	I0501 02:33:36.742423    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:36.742423    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:36.742423    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:36.745178    3644 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:33:36.745331    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:36.745331    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:36.745331    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:36 GMT
	I0501 02:33:36.745331    3644 round_trippers.go:580]     Audit-Id: 085939c6-7048-4ea9-9c6e-c5ac626115f8
	I0501 02:33:36.745331    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:36.745331    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:36.745331    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:36.745690    3644 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-869300","namespace":"kube-system","uid":"a58b04e9-38b0-4af3-821a-2a04476a138a","resourceVersion":"612","creationTimestamp":"2024-05-01T02:30:43Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c26912a54b88fc52ff618e8e6dde640e","kubernetes.io/config.mirror":"c26912a54b88fc52ff618e8e6dde640e","kubernetes.io/config.seen":"2024-05-01T02:30:43.506577613Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-869300","uid":"72db872b-6e69-4996-8f34-60721369e151","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T02:30:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7477 chars]
	I0501 02:33:36.746309    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/nodes/functional-869300
	I0501 02:33:36.746309    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:36.746309    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:36.746309    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:36.749990    3644 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:36.750032    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:36.750032    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:36.750032    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:36.750032    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:36.750032    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:36.750032    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:36 GMT
	I0501 02:33:36.750032    3644 round_trippers.go:580]     Audit-Id: 66dd12f4-2d09-44c4-8826-e2951fb49b96
	I0501 02:33:36.750032    3644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-869300","uid":"72db872b-6e69-4996-8f34-60721369e151","resourceVersion":"542","creationTimestamp":"2024-05-01T02:30:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-869300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"functional-869300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T02_30_44_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-01T02:30:40Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0501 02:33:36.750615    3644 pod_ready.go:92] pod "kube-controller-manager-functional-869300" in "kube-system" namespace has status "Ready":"True"
	I0501 02:33:36.750615    3644 pod_ready.go:81] duration metric: took 8.2542ms for pod "kube-controller-manager-functional-869300" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:36.750615    3644 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nm4lg" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:36.750828    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/namespaces/kube-system/pods/kube-proxy-nm4lg
	I0501 02:33:36.750884    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:36.750884    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:36.750884    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:36.757486    3644 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:33:36.757486    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:36.757486    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:36.757486    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:36.757486    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:36 GMT
	I0501 02:33:36.757486    3644 round_trippers.go:580]     Audit-Id: 753de0dd-82fc-4c28-a195-175447c92d12
	I0501 02:33:36.757486    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:36.757486    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:36.758045    3644 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nm4lg","generateName":"kube-proxy-","namespace":"kube-system","uid":"0488ff0b-d57b-4955-9562-06da35c1d8c2","resourceVersion":"604","creationTimestamp":"2024-05-01T02:30:57Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d3bacff7-4263-4acf-804e-f9c2c107bcda","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T02:30:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d3bacff7-4263-4acf-804e-f9c2c107bcda\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6292 chars]
	I0501 02:33:36.758242    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/nodes/functional-869300
	I0501 02:33:36.758242    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:36.758242    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:36.758242    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:36.761510    3644 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:36.761510    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:36.761510    3644 round_trippers.go:580]     Audit-Id: eb64413c-1ac1-4add-8960-09aa1c3813f2
	I0501 02:33:36.761510    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:36.761510    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:36.761510    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:36.761510    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:36.761510    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:36 GMT
	I0501 02:33:36.762051    3644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-869300","uid":"72db872b-6e69-4996-8f34-60721369e151","resourceVersion":"542","creationTimestamp":"2024-05-01T02:30:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-869300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"functional-869300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T02_30_44_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-01T02:30:40Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0501 02:33:36.762569    3644 pod_ready.go:92] pod "kube-proxy-nm4lg" in "kube-system" namespace has status "Ready":"True"
	I0501 02:33:36.762654    3644 pod_ready.go:81] duration metric: took 12.0387ms for pod "kube-proxy-nm4lg" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:36.762654    3644 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-869300" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:36.805987    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-869300
	I0501 02:33:36.805987    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:36.805987    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:36.805987    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:36.809556    3644 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:33:36.809600    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:36.809600    3644 round_trippers.go:580]     Audit-Id: cd56b062-5f56-49cd-96c0-a2b9992ac701
	I0501 02:33:36.809600    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:36.809600    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:36.809600    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:36.809600    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:36.809600    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:36 GMT
	I0501 02:33:36.809747    3644 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-869300","namespace":"kube-system","uid":"f14921a5-1739-4cf2-a4ef-e06560da308a","resourceVersion":"559","creationTimestamp":"2024-05-01T02:30:43Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7a9496a7ae40ea1ca5f6a9272443601b","kubernetes.io/config.mirror":"7a9496a7ae40ea1ca5f6a9272443601b","kubernetes.io/config.seen":"2024-05-01T02:30:35.227585854Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-869300","uid":"72db872b-6e69-4996-8f34-60721369e151","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T02:30:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5703 chars]
	I0501 02:33:36.993703    3644 request.go:629] Waited for 183.2689ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.218.182:8441/api/v1/nodes/functional-869300
	I0501 02:33:36.993703    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/nodes/functional-869300
	I0501 02:33:36.994006    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:36.994079    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:36.994165    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:37.000374    3644 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:33:37.000374    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:37.000374    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:37.000374    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:37.000374    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:37 GMT
	I0501 02:33:37.000374    3644 round_trippers.go:580]     Audit-Id: 5d31c585-3b7c-475c-9d72-161481f99d95
	I0501 02:33:37.000374    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:37.000496    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:37.000929    3644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-869300","uid":"72db872b-6e69-4996-8f34-60721369e151","resourceVersion":"542","creationTimestamp":"2024-05-01T02:30:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-869300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"functional-869300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T02_30_44_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-01T02:30:40Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0501 02:33:37.274073    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-869300
	I0501 02:33:37.274073    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:37.274073    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:37.274073    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:37.282038    3644 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:33:37.282038    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:37.282038    3644 round_trippers.go:580]     Audit-Id: 95b197d7-d07c-4bb5-b545-41d7271d7672
	I0501 02:33:37.282038    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:37.282038    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:37.282038    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:37.282038    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:37.282980    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:37 GMT
	I0501 02:33:37.282980    3644 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-869300","namespace":"kube-system","uid":"f14921a5-1739-4cf2-a4ef-e06560da308a","resourceVersion":"559","creationTimestamp":"2024-05-01T02:30:43Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7a9496a7ae40ea1ca5f6a9272443601b","kubernetes.io/config.mirror":"7a9496a7ae40ea1ca5f6a9272443601b","kubernetes.io/config.seen":"2024-05-01T02:30:35.227585854Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-869300","uid":"72db872b-6e69-4996-8f34-60721369e151","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T02:30:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5703 chars]
	I0501 02:33:37.400376    3644 request.go:629] Waited for 117.2274ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.218.182:8441/api/v1/nodes/functional-869300
	I0501 02:33:37.400626    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/nodes/functional-869300
	I0501 02:33:37.400626    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:37.400626    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:37.400626    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:37.404749    3644 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:37.404749    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:37.404749    3644 round_trippers.go:580]     Audit-Id: 2a15bf0b-08df-497a-bf35-9fd5016ab972
	I0501 02:33:37.404749    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:37.404749    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:37.404749    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:37.404749    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:37.404749    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:37 GMT
	I0501 02:33:37.405139    3644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-869300","uid":"72db872b-6e69-4996-8f34-60721369e151","resourceVersion":"542","creationTimestamp":"2024-05-01T02:30:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-869300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"functional-869300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T02_30_44_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-01T02:30:40Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0501 02:33:37.771663    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-869300
	I0501 02:33:37.771899    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:37.771899    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:37.771899    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:37.778973    3644 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:33:37.778973    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:37.778973    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:37.778973    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:37.778973    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:37 GMT
	I0501 02:33:37.778973    3644 round_trippers.go:580]     Audit-Id: 8fe496e1-1d37-41ea-ac71-a40f23be6e42
	I0501 02:33:37.778973    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:37.778973    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:37.778973    3644 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-869300","namespace":"kube-system","uid":"f14921a5-1739-4cf2-a4ef-e06560da308a","resourceVersion":"614","creationTimestamp":"2024-05-01T02:30:43Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7a9496a7ae40ea1ca5f6a9272443601b","kubernetes.io/config.mirror":"7a9496a7ae40ea1ca5f6a9272443601b","kubernetes.io/config.seen":"2024-05-01T02:30:35.227585854Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-869300","uid":"72db872b-6e69-4996-8f34-60721369e151","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T02:30:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5459 chars]
	I0501 02:33:37.802613    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/nodes/functional-869300
	I0501 02:33:37.802613    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:37.802613    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:37.802746    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:37.806373    3644 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:37.806373    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:37.806373    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:37.806373    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:37.806373    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:37.806373    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:37.806373    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:37 GMT
	I0501 02:33:37.806373    3644 round_trippers.go:580]     Audit-Id: a3157b4b-4160-4e5d-979f-71158e63fc9c
	I0501 02:33:37.806373    3644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-869300","uid":"72db872b-6e69-4996-8f34-60721369e151","resourceVersion":"542","creationTimestamp":"2024-05-01T02:30:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-869300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"functional-869300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T02_30_44_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-01T02:30:40Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0501 02:33:37.807069    3644 pod_ready.go:92] pod "kube-scheduler-functional-869300" in "kube-system" namespace has status "Ready":"True"
	I0501 02:33:37.807127    3644 pod_ready.go:81] duration metric: took 1.0444648s for pod "kube-scheduler-functional-869300" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:37.807127    3644 pod_ready.go:38] duration metric: took 9.6351296s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 02:33:37.807127    3644 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0501 02:33:37.827154    3644 command_runner.go:130] > -16
	I0501 02:33:37.827154    3644 ops.go:34] apiserver oom_adj: -16
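Note: the oom_adj probe above is a one-liner run inside the VM; -16 tells the kernel's OOM killer to strongly avoid killing kube-apiserver. A local stand-in for the same probe (the real test drives it through minikube's ssh_runner, not shown here):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same shell one-liner as the logged ssh_runner.go:195 command.
        out, err := exec.Command("/bin/bash", "-c",
            "cat /proc/$(pgrep kube-apiserver)/oom_adj").Output()
        if err != nil {
            fmt.Println("probe failed:", err)
            return
        }
        fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(out))) // e.g. -16
    }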
	I0501 02:33:37.827154    3644 kubeadm.go:591] duration metric: took 21.3355649s to restartPrimaryControlPlane
	I0501 02:33:37.827154    3644 kubeadm.go:393] duration metric: took 21.4198409s to StartCluster
	I0501 02:33:37.827154    3644 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:33:37.827154    3644 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0501 02:33:37.829151    3644 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
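Note: the WriteFile line above takes a cross-process lock (note the logged Delay:500ms and Timeout:1m0s) before rewriting kubeconfig, so concurrent minikube invocations cannot corrupt the file. A minimal stand-in using an exclusive lock file rather than minikube's named mutex; acquire is a hypothetical helper mirroring the logged delay/timeout fields:

    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // acquire spins until it can create the lock file exclusively, retrying
    // every delay until timeout. Hypothetical helper, not minikube's code.
    func acquire(lockPath string, delay, timeout time.Duration) (release func(), err error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(lockPath, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(lockPath) }, nil
            }
            if time.Now().After(deadline) {
                return nil, errors.New("timed out waiting for kubeconfig lock")
            }
            time.Sleep(delay)
        }
    }

    func main() {
        release, err := acquire("kubeconfig.lock", 500*time.Millisecond, time.Minute)
        if err != nil {
            fmt.Println(err)
            return
        }
        defer release()
        fmt.Println("lock held; safe to rewrite kubeconfig here")
    }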
	I0501 02:33:37.830630    3644 start.go:234] Will wait 6m0s for node &{Name: IP:172.28.218.182 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0501 02:33:37.830630    3644 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0501 02:33:37.835041    3644 out.go:177] * Verifying Kubernetes components...
	I0501 02:33:37.830764    3644 addons.go:69] Setting storage-provisioner=true in profile "functional-869300"
	I0501 02:33:37.830764    3644 addons.go:69] Setting default-storageclass=true in profile "functional-869300"
	I0501 02:33:37.831054    3644 config.go:182] Loaded profile config "functional-869300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:33:37.838040    3644 addons.go:234] Setting addon storage-provisioner=true in "functional-869300"
	W0501 02:33:37.838040    3644 addons.go:243] addon storage-provisioner should already be in state true
	I0501 02:33:37.838040    3644 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-869300"
	I0501 02:33:37.838198    3644 host.go:66] Checking if "functional-869300" exists ...
	I0501 02:33:37.838344    3644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-869300 ).state
	I0501 02:33:37.839263    3644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-869300 ).state
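Note: both [executing ==>] lines shell out to PowerShell to read the VM state; the [stdout =====>] : Running lines further down are their replies. A sketch of the same probe, assuming powershell.exe resolves on PATH and using a hypothetical vmState helper:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // vmState runs the same Hyper-V query as the logged command and returns
    // the single-word state from stdout.
    func vmState(name string) (string, error) {
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive",
            fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", name)).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil // e.g. "Running"
    }

    func main() {
        state, err := vmState("functional-869300")
        fmt.Println(state, err)
    }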
	I0501 02:33:37.852767    3644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:33:38.195446    3644 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 02:33:38.228910    3644 node_ready.go:35] waiting up to 6m0s for node "functional-869300" to be "Ready" ...
	I0501 02:33:38.229164    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/nodes/functional-869300
	I0501 02:33:38.229164    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:38.229259    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:38.229259    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:38.236640    3644 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:33:38.236640    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:38.236640    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:38.236640    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:38.236640    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:38.236640    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:38 GMT
	I0501 02:33:38.236640    3644 round_trippers.go:580]     Audit-Id: c5caa7cc-c1c7-4dbb-b896-0d56b2cec0db
	I0501 02:33:38.236640    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:38.237489    3644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-869300","uid":"72db872b-6e69-4996-8f34-60721369e151","resourceVersion":"542","creationTimestamp":"2024-05-01T02:30:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-869300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"functional-869300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T02_30_44_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-01T02:30:40Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0501 02:33:38.238122    3644 node_ready.go:49] node "functional-869300" has status "Ready":"True"
	I0501 02:33:38.238122    3644 node_ready.go:38] duration metric: took 9.1517ms for node "functional-869300" to be "Ready" ...
	I0501 02:33:38.238122    3644 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 02:33:38.238122    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/namespaces/kube-system/pods
	I0501 02:33:38.238122    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:38.238122    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:38.238122    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:38.243453    3644 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:33:38.243453    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:38.243453    3644 round_trippers.go:580]     Audit-Id: be89a9a7-a376-41df-b345-5221e5b9e010
	I0501 02:33:38.243453    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:38.243453    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:38.243453    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:38.243453    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:38.243453    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:38 GMT
	I0501 02:33:38.246454    3644 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"615"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-grgws","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2cb6073b-581f-4a8c-ad4d-03ab7adb3bd7","resourceVersion":"605","creationTimestamp":"2024-05-01T02:30:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"cb5d9ecf-889a-47fa-9682-5b3b356aab5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T02:30:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cb5d9ecf-889a-47fa-9682-5b3b356aab5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 51143 chars]
	I0501 02:33:38.250291    3644 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-grgws" in "kube-system" namespace to be "Ready" ...
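Note: each pod_ready.go wait above re-fetches the pod and checks its PodReady condition until it reports True or the budget expires, which is why every wait produces a pod GET followed by a node GET. A minimal client-go sketch of that loop; the pod name and kubeconfig location are illustrative, taken from this run:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady mirrors the pod_ready.go:92 check: a pod counts as "Ready"
    // when its PodReady condition reports True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Poll for up to 6m like the log above, re-fetching the pod each round.
        err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-7db6d8ff4d-grgws", metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat transient errors as "not ready yet"
                }
                return isPodReady(pod), nil
            })
        fmt.Println("pod ready:", err == nil)
    }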
	I0501 02:33:38.402042    3644 request.go:629] Waited for 151.6009ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.218.182:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-grgws
	I0501 02:33:38.402280    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-grgws
	I0501 02:33:38.402386    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:38.402386    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:38.402386    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:38.407909    3644 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:33:38.408855    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:38.408855    3644 round_trippers.go:580]     Audit-Id: 654e2a43-f894-4a58-9bae-e6edbc39051c
	I0501 02:33:38.408855    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:38.408914    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:38.408914    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:38.408962    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:38.408962    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:38 GMT
	I0501 02:33:38.409264    3644 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-grgws","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2cb6073b-581f-4a8c-ad4d-03ab7adb3bd7","resourceVersion":"605","creationTimestamp":"2024-05-01T02:30:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"cb5d9ecf-889a-47fa-9682-5b3b356aab5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T02:30:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cb5d9ecf-889a-47fa-9682-5b3b356aab5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6452 chars]
	I0501 02:33:38.592047    3644 request.go:629] Waited for 181.6184ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.218.182:8441/api/v1/nodes/functional-869300
	I0501 02:33:38.592047    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/nodes/functional-869300
	I0501 02:33:38.592264    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:38.592264    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:38.592359    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:38.595892    3644 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:38.595892    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:38.595892    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:38.595892    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:38.595892    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:38 GMT
	I0501 02:33:38.595892    3644 round_trippers.go:580]     Audit-Id: b048d454-60ce-4f0e-826b-82dbf19b7e5c
	I0501 02:33:38.595892    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:38.595892    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:38.596903    3644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-869300","uid":"72db872b-6e69-4996-8f34-60721369e151","resourceVersion":"542","creationTimestamp":"2024-05-01T02:30:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-869300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"functional-869300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T02_30_44_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-01T02:30:40Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0501 02:33:38.597236    3644 pod_ready.go:92] pod "coredns-7db6d8ff4d-grgws" in "kube-system" namespace has status "Ready":"True"
	I0501 02:33:38.597236    3644 pod_ready.go:81] duration metric: took 346.9429ms for pod "coredns-7db6d8ff4d-grgws" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:38.597236    3644 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-869300" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:38.796982    3644 request.go:629] Waited for 199.5711ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.218.182:8441/api/v1/namespaces/kube-system/pods/etcd-functional-869300
	I0501 02:33:38.797130    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/namespaces/kube-system/pods/etcd-functional-869300
	I0501 02:33:38.797130    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:38.797130    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:38.797130    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:38.800736    3644 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:38.800736    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:38.801481    3644 round_trippers.go:580]     Audit-Id: ea7442dd-9a71-4242-a1e7-271276c7a318
	I0501 02:33:38.801481    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:38.801481    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:38.801481    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:38.801481    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:38.801481    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:38 GMT
	I0501 02:33:38.801873    3644 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-869300","namespace":"kube-system","uid":"92c3081c-f2d2-456b-b008-17e3a3fa0bca","resourceVersion":"611","creationTimestamp":"2024-05-01T02:30:43Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.218.182:2379","kubernetes.io/config.hash":"5e8bc183cc5ce96979868056f3c9b727","kubernetes.io/config.mirror":"5e8bc183cc5ce96979868056f3c9b727","kubernetes.io/config.seen":"2024-05-01T02:30:43.476925196Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-869300","uid":"72db872b-6e69-4996-8f34-60721369e151","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T02:30:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6632 chars]
	I0501 02:33:39.003577    3644 request.go:629] Waited for 200.434ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.218.182:8441/api/v1/nodes/functional-869300
	I0501 02:33:39.003666    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/nodes/functional-869300
	I0501 02:33:39.003666    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:39.003666    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:39.003666    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:39.007390    3644 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:39.008157    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:39.008157    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:39.008157    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:39 GMT
	I0501 02:33:39.008157    3644 round_trippers.go:580]     Audit-Id: b1dc93b9-3384-4df1-9cfc-122e6e842f8b
	I0501 02:33:39.008157    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:39.008157    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:39.008157    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:39.008336    3644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-869300","uid":"72db872b-6e69-4996-8f34-60721369e151","resourceVersion":"542","creationTimestamp":"2024-05-01T02:30:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-869300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"functional-869300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T02_30_44_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-01T02:30:40Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0501 02:33:39.008967    3644 pod_ready.go:92] pod "etcd-functional-869300" in "kube-system" namespace has status "Ready":"True"
	I0501 02:33:39.008967    3644 pod_ready.go:81] duration metric: took 411.7272ms for pod "etcd-functional-869300" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:39.008967    3644 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-869300" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:39.194902    3644 request.go:629] Waited for 185.9336ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.218.182:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-869300
	I0501 02:33:39.195165    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-869300
	I0501 02:33:39.195165    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:39.195165    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:39.195165    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:39.201298    3644 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:33:39.201298    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:39.201298    3644 round_trippers.go:580]     Audit-Id: b177d644-e541-4926-9395-56eaf30a60b1
	I0501 02:33:39.201298    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:39.201298    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:39.201298    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:39.201385    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:39.201385    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:39 GMT
	I0501 02:33:39.201385    3644 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-869300","namespace":"kube-system","uid":"26b992bd-47b9-458e-a683-a136e4e028eb","resourceVersion":"609","creationTimestamp":"2024-05-01T02:30:43Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.28.218.182:8441","kubernetes.io/config.hash":"27af19167b285ef6181e665baa905d37","kubernetes.io/config.mirror":"27af19167b285ef6181e665baa905d37","kubernetes.io/config.seen":"2024-05-01T02:30:43.476931096Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-869300","uid":"72db872b-6e69-4996-8f34-60721369e151","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T02:30:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.
kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernet [truncated 8164 chars]
	I0501 02:33:39.401869    3644 request.go:629] Waited for 199.7732ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.218.182:8441/api/v1/nodes/functional-869300
	I0501 02:33:39.402367    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/nodes/functional-869300
	I0501 02:33:39.402367    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:39.402367    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:39.402367    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:39.405805    3644 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:39.405805    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:39.406596    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:39.406596    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:39.406596    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:39.406596    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:39.406596    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:39 GMT
	I0501 02:33:39.406596    3644 round_trippers.go:580]     Audit-Id: 32e92e7b-50e4-4b23-9dc4-e0a62e8d6ca6
	I0501 02:33:39.406852    3644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-869300","uid":"72db872b-6e69-4996-8f34-60721369e151","resourceVersion":"542","creationTimestamp":"2024-05-01T02:30:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-869300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"functional-869300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T02_30_44_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-01T02:30:40Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0501 02:33:39.407351    3644 pod_ready.go:92] pod "kube-apiserver-functional-869300" in "kube-system" namespace has status "Ready":"True"
	I0501 02:33:39.407417    3644 pod_ready.go:81] duration metric: took 398.4476ms for pod "kube-apiserver-functional-869300" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:39.407417    3644 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-869300" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:39.591980    3644 request.go:629] Waited for 184.4251ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.218.182:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-869300
	I0501 02:33:39.592109    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-869300
	I0501 02:33:39.592335    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:39.592335    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:39.592335    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:39.596561    3644 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:33:39.596561    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:39.596561    3644 round_trippers.go:580]     Audit-Id: 3b9d6053-73bb-419b-b0ef-d5840a243cea
	I0501 02:33:39.596561    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:39.596561    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:39.596561    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:39.596561    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:39.596561    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:39 GMT
	I0501 02:33:39.597380    3644 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-869300","namespace":"kube-system","uid":"a58b04e9-38b0-4af3-821a-2a04476a138a","resourceVersion":"612","creationTimestamp":"2024-05-01T02:30:43Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c26912a54b88fc52ff618e8e6dde640e","kubernetes.io/config.mirror":"c26912a54b88fc52ff618e8e6dde640e","kubernetes.io/config.seen":"2024-05-01T02:30:43.506577613Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-869300","uid":"72db872b-6e69-4996-8f34-60721369e151","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T02:30:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7477 chars]
	I0501 02:33:39.799107    3644 request.go:629] Waited for 201.0185ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.218.182:8441/api/v1/nodes/functional-869300
	I0501 02:33:39.804079    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/nodes/functional-869300
	I0501 02:33:39.804079    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:39.804169    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:39.804169    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:39.808942    3644 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:33:39.809871    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:39.809871    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:39.809871    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:39.809871    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:39.809871    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:39.809871    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:39 GMT
	I0501 02:33:39.809871    3644 round_trippers.go:580]     Audit-Id: 057c7ccc-d7ae-47f6-a40b-fa7d48ec6a41
	I0501 02:33:39.810271    3644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-869300","uid":"72db872b-6e69-4996-8f34-60721369e151","resourceVersion":"542","creationTimestamp":"2024-05-01T02:30:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-869300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"functional-869300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T02_30_44_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-01T02:30:40Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0501 02:33:39.810746    3644 pod_ready.go:92] pod "kube-controller-manager-functional-869300" in "kube-system" namespace has status "Ready":"True"
	I0501 02:33:39.810829    3644 pod_ready.go:81] duration metric: took 403.4087ms for pod "kube-controller-manager-functional-869300" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:39.810829    3644 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nm4lg" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:40.004668    3644 request.go:629] Waited for 193.664ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.218.182:8441/api/v1/namespaces/kube-system/pods/kube-proxy-nm4lg
	I0501 02:33:40.004853    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/namespaces/kube-system/pods/kube-proxy-nm4lg
	I0501 02:33:40.004955    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:40.004955    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:40.004955    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:40.009555    3644 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:33:40.009839    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:40.009839    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:40.009839    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:40.009839    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:40.009839    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:40.009839    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:40 GMT
	I0501 02:33:40.009839    3644 round_trippers.go:580]     Audit-Id: 70736ccf-18e6-4579-9c07-836fe759aab9
	I0501 02:33:40.013049    3644 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nm4lg","generateName":"kube-proxy-","namespace":"kube-system","uid":"0488ff0b-d57b-4955-9562-06da35c1d8c2","resourceVersion":"604","creationTimestamp":"2024-05-01T02:30:57Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d3bacff7-4263-4acf-804e-f9c2c107bcda","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T02:30:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d3bacff7-4263-4acf-804e-f9c2c107bcda\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6292 chars]
	I0501 02:33:40.114042    3644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:33:40.114042    3644 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:33:40.117074    3644 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 02:33:40.114899    3644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:33:40.119745    3644 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:33:40.119745    3644 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 02:33:40.119745    3644 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
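Note: "scp memory -->" means the manifest travels from an in-memory buffer straight into the VM over SSH rather than from a file on the host. A rough equivalent with golang.org/x/crypto/ssh, piping the bytes into sudo tee; the address, credentials, and manifest contents below are placeholders, not values from this run:

    package main

    import (
        "bytes"
        "log"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Placeholder credentials; minikube authenticates with the profile's SSH key.
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.Password("example")},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a throwaway test VM
        }
        client, err := ssh.Dial("tcp", "172.28.218.182:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()

        manifest := []byte("# storage-provisioner.yaml contents here\n")
        sess.Stdin = bytes.NewReader(manifest)
        // Stream the in-memory bytes to the addon path inside the VM.
        if err := sess.Run("sudo tee /etc/kubernetes/addons/storage-provisioner.yaml >/dev/null"); err != nil {
            log.Fatal(err)
        }
    }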
	I0501 02:33:40.119745    3644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-869300 ).state
	I0501 02:33:40.120586    3644 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0501 02:33:40.121306    3644 kapi.go:59] client config for functional-869300: &rest.Config{Host:"https://172.28.218.182:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-869300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-869300\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil),
CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1b95ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
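Note: the kapi.go:59 dump above is the rest.Config minikube builds for this profile: the apiserver host plus mutual-TLS client certificate paths. Building an equivalent client by hand, reusing the logged host and cert paths (Windows paths written as Go raw strings):

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg := &rest.Config{
            Host: "https://172.28.218.182:8441",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: `C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\client.crt`,
                KeyFile:  `C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\client.key`,
                CAFile:   `C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt`,
            },
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Println("client built:", cs != nil)
    }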
	I0501 02:33:40.121614    3644 addons.go:234] Setting addon default-storageclass=true in "functional-869300"
	W0501 02:33:40.122187    3644 addons.go:243] addon default-storageclass should already be in state true
	I0501 02:33:40.122187    3644 host.go:66] Checking if "functional-869300" exists ...
	I0501 02:33:40.122348    3644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-869300 ).state
	I0501 02:33:40.192784    3644 request.go:629] Waited for 178.6947ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.218.182:8441/api/v1/nodes/functional-869300
	I0501 02:33:40.193014    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/nodes/functional-869300
	I0501 02:33:40.193014    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:40.193014    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:40.193014    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:40.197592    3644 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:33:40.198590    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:40.198590    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:40 GMT
	I0501 02:33:40.198590    3644 round_trippers.go:580]     Audit-Id: ad65b71a-9c9c-438c-adb1-2b1d9bbc163e
	I0501 02:33:40.198590    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:40.198590    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:40.198590    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:40.198590    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:40.200076    3644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-869300","uid":"72db872b-6e69-4996-8f34-60721369e151","resourceVersion":"542","creationTimestamp":"2024-05-01T02:30:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-869300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"functional-869300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T02_30_44_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-01T02:30:40Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0501 02:33:40.200645    3644 pod_ready.go:92] pod "kube-proxy-nm4lg" in "kube-system" namespace has status "Ready":"True"
	I0501 02:33:40.200645    3644 pod_ready.go:81] duration metric: took 389.8129ms for pod "kube-proxy-nm4lg" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:40.200645    3644 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-869300" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:40.400310    3644 request.go:629] Waited for 199.4638ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.218.182:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-869300
	I0501 02:33:40.400503    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-869300
	I0501 02:33:40.400503    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:40.400503    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:40.400503    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:40.410225    3644 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0501 02:33:40.410225    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:40.410225    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:40.410225    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:40.410225    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:40.410225    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:40.410225    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:40 GMT
	I0501 02:33:40.410225    3644 round_trippers.go:580]     Audit-Id: f7e2c52c-8a10-46bf-8bb5-13d5032f0c8f
	I0501 02:33:40.410225    3644 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-869300","namespace":"kube-system","uid":"f14921a5-1739-4cf2-a4ef-e06560da308a","resourceVersion":"614","creationTimestamp":"2024-05-01T02:30:43Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7a9496a7ae40ea1ca5f6a9272443601b","kubernetes.io/config.mirror":"7a9496a7ae40ea1ca5f6a9272443601b","kubernetes.io/config.seen":"2024-05-01T02:30:35.227585854Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-869300","uid":"72db872b-6e69-4996-8f34-60721369e151","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T02:30:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5459 chars]
	I0501 02:33:40.604293    3644 request.go:629] Waited for 192.4016ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.218.182:8441/api/v1/nodes/functional-869300
	I0501 02:33:40.604638    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/nodes/functional-869300
	I0501 02:33:40.604876    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:40.605160    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:40.605276    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:40.611549    3644 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:33:40.612097    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:40.612097    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:40.612097    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:40.612097    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:40 GMT
	I0501 02:33:40.612097    3644 round_trippers.go:580]     Audit-Id: 27836ddb-2eda-4912-9164-90e2b87b718c
	I0501 02:33:40.612097    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:40.612097    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:40.612621    3644 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-869300","uid":"72db872b-6e69-4996-8f34-60721369e151","resourceVersion":"542","creationTimestamp":"2024-05-01T02:30:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-869300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"functional-869300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T02_30_44_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-01T02:30:40Z","fieldsType":"FieldsV1", [truncated 4788 chars]
	I0501 02:33:40.613413    3644 pod_ready.go:92] pod "kube-scheduler-functional-869300" in "kube-system" namespace has status "Ready":"True"
	I0501 02:33:40.613489    3644 pod_ready.go:81] duration metric: took 412.841ms for pod "kube-scheduler-functional-869300" in "kube-system" namespace to be "Ready" ...
	I0501 02:33:40.613489    3644 pod_ready.go:38] duration metric: took 2.3753489s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
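The pod_ready.go waiters above key off each pod's Ready condition in its status. A minimal client-go sketch of that kind of check follows; the kubeconfig path and the use of a one-shot Get are illustrative assumptions, not minikube's actual implementation:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True —
// the same signal the pod_ready.go waiters in the log poll for.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Hypothetical kubeconfig path; minikube manages its own.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := client.CoreV1().Pods("kube-system").Get(
		context.TODO(), "kube-scheduler-functional-869300", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("ready:", isPodReady(pod))
}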
	I0501 02:33:40.613780    3644 api_server.go:52] waiting for apiserver process to appear ...
	I0501 02:33:40.634814    3644 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:33:40.664666    3644 command_runner.go:130] > 6110
	I0501 02:33:40.664666    3644 api_server.go:72] duration metric: took 2.8338819s to wait for apiserver process to appear ...
	I0501 02:33:40.664666    3644 api_server.go:88] waiting for apiserver healthz status ...
	I0501 02:33:40.664666    3644 api_server.go:253] Checking apiserver healthz at https://172.28.218.182:8441/healthz ...
	I0501 02:33:40.672336    3644 api_server.go:279] https://172.28.218.182:8441/healthz returned 200:
	ok
	I0501 02:33:40.673337    3644 round_trippers.go:463] GET https://172.28.218.182:8441/version
	I0501 02:33:40.673337    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:40.673337    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:40.673337    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:40.674340    3644 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0501 02:33:40.674340    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:40.674340    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:40.674340    3644 round_trippers.go:580]     Content-Length: 263
	I0501 02:33:40.674340    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:40 GMT
	I0501 02:33:40.674340    3644 round_trippers.go:580]     Audit-Id: 2e000d62-53c1-43cc-8387-a45201a41799
	I0501 02:33:40.674340    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:40.674340    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:40.674340    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:40.674340    3644 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0501 02:33:40.674340    3644 api_server.go:141] control plane version: v1.30.0
	I0501 02:33:40.675337    3644 api_server.go:131] duration metric: took 10.6708ms to wait for apiserver health ...
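The health wait above is a plain HTTPS GET against /healthz that expects status 200 and the literal body "ok", followed by a GET of /version. A self-contained sketch of such a probe (certificate verification is skipped here purely for brevity; a real client should trust the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Sketch only: InsecureSkipVerify stands in for loading the
	// cluster CA, which the real health checker would verify against.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://172.28.218.182:8441/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with the body "ok".
	fmt.Println(resp.StatusCode, string(body))
}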
	I0501 02:33:40.675337    3644 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 02:33:40.792513    3644 request.go:629] Waited for 117.1749ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.218.182:8441/api/v1/namespaces/kube-system/pods
	I0501 02:33:40.792924    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/namespaces/kube-system/pods
	I0501 02:33:40.792924    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:40.792924    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:40.792924    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:40.798514    3644 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:33:40.798514    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:40.798687    3644 round_trippers.go:580]     Audit-Id: 93f69e21-7e14-4a9b-a0b6-b407099b58bd
	I0501 02:33:40.798687    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:40.798687    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:40.798687    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:40.798687    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:40.798687    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:40 GMT
	I0501 02:33:40.800106    3644 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"620"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-grgws","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2cb6073b-581f-4a8c-ad4d-03ab7adb3bd7","resourceVersion":"605","creationTimestamp":"2024-05-01T02:30:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"cb5d9ecf-889a-47fa-9682-5b3b356aab5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T02:30:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cb5d9ecf-889a-47fa-9682-5b3b356aab5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 51143 chars]
	I0501 02:33:40.802723    3644 system_pods.go:59] 7 kube-system pods found
	I0501 02:33:40.802828    3644 system_pods.go:61] "coredns-7db6d8ff4d-grgws" [2cb6073b-581f-4a8c-ad4d-03ab7adb3bd7] Running
	I0501 02:33:40.802828    3644 system_pods.go:61] "etcd-functional-869300" [92c3081c-f2d2-456b-b008-17e3a3fa0bca] Running
	I0501 02:33:40.802828    3644 system_pods.go:61] "kube-apiserver-functional-869300" [26b992bd-47b9-458e-a683-a136e4e028eb] Running
	I0501 02:33:40.802828    3644 system_pods.go:61] "kube-controller-manager-functional-869300" [a58b04e9-38b0-4af3-821a-2a04476a138a] Running
	I0501 02:33:40.802828    3644 system_pods.go:61] "kube-proxy-nm4lg" [0488ff0b-d57b-4955-9562-06da35c1d8c2] Running
	I0501 02:33:40.802828    3644 system_pods.go:61] "kube-scheduler-functional-869300" [f14921a5-1739-4cf2-a4ef-e06560da308a] Running
	I0501 02:33:40.802828    3644 system_pods.go:61] "storage-provisioner" [3400f4a7-b325-4236-a464-0c0c871fd3b7] Running
	I0501 02:33:40.802828    3644 system_pods.go:74] duration metric: took 127.4904ms to wait for pod list to return data ...
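The repeated "Waited ... due to client-side throttling" lines come from client-go's own rate limiter, not from API Priority and Fairness on the server. A sketch of the same pod listing with the limiter loosened (kubeconfig path is a placeholder; client-go's defaults are QPS=5, Burst=10):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	// Raising QPS/Burst shortens the client-side throttling waits
	// visible in the log above.
	cfg.QPS = 50
	cfg.Burst = 100
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s\t%s\n", p.Name, p.Status.Phase)
	}
}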
	I0501 02:33:40.802828    3644 default_sa.go:34] waiting for default service account to be created ...
	I0501 02:33:40.999298    3644 request.go:629] Waited for 196.4678ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.218.182:8441/api/v1/namespaces/default/serviceaccounts
	I0501 02:33:40.999298    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/namespaces/default/serviceaccounts
	I0501 02:33:40.999298    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:40.999298    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:40.999298    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:41.002875    3644 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:41.003894    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:41.003894    3644 round_trippers.go:580]     Audit-Id: 7df5d571-3635-4a8b-97c3-5b49948f443a
	I0501 02:33:41.003894    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:41.003894    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:41.003974    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:41.003974    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:41.003974    3644 round_trippers.go:580]     Content-Length: 261
	I0501 02:33:41.003974    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:41 GMT
	I0501 02:33:41.003974    3644 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"620"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"583d3a47-9ea7-44a2-afa0-30f1f0592d98","resourceVersion":"354","creationTimestamp":"2024-05-01T02:30:57Z"}}]}
	I0501 02:33:41.004482    3644 default_sa.go:45] found service account: "default"
	I0501 02:33:41.004574    3644 default_sa.go:55] duration metric: took 201.6524ms for default service account to be created ...
	I0501 02:33:41.004574    3644 system_pods.go:116] waiting for k8s-apps to be running ...
	I0501 02:33:41.204311    3644 request.go:629] Waited for 199.6456ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.218.182:8441/api/v1/namespaces/kube-system/pods
	I0501 02:33:41.204311    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/namespaces/kube-system/pods
	I0501 02:33:41.204311    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:41.204311    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:41.204311    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:41.208900    3644 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:33:41.208900    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:41.208900    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:41.208900    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:41.208900    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:41.209437    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:41.209437    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:41 GMT
	I0501 02:33:41.209437    3644 round_trippers.go:580]     Audit-Id: 2567c412-5968-4ca4-ac37-8d0050b5876e
	I0501 02:33:41.210315    3644 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"620"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-grgws","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"2cb6073b-581f-4a8c-ad4d-03ab7adb3bd7","resourceVersion":"605","creationTimestamp":"2024-05-01T02:30:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"cb5d9ecf-889a-47fa-9682-5b3b356aab5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T02:30:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cb5d9ecf-889a-47fa-9682-5b3b356aab5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 51143 chars]
	I0501 02:33:41.212973    3644 system_pods.go:86] 7 kube-system pods found
	I0501 02:33:41.212973    3644 system_pods.go:89] "coredns-7db6d8ff4d-grgws" [2cb6073b-581f-4a8c-ad4d-03ab7adb3bd7] Running
	I0501 02:33:41.212973    3644 system_pods.go:89] "etcd-functional-869300" [92c3081c-f2d2-456b-b008-17e3a3fa0bca] Running
	I0501 02:33:41.212973    3644 system_pods.go:89] "kube-apiserver-functional-869300" [26b992bd-47b9-458e-a683-a136e4e028eb] Running
	I0501 02:33:41.212973    3644 system_pods.go:89] "kube-controller-manager-functional-869300" [a58b04e9-38b0-4af3-821a-2a04476a138a] Running
	I0501 02:33:41.212973    3644 system_pods.go:89] "kube-proxy-nm4lg" [0488ff0b-d57b-4955-9562-06da35c1d8c2] Running
	I0501 02:33:41.212973    3644 system_pods.go:89] "kube-scheduler-functional-869300" [f14921a5-1739-4cf2-a4ef-e06560da308a] Running
	I0501 02:33:41.212973    3644 system_pods.go:89] "storage-provisioner" [3400f4a7-b325-4236-a464-0c0c871fd3b7] Running
	I0501 02:33:41.212973    3644 system_pods.go:126] duration metric: took 208.3968ms to wait for k8s-apps to be running ...
	I0501 02:33:41.212973    3644 system_svc.go:44] waiting for kubelet service to be running ....
	I0501 02:33:41.227155    3644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:33:41.259864    3644 system_svc.go:56] duration metric: took 46.8914ms WaitForService to wait for kubelet
	I0501 02:33:41.259864    3644 kubeadm.go:576] duration metric: took 3.4290758s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 02:33:41.259864    3644 node_conditions.go:102] verifying NodePressure condition ...
	I0501 02:33:41.393587    3644 request.go:629] Waited for 133.7216ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.218.182:8441/api/v1/nodes
	I0501 02:33:41.393904    3644 round_trippers.go:463] GET https://172.28.218.182:8441/api/v1/nodes
	I0501 02:33:41.393979    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:41.393979    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:41.393979    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:41.399634    3644 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:33:41.400125    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:41.400125    3644 round_trippers.go:580]     Audit-Id: 9fd97f07-6635-4a81-9335-abd1019b2f6f
	I0501 02:33:41.400125    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:41.400125    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:41.400125    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:41.400125    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:41.400125    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:41 GMT
	I0501 02:33:41.400622    3644 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"620"},"items":[{"metadata":{"name":"functional-869300","uid":"72db872b-6e69-4996-8f34-60721369e151","resourceVersion":"542","creationTimestamp":"2024-05-01T02:30:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-869300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"functional-869300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T02_30_44_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedF
ields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4841 chars]
	I0501 02:33:41.401193    3644 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 02:33:41.401258    3644 node_conditions.go:123] node cpu capacity is 2
	I0501 02:33:41.401258    3644 node_conditions.go:105] duration metric: took 141.3926ms to run NodePressure ...
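The NodePressure verification reads each node's capacity and pressure conditions from the NodeList fetched above. A minimal sketch of reading those same fields (again with a placeholder kubeconfig path):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		// These are the "node cpu capacity" / "ephemeral capacity"
		// values the node_conditions.go lines report.
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure {
				fmt.Printf("  %s=%s\n", c.Type, c.Status)
			}
		}
	}
}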
	I0501 02:33:41.401258    3644 start.go:240] waiting for startup goroutines ...
	I0501 02:33:42.359795    3644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:33:42.359948    3644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:33:42.359948    3644 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:33:42.359948    3644 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:33:42.360078    3644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-869300 ).networkadapters[0]).ipaddresses[0]
	I0501 02:33:42.360078    3644 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0501 02:33:42.360078    3644 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0501 02:33:42.360078    3644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-869300 ).state
	I0501 02:33:44.605584    3644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:33:44.605584    3644 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:33:44.605584    3644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-869300 ).networkadapters[0]).ipaddresses[0]
	I0501 02:33:44.996616    3644 main.go:141] libmachine: [stdout =====>] : 172.28.218.182
	
	I0501 02:33:44.997124    3644 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:33:44.997757    3644 sshutil.go:53] new ssh client: &{IP:172.28.218.182 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-869300\id_rsa Username:docker}
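The libmachine "[executing ==>]" lines show the Hyper-V driver shelling out to PowerShell to read the VM's state and first IP address. A sketch of that pattern in Go, mirroring the exact PowerShell expression logged above (this is an illustration of the mechanism, not minikube's driver code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// vmIPAddress reads the first IP of the VM's first NIC via
// PowerShell, as the libmachine log lines above do.
func vmIPAddress(vmName string) (string, error) {
	cmd := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive",
		fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vmName))
	out, err := cmd.Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	ip, err := vmIPAddress("functional-869300")
	if err != nil {
		panic(err)
	}
	fmt.Println("VM IP:", ip)
}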
	I0501 02:33:45.141718    3644 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 02:33:46.070687    3644 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0501 02:33:46.070791    3644 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0501 02:33:46.070791    3644 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0501 02:33:46.070791    3644 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0501 02:33:46.070791    3644 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0501 02:33:46.070791    3644 command_runner.go:130] > pod/storage-provisioner configured
	I0501 02:33:47.214028    3644 main.go:141] libmachine: [stdout =====>] : 172.28.218.182
	
	I0501 02:33:47.214028    3644 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:33:47.214740    3644 sshutil.go:53] new ssh client: &{IP:172.28.218.182 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-869300\id_rsa Username:docker}
	I0501 02:33:47.358692    3644 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0501 02:33:47.552315    3644 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0501 02:33:47.552664    3644 round_trippers.go:463] GET https://172.28.218.182:8441/apis/storage.k8s.io/v1/storageclasses
	I0501 02:33:47.552726    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:47.552726    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:47.552726    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:47.556497    3644 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:47.556864    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:47.556864    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:47.556864    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:47.556864    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:47.556864    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:47.556864    3644 round_trippers.go:580]     Content-Length: 1273
	I0501 02:33:47.556921    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:47 GMT
	I0501 02:33:47.556921    3644 round_trippers.go:580]     Audit-Id: 20dcfd78-fb9e-4fcf-86ef-782809c4b320
	I0501 02:33:47.556983    3644 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"625"},"items":[{"metadata":{"name":"standard","uid":"8e50a743-5e4d-44ad-bef9-dbe03bf2ddd5","resourceVersion":"431","creationTimestamp":"2024-05-01T02:31:07Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-01T02:31:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0501 02:33:47.557775    3644 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"8e50a743-5e4d-44ad-bef9-dbe03bf2ddd5","resourceVersion":"431","creationTimestamp":"2024-05-01T02:31:07Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-01T02:31:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0501 02:33:47.557871    3644 round_trippers.go:463] PUT https://172.28.218.182:8441/apis/storage.k8s.io/v1/storageclasses/standard
	I0501 02:33:47.557871    3644 round_trippers.go:469] Request Headers:
	I0501 02:33:47.557906    3644 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:33:47.557906    3644 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:33:47.557906    3644 round_trippers.go:473]     Content-Type: application/json
	I0501 02:33:47.561892    3644 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:33:47.562641    3644 round_trippers.go:577] Response Headers:
	I0501 02:33:47.562685    3644 round_trippers.go:580]     Audit-Id: a766b4d3-cf7f-4c38-a71b-d28f2238ad35
	I0501 02:33:47.562685    3644 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 02:33:47.562685    3644 round_trippers.go:580]     Content-Type: application/json
	I0501 02:33:47.562685    3644 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1716fdb4-f6e3-4a33-9d52-9d573288f692
	I0501 02:33:47.562685    3644 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3f3ed82-139f-43d8-8390-cc8117339f7d
	I0501 02:33:47.562685    3644 round_trippers.go:580]     Content-Length: 1220
	I0501 02:33:47.562685    3644 round_trippers.go:580]     Date: Wed, 01 May 2024 02:33:47 GMT
	I0501 02:33:47.562913    3644 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"8e50a743-5e4d-44ad-bef9-dbe03bf2ddd5","resourceVersion":"431","creationTimestamp":"2024-05-01T02:31:07Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-01T02:31:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0501 02:33:47.567502    3644 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0501 02:33:47.569913    3644 addons.go:505] duration metric: took 9.7392119s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0501 02:33:47.569913    3644 start.go:245] waiting for cluster config update ...
	I0501 02:33:47.569913    3644 start.go:254] writing updated cluster config ...
	I0501 02:33:47.586774    3644 ssh_runner.go:195] Run: rm -f paused
	I0501 02:33:47.752464    3644 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0501 02:33:47.755863    3644 out.go:177] * Done! kubectl is now configured to use "functional-869300" cluster and "default" namespace by default
	
	
	==> Docker <==
	May 01 02:33:22 functional-869300 dockerd[4268]: time="2024-05-01T02:33:22.673033101Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 02:33:22 functional-869300 dockerd[4268]: time="2024-05-01T02:33:22.673049101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 02:33:22 functional-869300 dockerd[4268]: time="2024-05-01T02:33:22.673184506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 02:33:22 functional-869300 dockerd[4268]: time="2024-05-01T02:33:22.683947013Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 02:33:22 functional-869300 dockerd[4268]: time="2024-05-01T02:33:22.684005615Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 02:33:22 functional-869300 dockerd[4268]: time="2024-05-01T02:33:22.684018916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 02:33:22 functional-869300 dockerd[4268]: time="2024-05-01T02:33:22.684104519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 02:33:25 functional-869300 cri-dockerd[4489]: time="2024-05-01T02:33:25Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	May 01 02:33:26 functional-869300 dockerd[4268]: time="2024-05-01T02:33:26.536716189Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 02:33:26 functional-869300 dockerd[4268]: time="2024-05-01T02:33:26.536826792Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 02:33:26 functional-869300 dockerd[4268]: time="2024-05-01T02:33:26.536841993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 02:33:26 functional-869300 dockerd[4268]: time="2024-05-01T02:33:26.537528413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 02:33:26 functional-869300 dockerd[4268]: time="2024-05-01T02:33:26.668407682Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 02:33:26 functional-869300 dockerd[4268]: time="2024-05-01T02:33:26.668487185Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 02:33:26 functional-869300 dockerd[4268]: time="2024-05-01T02:33:26.668500985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 02:33:26 functional-869300 dockerd[4268]: time="2024-05-01T02:33:26.668606988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 02:33:26 functional-869300 dockerd[4268]: time="2024-05-01T02:33:26.774724106Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 02:33:26 functional-869300 dockerd[4268]: time="2024-05-01T02:33:26.775094317Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 02:33:26 functional-869300 dockerd[4268]: time="2024-05-01T02:33:26.775417827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 02:33:26 functional-869300 dockerd[4268]: time="2024-05-01T02:33:26.776909272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 02:33:27 functional-869300 cri-dockerd[4489]: time="2024-05-01T02:33:27Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/361afafae69b8670150464136c5e16d3c09490330f4aed40315fbebd8a3b54df/resolv.conf as [nameserver 172.28.208.1]"
	May 01 02:33:27 functional-869300 dockerd[4268]: time="2024-05-01T02:33:27.495311352Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 02:33:27 functional-869300 dockerd[4268]: time="2024-05-01T02:33:27.495563659Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 02:33:27 functional-869300 dockerd[4268]: time="2024-05-01T02:33:27.495664762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 02:33:27 functional-869300 dockerd[4268]: time="2024-05-01T02:33:27.496116675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f52f10521addd       cbb01a7bd410d       2 minutes ago       Running             coredns                   1                   361afafae69b8       coredns-7db6d8ff4d-grgws
	9e36685e18369       a0bf559e280cf       2 minutes ago       Running             kube-proxy                2                   c99ff909b14bd       kube-proxy-nm4lg
	0a0020b36089b       6e38f40d628db       2 minutes ago       Running             storage-provisioner       2                   89ed0fa098eb1       storage-provisioner
	dfd50257f9ef7       c7aad43836fa5       2 minutes ago       Running             kube-controller-manager   2                   215cf87170592       kube-controller-manager-functional-869300
	5b74b7066b97e       c42f13656d0b2       2 minutes ago       Running             kube-apiserver            2                   974bd0fb6d68e       kube-apiserver-functional-869300
	ac9aea739009e       3861cfcd7c04c       2 minutes ago       Running             etcd                      2                   2dc362888686f       etcd-functional-869300
	4468327d70047       259c8277fcbbc       2 minutes ago       Running             kube-scheduler            2                   b8d1ef7662d27       kube-scheduler-functional-869300
	f8ff73edf1ac6       3861cfcd7c04c       2 minutes ago       Created             etcd                      1                   6e9245fa440a4       etcd-functional-869300
	2a6ef4551e626       c42f13656d0b2       2 minutes ago       Created             kube-apiserver            1                   31d057ffba21a       kube-apiserver-functional-869300
	fc83dd83e08d0       a0bf559e280cf       2 minutes ago       Created             kube-proxy                1                   7799ac956b9bd       kube-proxy-nm4lg
	afe40a9500425       259c8277fcbbc       2 minutes ago       Created             kube-scheduler            1                   a66ca1e37bf9c       kube-scheduler-functional-869300
	0c0b917d01a4c       c7aad43836fa5       2 minutes ago       Exited              kube-controller-manager   1                   66881136335a4       kube-controller-manager-functional-869300
	1bb8467492fed       6e38f40d628db       2 minutes ago       Exited              storage-provisioner       1                   de12b941ee693       storage-provisioner
	dba9dd497a4ee       cbb01a7bd410d       4 minutes ago       Exited              coredns                   0                   cbf619f6f4569       coredns-7db6d8ff4d-grgws
	
	
	==> coredns [dba9dd497a4e] <==
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[914183340]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (01-May-2024 02:30:59.692) (total time: 30001ms):
	Trace[914183340]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (02:31:29.693)
	Trace[914183340]: [30.001001863s] [30.001001863s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[2060821219]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (01-May-2024 02:30:59.692) (total time: 30001ms):
	Trace[2060821219]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (02:31:29.693)
	Trace[2060821219]: [30.001197764s] [30.001197764s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1975374954]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (01-May-2024 02:30:59.689) (total time: 30004ms):
	Trace[1975374954]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30004ms (02:31:29.694)
	Trace[1975374954]: [30.004762485s] [30.004762485s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 93f4fe825695a41d5751760d66496ca9593f0bf8b9ec4239a6fbc33c30b6f070cfe5a066c5977052ab0ea80abbafa6083758e385108cb741ae48dc4b780ff0f9
	[INFO] Reloading complete
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f52f10521add] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 93f4fe825695a41d5751760d66496ca9593f0bf8b9ec4239a6fbc33c30b6f070cfe5a066c5977052ab0ea80abbafa6083758e385108cb741ae48dc4b780ff0f9
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:52590 - 6308 "HINFO IN 6117631920194698684.6088156419421042247. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.065928289s
	
	
	==> describe nodes <==
	Name:               functional-869300
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-869300
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	                    minikube.k8s.io/name=functional-869300
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_01T02_30_44_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 May 2024 02:30:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-869300
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 May 2024 02:35:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 May 2024 02:35:28 +0000   Wed, 01 May 2024 02:30:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 May 2024 02:35:28 +0000   Wed, 01 May 2024 02:30:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 May 2024 02:35:28 +0000   Wed, 01 May 2024 02:30:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 May 2024 02:35:28 +0000   Wed, 01 May 2024 02:30:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.218.182
	  Hostname:    functional-869300
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	System Info:
	  Machine ID:                 0b17f1d9c1234ef0b85d35870d5896e3
	  System UUID:                3dc46725-3d60-3949-af77-34b2ed0d7bd5
	  Boot ID:                    3fef61a9-d2e8-40dc-9a49-63e0d879d51f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-grgws                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m37s
	  kube-system                 etcd-functional-869300                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m51s
	  kube-system                 kube-apiserver-functional-869300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 kube-controller-manager-functional-869300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 kube-proxy-nm4lg                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 kube-scheduler-functional-869300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m35s                  kube-proxy       
	  Normal  Starting                 2m7s                   kube-proxy       
	  Normal  NodeHasSufficientPID     4m59s (x7 over 4m59s)  kubelet          Node functional-869300 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    4m59s (x8 over 4m59s)  kubelet          Node functional-869300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  4m59s (x8 over 4m59s)  kubelet          Node functional-869300 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  4m59s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m51s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m51s                  kubelet          Node functional-869300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m51s                  kubelet          Node functional-869300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m51s                  kubelet          Node functional-869300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m46s                  kubelet          Node functional-869300 status is now: NodeReady
	  Normal  RegisteredNode           4m38s                  node-controller  Node functional-869300 event: Registered Node functional-869300 in Controller
	  Normal  Starting                 2m14s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m13s (x8 over 2m13s)  kubelet          Node functional-869300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m13s (x8 over 2m13s)  kubelet          Node functional-869300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m13s (x7 over 2m13s)  kubelet          Node functional-869300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           116s                   node-controller  Node functional-869300 event: Registered Node functional-869300 in Controller
	
	
	==> dmesg <==
	[  +0.569917] systemd-fstab-generator[1526]: Ignoring "noauto" option for root device
	[  +6.883872] systemd-fstab-generator[1724]: Ignoring "noauto" option for root device
	[  +0.111247] kauditd_printk_skb: 24 callbacks suppressed
	[  +8.542857] systemd-fstab-generator[2132]: Ignoring "noauto" option for root device
	[  +0.161697] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.500536] systemd-fstab-generator[2355]: Ignoring "noauto" option for root device
	[  +0.210257] kauditd_printk_skb: 12 callbacks suppressed
	[May 1 02:31] kauditd_printk_skb: 89 callbacks suppressed
	[ +32.164841] kauditd_printk_skb: 10 callbacks suppressed
	[May 1 02:32] systemd-fstab-generator[3776]: Ignoring "noauto" option for root device
	[  +0.744400] systemd-fstab-generator[3831]: Ignoring "noauto" option for root device
	[  +0.306512] systemd-fstab-generator[3842]: Ignoring "noauto" option for root device
	[  +0.358081] systemd-fstab-generator[3856]: Ignoring "noauto" option for root device
	[May 1 02:33] kauditd_printk_skb: 89 callbacks suppressed
	[  +8.061610] systemd-fstab-generator[4437]: Ignoring "noauto" option for root device
	[  +0.255895] systemd-fstab-generator[4449]: Ignoring "noauto" option for root device
	[  +0.250903] systemd-fstab-generator[4461]: Ignoring "noauto" option for root device
	[  +0.368771] systemd-fstab-generator[4476]: Ignoring "noauto" option for root device
	[  +1.006986] systemd-fstab-generator[4637]: Ignoring "noauto" option for root device
	[  +0.150675] kauditd_printk_skb: 118 callbacks suppressed
	[  +5.861097] systemd-fstab-generator[5697]: Ignoring "noauto" option for root device
	[  +0.141085] kauditd_printk_skb: 83 callbacks suppressed
	[  +5.892535] kauditd_printk_skb: 47 callbacks suppressed
	[ +11.318849] systemd-fstab-generator[6556]: Ignoring "noauto" option for root device
	[  +0.180917] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [ac9aea739009] <==
	{"level":"info","ts":"2024-05-01T02:33:23.101725Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-01T02:33:23.102052Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-01T02:33:23.103817Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9c4aa4729b6dacad switched to configuration voters=(11261994630334229677)"}
	{"level":"info","ts":"2024-05-01T02:33:23.107574Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"5eb855952e201df2","local-member-id":"9c4aa4729b6dacad","added-peer-id":"9c4aa4729b6dacad","added-peer-peer-urls":["https://172.28.218.182:2380"]}
	{"level":"info","ts":"2024-05-01T02:33:23.107952Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"5eb855952e201df2","local-member-id":"9c4aa4729b6dacad","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-01T02:33:23.10859Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-01T02:33:23.111963Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-01T02:33:23.114754Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"9c4aa4729b6dacad","initial-advertise-peer-urls":["https://172.28.218.182:2380"],"listen-peer-urls":["https://172.28.218.182:2380"],"advertise-client-urls":["https://172.28.218.182:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.28.218.182:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-01T02:33:23.116407Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-01T02:33:23.111988Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.28.218.182:2380"}
	{"level":"info","ts":"2024-05-01T02:33:23.116719Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.28.218.182:2380"}
	{"level":"info","ts":"2024-05-01T02:33:24.011206Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9c4aa4729b6dacad is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-01T02:33:24.011493Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9c4aa4729b6dacad became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-01T02:33:24.011631Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9c4aa4729b6dacad received MsgPreVoteResp from 9c4aa4729b6dacad at term 2"}
	{"level":"info","ts":"2024-05-01T02:33:24.011791Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9c4aa4729b6dacad became candidate at term 3"}
	{"level":"info","ts":"2024-05-01T02:33:24.01193Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9c4aa4729b6dacad received MsgVoteResp from 9c4aa4729b6dacad at term 3"}
	{"level":"info","ts":"2024-05-01T02:33:24.012117Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9c4aa4729b6dacad became leader at term 3"}
	{"level":"info","ts":"2024-05-01T02:33:24.012571Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9c4aa4729b6dacad elected leader 9c4aa4729b6dacad at term 3"}
	{"level":"info","ts":"2024-05-01T02:33:24.021837Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"9c4aa4729b6dacad","local-member-attributes":"{Name:functional-869300 ClientURLs:[https://172.28.218.182:2379]}","request-path":"/0/members/9c4aa4729b6dacad/attributes","cluster-id":"5eb855952e201df2","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-01T02:33:24.021848Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-01T02:33:24.021904Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-01T02:33:24.026071Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-01T02:33:24.026419Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-01T02:33:24.028132Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.28.218.182:2379"}
	{"level":"info","ts":"2024-05-01T02:33:24.030518Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [f8ff73edf1ac] <==
	
	
	==> kernel <==
	 02:35:35 up 7 min,  0 users,  load average: 0.67, 0.73, 0.36
	Linux functional-869300 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2a6ef4551e62] <==
	
	
	==> kube-apiserver [5b74b7066b97] <==
	I0501 02:33:25.755791       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0501 02:33:25.755841       1 shared_informer.go:320] Caches are synced for configmaps
	I0501 02:33:25.764562       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0501 02:33:25.773476       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0501 02:33:25.776002       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0501 02:33:25.776162       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0501 02:33:25.778024       1 aggregator.go:165] initial CRD sync complete...
	I0501 02:33:25.778060       1 autoregister_controller.go:141] Starting autoregister controller
	I0501 02:33:25.778067       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0501 02:33:25.778073       1 cache.go:39] Caches are synced for autoregister controller
	I0501 02:33:25.789149       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0501 02:33:25.789186       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0501 02:33:25.825270       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0501 02:33:25.825307       1 policy_source.go:224] refreshing policies
	I0501 02:33:25.825614       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0501 02:33:25.894870       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0501 02:33:26.691030       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0501 02:33:27.742068       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.28.218.182]
	I0501 02:33:27.743710       1 controller.go:615] quota admission added evaluator for: endpoints
	I0501 02:33:27.949915       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0501 02:33:27.987346       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0501 02:33:28.063975       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0501 02:33:28.133999       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0501 02:33:28.145808       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0501 02:33:38.555939       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [0c0b917d01a4] <==
	
	
	==> kube-controller-manager [dfd50257f9ef] <==
	I0501 02:33:38.567727       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0501 02:33:38.568258       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0501 02:33:38.571529       1 shared_informer.go:320] Caches are synced for node
	I0501 02:33:38.571930       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0501 02:33:38.572234       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0501 02:33:38.573001       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0501 02:33:38.573013       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0501 02:33:38.573238       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0501 02:33:38.574463       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0501 02:33:38.573668       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0501 02:33:38.580737       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0501 02:33:38.653127       1 shared_informer.go:320] Caches are synced for daemon sets
	I0501 02:33:38.679092       1 shared_informer.go:320] Caches are synced for taint
	I0501 02:33:38.679576       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0501 02:33:38.679812       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-869300"
	I0501 02:33:38.680042       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0501 02:33:38.699178       1 shared_informer.go:320] Caches are synced for deployment
	I0501 02:33:38.704206       1 shared_informer.go:320] Caches are synced for disruption
	I0501 02:33:38.740210       1 shared_informer.go:320] Caches are synced for resource quota
	I0501 02:33:38.752613       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0501 02:33:38.752956       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="132.001µs"
	I0501 02:33:38.780304       1 shared_informer.go:320] Caches are synced for resource quota
	I0501 02:33:39.142312       1 shared_informer.go:320] Caches are synced for garbage collector
	I0501 02:33:39.142534       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0501 02:33:39.199974       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-proxy [9e36685e1836] <==
	I0501 02:33:27.046801       1 server_linux.go:69] "Using iptables proxy"
	I0501 02:33:27.064885       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.28.218.182"]
	I0501 02:33:27.180343       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0501 02:33:27.182323       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0501 02:33:27.182390       1 server_linux.go:165] "Using iptables Proxier"
	I0501 02:33:27.198748       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0501 02:33:27.199063       1 server.go:872] "Version info" version="v1.30.0"
	I0501 02:33:27.199081       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 02:33:27.202434       1 config.go:192] "Starting service config controller"
	I0501 02:33:27.203303       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0501 02:33:27.203334       1 config.go:101] "Starting endpoint slice config controller"
	I0501 02:33:27.203341       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0501 02:33:27.203981       1 config.go:319] "Starting node config controller"
	I0501 02:33:27.203990       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0501 02:33:27.303447       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0501 02:33:27.303847       1 shared_informer.go:320] Caches are synced for service config
	I0501 02:33:27.304906       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [fc83dd83e08d] <==
	
	
	==> kube-scheduler [4468327d7004] <==
	I0501 02:33:22.852207       1 serving.go:380] Generated self-signed cert in-memory
	I0501 02:33:25.822612       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0501 02:33:25.822718       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 02:33:25.832834       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0501 02:33:25.832995       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0501 02:33:25.833006       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0501 02:33:25.833027       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0501 02:33:25.838004       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0501 02:33:25.838047       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0501 02:33:25.838067       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0501 02:33:25.838076       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0501 02:33:25.935088       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0501 02:33:25.938995       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0501 02:33:25.939318       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kube-scheduler [afe40a950042] <==
	
	
	==> kubelet <==
	May 01 02:33:25 functional-869300 kubelet[5704]: I0501 02:33:25.933103    5704 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	May 01 02:33:25 functional-869300 kubelet[5704]: I0501 02:33:25.934134    5704 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	May 01 02:33:25 functional-869300 kubelet[5704]: I0501 02:33:25.975244    5704 apiserver.go:52] "Watching apiserver"
	May 01 02:33:25 functional-869300 kubelet[5704]: I0501 02:33:25.979576    5704 topology_manager.go:215] "Topology Admit Handler" podUID="2cb6073b-581f-4a8c-ad4d-03ab7adb3bd7" podNamespace="kube-system" podName="coredns-7db6d8ff4d-grgws"
	May 01 02:33:25 functional-869300 kubelet[5704]: I0501 02:33:25.981119    5704 topology_manager.go:215] "Topology Admit Handler" podUID="0488ff0b-d57b-4955-9562-06da35c1d8c2" podNamespace="kube-system" podName="kube-proxy-nm4lg"
	May 01 02:33:25 functional-869300 kubelet[5704]: I0501 02:33:25.981318    5704 topology_manager.go:215] "Topology Admit Handler" podUID="3400f4a7-b325-4236-a464-0c0c871fd3b7" podNamespace="kube-system" podName="storage-provisioner"
	May 01 02:33:25 functional-869300 kubelet[5704]: I0501 02:33:25.983029    5704 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	May 01 02:33:26 functional-869300 kubelet[5704]: I0501 02:33:26.008593    5704 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0488ff0b-d57b-4955-9562-06da35c1d8c2-xtables-lock\") pod \"kube-proxy-nm4lg\" (UID: \"0488ff0b-d57b-4955-9562-06da35c1d8c2\") " pod="kube-system/kube-proxy-nm4lg"
	May 01 02:33:26 functional-869300 kubelet[5704]: I0501 02:33:26.008853    5704 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0488ff0b-d57b-4955-9562-06da35c1d8c2-lib-modules\") pod \"kube-proxy-nm4lg\" (UID: \"0488ff0b-d57b-4955-9562-06da35c1d8c2\") " pod="kube-system/kube-proxy-nm4lg"
	May 01 02:33:26 functional-869300 kubelet[5704]: I0501 02:33:26.008932    5704 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3400f4a7-b325-4236-a464-0c0c871fd3b7-tmp\") pod \"storage-provisioner\" (UID: \"3400f4a7-b325-4236-a464-0c0c871fd3b7\") " pod="kube-system/storage-provisioner"
	May 01 02:33:26 functional-869300 kubelet[5704]: I0501 02:33:26.281722    5704 scope.go:117] "RemoveContainer" containerID="1bb8467492fed7c22ae0979ceb34b97c033383b53a0ce03a0dbc2957c3f4cb05"
	May 01 02:33:26 functional-869300 kubelet[5704]: I0501 02:33:26.291813    5704 scope.go:117] "RemoveContainer" containerID="fc83dd83e08d037fc950e16f2a73ca0b6ac4b1113a67f14bc1f576c16d7c9c61"
	May 01 02:33:27 functional-869300 kubelet[5704]: I0501 02:33:27.147844    5704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="361afafae69b8670150464136c5e16d3c09490330f4aed40315fbebd8a3b54df"
	May 01 02:33:29 functional-869300 kubelet[5704]: I0501 02:33:29.278513    5704 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	May 01 02:33:35 functional-869300 kubelet[5704]: I0501 02:33:35.763034    5704 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	May 01 02:34:21 functional-869300 kubelet[5704]: E0501 02:34:21.128819    5704 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 02:34:21 functional-869300 kubelet[5704]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 02:34:21 functional-869300 kubelet[5704]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 02:34:21 functional-869300 kubelet[5704]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 02:34:21 functional-869300 kubelet[5704]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 02:35:21 functional-869300 kubelet[5704]: E0501 02:35:21.127158    5704 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 02:35:21 functional-869300 kubelet[5704]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 02:35:21 functional-869300 kubelet[5704]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 02:35:21 functional-869300 kubelet[5704]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 02:35:21 functional-869300 kubelet[5704]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [0a0020b36089] <==
	I0501 02:33:26.770574       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0501 02:33:26.809852       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0501 02:33:26.810928       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0501 02:33:44.231570       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0501 02:33:44.232323       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-869300_8f48d0e1-b2a5-404f-97e3-eecfc49c7e29!
	I0501 02:33:44.232028       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c545b2ce-228f-4edf-8b96-212e9ffa3b1f", APIVersion:"v1", ResourceVersion:"621", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-869300_8f48d0e1-b2a5-404f-97e3-eecfc49c7e29 became leader
	I0501 02:33:44.333522       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-869300_8f48d0e1-b2a5-404f-97e3-eecfc49c7e29!
	
	
	==> storage-provisioner [1bb8467492fe] <==
	I0501 02:33:16.397751       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0501 02:33:16.425885       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0501 02:35:27.004646    2452 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-869300 -n functional-869300
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-869300 -n functional-869300: (12.268659s)
helpers_test.go:261: (dbg) Run:  kubectl --context functional-869300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (34.41s)
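The stderr that breaks this assertion is the same Docker CLI context warning that recurs throughout this run: minikube probes the Docker CLI context store and warns when the meta.json for the "default" context is absent. The long hex directory name in that path appears to be the SHA-256 digest of the context name "default" (an assumption based on how the Docker CLI keys its context store); a minimal Go sketch to check:

	package main

	import (
		"crypto/sha256"
		"fmt"
	)

	func main() {
		// Hypothesis: the directory segment in the missing path is
		// sha256("default"), the name of the Docker CLI context.
		sum := sha256.Sum256([]byte("default"))
		fmt.Printf("%x\n", sum)
		// expected: 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f,
		// matching the directory name in the path the warning complains about
	}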

                                                
                                    
TestFunctional/parallel/ConfigCmd (1.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-869300 config unset cpus" to be -""- but got *"W0501 02:38:37.715675   13800 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-869300 config get cpus: exit status 14 (295.1861ms)

                                                
                                                
** stderr ** 
	W0501 02:38:38.051668   12732 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-869300 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0501 02:38:38.051668   12732 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 config set cpus 2
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-869300 config set cpus 2" to be -"! These changes will take effect upon a minikube delete and then a minikube start"- but got *"W0501 02:38:38.358146    1820 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n! These changes will take effect upon a minikube delete and then a minikube start"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 config get cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-869300 config get cpus" to be -""- but got *"W0501 02:38:38.676972    5456 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-869300 config unset cpus" to be -""- but got *"W0501 02:38:38.981723    4012 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-869300 config get cpus: exit status 14 (275.0251ms)

                                                
                                                
** stderr ** 
	W0501 02:38:39.280999    6172 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-869300 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0501 02:38:39.280999    6172 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
--- FAIL: TestFunctional/parallel/ConfigCmd (1.85s)
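Each step of this test asserts on the command's stderr verbatim, so the unrelated Docker context warning fails even the invocations whose substantive output (for example "Error: specified key could not be found in config") is exactly what was expected. A hypothetical filter, not part of the minikube test suite, illustrating how such comparisons could ignore the known warning:

	package main

	import (
		"fmt"
		"strings"
	)

	// stripContextWarning is an illustrative helper (not minikube code): it
	// drops the known Docker CLI context warning from captured stderr so the
	// remaining lines can be compared against the expected output.
	func stripContextWarning(stderr string) string {
		var kept []string
		for _, line := range strings.Split(stderr, "\n") {
			if strings.Contains(line, "Unable to resolve the current Docker CLI context") {
				continue
			}
			kept = append(kept, line)
		}
		return strings.TrimSpace(strings.Join(kept, "\n"))
	}

	func main() {
		got := "W0501 02:38:38.051668   12732 main.go:291] Unable to resolve the current Docker CLI context \"default\": ...\n" +
			"Error: specified key could not be found in config"
		fmt.Println(stripContextWarning(got))
		// Output: Error: specified key could not be found in config
	}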

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (15.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-869300 service --namespace=default --https --url hello-node: exit status 1 (15.0667116s)

                                                
                                                
** stderr ** 
	W0501 02:39:23.168444   12728 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:1507: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-869300 service --namespace=default --https --url hello-node" : exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (15.07s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (15.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-869300 service hello-node --url --format={{.IP}}: exit status 1 (15.0182458s)

                                                
                                                
** stderr ** 
	W0501 02:39:38.381307   14204 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-windows-amd64.exe -p functional-869300 service hello-node --url --format={{.IP}}": exit status 1
functional_test.go:1544: "" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (15.02s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (15.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-869300 service hello-node --url: exit status 1 (15.0508426s)

                                                
                                                
** stderr ** 
	W0501 02:39:53.251125    8212 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:1557: failed to get service url. args: "out/minikube-windows-amd64.exe -p functional-869300 service hello-node --url": exit status 1
functional_test.go:1561: found endpoint for hello-node: 
functional_test.go:1569: expected scheme to be -"http"- got scheme: *""*
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (15.05s)
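The three ServiceCmd subtests above (HTTPS, Format, URL) fail identically: `minikube service` exits 1 after roughly 15 seconds with nothing on stdout, so the follow-up validation parses an empty endpoint. A small Go sketch of that downstream validation, assumed from the failure messages at functional_test.go:1544 and functional_test.go:1569 (whose source is not shown here):

	package main

	import (
		"fmt"
		"net"
		"net/url"
	)

	func main() {
		endpoint := "" // what the failed `minikube service --url` runs produced
		if net.ParseIP(endpoint) == nil {
			fmt.Printf("%q is not a valid IP\n", endpoint) // mirrors the Format subtest's complaint
		}
		if u, err := url.Parse(endpoint); err != nil || u.Scheme != "http" {
			fmt.Println("missing expected scheme \"http\"") // mirrors the URL subtest's complaint
		}
	}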

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (69.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-136200 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-136200 -- exec busybox-fc5497c4f-2gr4g -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-136200 -- exec busybox-fc5497c4f-2gr4g -- sh -c "ping -c 1 172.28.208.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-136200 -- exec busybox-fc5497c4f-2gr4g -- sh -c "ping -c 1 172.28.208.1": exit status 1 (10.5530683s)

                                                
                                                
-- stdout --
	PING 172.28.208.1 (172.28.208.1): 56 data bytes
	
	--- 172.28.208.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0501 02:59:31.523333   12208 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
ha_test.go:219: Failed to ping host (172.28.208.1) from pod (busybox-fc5497c4f-2gr4g): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-136200 -- exec busybox-fc5497c4f-6mlkh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-136200 -- exec busybox-fc5497c4f-6mlkh -- sh -c "ping -c 1 172.28.208.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-136200 -- exec busybox-fc5497c4f-6mlkh -- sh -c "ping -c 1 172.28.208.1": exit status 1 (10.5875249s)

                                                
                                                
-- stdout --
	PING 172.28.208.1 (172.28.208.1): 56 data bytes
	
	--- 172.28.208.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0501 02:59:42.672359    9624 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
ha_test.go:219: Failed to ping host (172.28.208.1) from pod (busybox-fc5497c4f-6mlkh): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-136200 -- exec busybox-fc5497c4f-pc6wt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-136200 -- exec busybox-fc5497c4f-pc6wt -- sh -c "ping -c 1 172.28.208.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-136200 -- exec busybox-fc5497c4f-pc6wt -- sh -c "ping -c 1 172.28.208.1": exit status 1 (10.5527549s)

                                                
                                                
-- stdout --
	PING 172.28.208.1 (172.28.208.1): 56 data bytes
	
	--- 172.28.208.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0501 02:59:53.824164    2924 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
ha_test.go:219: Failed to ping host (172.28.208.1) from pod (busybox-fc5497c4f-pc6wt): exit status 1
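All three busybox pods resolve host.minikube.internal yet see 100% packet loss pinging the host gateway 172.28.208.1. On the hyperv driver one plausible explanation (an assumption, not established by these logs) is the Windows host firewall dropping inbound ICMPv4 echo requests arriving via the Default Switch, which would fail this probe while leaving DNS and pod-to-pod traffic healthy. A minimal Go reproduction of the probe from ha_test.go:218, issued through plain kubectl with the matching context (an assumption; the test goes through the minikube binary), for rechecking outside the suite:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same probe as ha_test.go:218; the context and pod names are
		// copied from the log above, adjust as needed.
		cmd := exec.Command("kubectl", "--context", "ha-136200",
			"exec", "busybox-fc5497c4f-2gr4g", "--",
			"sh", "-c", "ping -c 1 172.28.208.1")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			// busybox ping exits non-zero when no echo reply arrives, and
			// kubectl propagates that ("command terminated with exit code 1").
			fmt.Println("ping probe failed:", err)
		}
	}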
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-136200 -n ha-136200
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-136200 -n ha-136200: (12.4867412s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-136200 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-136200 logs -n 25: (9.0586626s)
helpers_test.go:252: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| image   | functional-869300                    | functional-869300 | minikube6\jenkins | v1.33.0 | 01 May 24 02:42 UTC | 01 May 24 02:42 UTC |
	|         | image ls --format table              |                   |                   |         |                     |                     |
	|         | --alsologtostderr                    |                   |                   |         |                     |                     |
	| image   | functional-869300 image build -t     | functional-869300 | minikube6\jenkins | v1.33.0 | 01 May 24 02:42 UTC | 01 May 24 02:42 UTC |
	|         | localhost/my-image:functional-869300 |                   |                   |         |                     |                     |
	|         | testdata\build --alsologtostderr     |                   |                   |         |                     |                     |
	| image   | functional-869300 image ls           | functional-869300 | minikube6\jenkins | v1.33.0 | 01 May 24 02:42 UTC | 01 May 24 02:42 UTC |
	| delete  | -p functional-869300                 | functional-869300 | minikube6\jenkins | v1.33.0 | 01 May 24 02:46 UTC | 01 May 24 02:47 UTC |
	| start   | -p ha-136200 --wait=true             | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:47 UTC | 01 May 24 02:58 UTC |
	|         | --memory=2200 --ha                   |                   |                   |         |                     |                     |
	|         | -v=7 --alsologtostderr               |                   |                   |         |                     |                     |
	|         | --driver=hyperv                      |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- apply -f             | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | ./testdata/ha/ha-pod-dns-test.yaml   |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- rollout status       | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | deployment/busybox                   |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- get pods -o          | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- get pods -o          | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-2gr4g --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-6mlkh --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-pc6wt --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-2gr4g --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-6mlkh --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-pc6wt --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-2gr4g -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-6mlkh -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-pc6wt -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- get pods -o          | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-2gr4g              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC |                     |
	|         | busybox-fc5497c4f-2gr4g -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.28.208.1            |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-6mlkh              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC |                     |
	|         | busybox-fc5497c4f-6mlkh -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.28.208.1            |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-pc6wt              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC |                     |
	|         | busybox-fc5497c4f-pc6wt -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.28.208.1            |                   |                   |         |                     |                     |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/01 02:47:19
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0501 02:47:19.308853    4712 out.go:291] Setting OutFile to fd 968 ...
	I0501 02:47:19.308853    4712 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:47:19.308853    4712 out.go:304] Setting ErrFile to fd 940...
	I0501 02:47:19.308853    4712 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:47:19.335053    4712 out.go:298] Setting JSON to false
	I0501 02:47:19.338050    4712 start.go:129] hostinfo: {"hostname":"minikube6","uptime":104693,"bootTime":1714426945,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0501 02:47:19.338050    4712 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0501 02:47:19.343676    4712 out.go:177] * [ha-136200] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0501 02:47:19.347056    4712 notify.go:220] Checking for updates...
	I0501 02:47:19.349570    4712 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0501 02:47:19.352627    4712 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0501 02:47:19.356010    4712 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0501 02:47:19.359527    4712 out.go:177]   - MINIKUBE_LOCATION=18779
	I0501 02:47:19.364982    4712 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0501 02:47:19.368342    4712 driver.go:392] Setting default libvirt URI to qemu:///system
	I0501 02:47:24.771909    4712 out.go:177] * Using the hyperv driver based on user configuration
	I0501 02:47:24.777503    4712 start.go:297] selected driver: hyperv
	I0501 02:47:24.777503    4712 start.go:901] validating driver "hyperv" against <nil>
	I0501 02:47:24.777503    4712 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0501 02:47:24.830749    4712 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0501 02:47:24.832155    4712 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 02:47:24.832679    4712 cni.go:84] Creating CNI manager for ""
	I0501 02:47:24.832679    4712 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0501 02:47:24.832679    4712 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0501 02:47:24.832944    4712 start.go:340] cluster config:
	{Name:ha-136200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:47:24.832944    4712 iso.go:125] acquiring lock: {Name:mkc5178610d1c169635b8b232f2713c359020679 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 02:47:24.837439    4712 out.go:177] * Starting "ha-136200" primary control-plane node in "ha-136200" cluster
	I0501 02:47:24.839631    4712 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0501 02:47:24.839631    4712 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0501 02:47:24.839631    4712 cache.go:56] Caching tarball of preloaded images
	I0501 02:47:24.840411    4712 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0501 02:47:24.840411    4712 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0501 02:47:24.841147    4712 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json ...
	I0501 02:47:24.841147    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json: {Name:mk622c10e63d8ff69d285ce96c3e57bfbed6a54d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:47:24.842583    4712 start.go:360] acquireMachinesLock for ha-136200: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 02:47:24.842583    4712 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-136200"
	I0501 02:47:24.843334    4712 start.go:93] Provisioning new machine with config: &{Name:ha-136200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0501 02:47:24.843334    4712 start.go:125] createHost starting for "" (driver="hyperv")
	I0501 02:47:24.845982    4712 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0501 02:47:24.845982    4712 start.go:159] libmachine.API.Create for "ha-136200" (driver="hyperv")
	I0501 02:47:24.845982    4712 client.go:168] LocalClient.Create starting
	I0501 02:47:24.847217    4712 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0501 02:47:24.847395    4712 main.go:141] libmachine: Decoding PEM data...
	I0501 02:47:24.847395    4712 main.go:141] libmachine: Parsing certificate...
	I0501 02:47:24.847705    4712 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0501 02:47:24.848012    4712 main.go:141] libmachine: Decoding PEM data...
	I0501 02:47:24.848048    4712 main.go:141] libmachine: Parsing certificate...
	I0501 02:47:24.848190    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0501 02:47:27.058462    4712 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0501 02:47:27.058678    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:27.058786    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0501 02:47:28.892262    4712 main.go:141] libmachine: [stdout =====>] : False
	
	I0501 02:47:28.892262    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:28.892262    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0501 02:47:30.440921    4712 main.go:141] libmachine: [stdout =====>] : True
	
	I0501 02:47:30.440921    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:30.441172    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0501 02:47:34.074968    4712 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0501 02:47:34.075096    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:34.077782    4712 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso...
	I0501 02:47:34.612276    4712 main.go:141] libmachine: Creating SSH key...
	I0501 02:47:34.775454    4712 main.go:141] libmachine: Creating VM...
	I0501 02:47:34.775454    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0501 02:47:37.663991    4712 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0501 02:47:37.664390    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:37.664515    4712 main.go:141] libmachine: Using switch "Default Switch"
	I0501 02:47:37.664599    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0501 02:47:39.498071    4712 main.go:141] libmachine: [stdout =====>] : True
	
	I0501 02:47:39.498234    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:39.498234    4712 main.go:141] libmachine: Creating VHD
	I0501 02:47:39.498234    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\fixed.vhd' -SizeBytes 10MB -Fixed
	I0501 02:47:43.230384    4712 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 2B9E163F-083E-4714-9C44-9A52BE438E53
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0501 02:47:43.231369    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:43.231468    4712 main.go:141] libmachine: Writing magic tar header
	I0501 02:47:43.231550    4712 main.go:141] libmachine: Writing SSH key tar header
	I0501 02:47:43.241482    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\disk.vhd' -VHDType Dynamic -DeleteSource
	I0501 02:47:46.427724    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:47:46.427724    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:46.427724    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\disk.vhd' -SizeBytes 20000MB
	I0501 02:47:48.971690    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:47:48.971690    4712 main.go:141] libmachine: [stderr =====>] : 
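
The three VHD steps above follow a fixed pattern: create a tiny fixed-size VHD, seed it (the "magic tar header" lines show the SSH key being packed into the raw image), convert it to a dynamic VHD, then grow it to the requested 20000MB. A sketch of that sequence under stated assumptions (paths and the helper name are illustrative; the seeding step is elided):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"path/filepath"
    )

    // prepareDisk runs the same three Hyper-V cmdlets that appear in the log.
    // Between New-VHD and Convert-VHD the real driver writes a tar header
    // carrying the SSH key into the raw image; that step is omitted here.
    func prepareDisk(machineDir string, sizeMB int) error {
    	fixed := filepath.Join(machineDir, "fixed.vhd")
    	disk := filepath.Join(machineDir, "disk.vhd")
    	steps := []string{
    		fmt.Sprintf("Hyper-V\\New-VHD -Path '%s' -SizeBytes 10MB -Fixed", fixed),
    		fmt.Sprintf("Hyper-V\\Convert-VHD -Path '%s' -DestinationPath '%s' -VHDType Dynamic -DeleteSource", fixed, disk),
    		fmt.Sprintf("Hyper-V\\Resize-VHD -Path '%s' -SizeBytes %dMB", disk, sizeMB),
    	}
    	for _, ps := range steps {
    		if out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", ps).CombinedOutput(); err != nil {
    			return fmt.Errorf("%s: %v: %s", ps, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	if err := prepareDisk(`C:\tmp\demo-vm`, 20000); err != nil {
    		panic(err)
    	}
    }
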
	I0501 02:47:48.971981    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-136200 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0501 02:47:52.766292    4712 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-136200 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0501 02:47:52.766504    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:52.766592    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-136200 -DynamicMemoryEnabled $false
	I0501 02:47:54.972628    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:47:54.972799    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:54.972799    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-136200 -Count 2
	I0501 02:47:57.167635    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:47:57.168510    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:57.168510    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-136200 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\boot2docker.iso'
	I0501 02:47:59.728585    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:47:59.729288    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:59.729288    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-136200 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\disk.vhd'
	I0501 02:48:02.387014    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:48:02.387925    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:02.387925    4712 main.go:141] libmachine: Starting VM...
	I0501 02:48:02.387925    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-136200
	I0501 02:48:05.442902    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:48:05.442902    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:05.442902    4712 main.go:141] libmachine: Waiting for host to start...
	I0501 02:48:05.442902    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:07.690543    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:07.691267    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:07.691267    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:10.234874    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:48:10.234874    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:11.244005    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:13.447426    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:13.447426    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:13.447532    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:16.003794    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:48:16.003794    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:17.014251    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:19.230596    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:19.230596    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:19.231015    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:21.786798    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:48:21.786798    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:22.791035    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:24.970362    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:24.970583    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:24.970826    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:27.538082    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:48:27.539108    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:28.540044    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:30.691694    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:30.691694    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:30.692065    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:33.315166    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:48:33.315166    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:33.315400    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:35.453800    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:35.453800    4712 main.go:141] libmachine: [stderr =====>] : 
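
The "Waiting for host to start..." block above is a simple poll: query the VM state, query the first adapter's first IP address, sleep about a second, and repeat until an address appears (five rounds here before 172.28.217.218 shows up). A self-contained sketch of that loop; the ps helper and the timeout value are assumptions:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // ps runs a PowerShell one-liner and returns its trimmed stdout.
    func ps(cmd string) (string, error) {
    	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", cmd).Output()
    	return strings.TrimSpace(string(out)), err
    }

    // waitForIP polls until the running VM reports an address or the timeout hits.
    func waitForIP(vm string, timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		state, err := ps(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vm))
    		if err != nil {
    			return "", err
    		}
    		if state == "Running" {
    			ip, _ := ps(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
    			if ip != "" {
    				return ip, nil
    			}
    		}
    		time.Sleep(time.Second) // the log shows roughly 1s between attempts
    	}
    	return "", fmt.Errorf("timed out waiting for %s to get an address", vm)
    }

    func main() {
    	ip, err := waitForIP("ha-136200", 3*time.Minute)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("VM address:", ip)
    }
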
	I0501 02:48:35.454723    4712 machine.go:94] provisionDockerMachine start ...
	I0501 02:48:35.454940    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:37.590850    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:37.591294    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:37.591378    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:40.152942    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:48:40.153017    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:40.158939    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:48:40.170076    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.217.218 22 <nil> <nil>}
	I0501 02:48:40.170076    4712 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 02:48:40.311850    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0501 02:48:40.311938    4712 buildroot.go:166] provisioning hostname "ha-136200"
	I0501 02:48:40.312011    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:42.387259    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:42.387259    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:42.388241    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:44.941487    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:48:44.942306    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:44.948681    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:48:44.949642    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.217.218 22 <nil> <nil>}
	I0501 02:48:44.949718    4712 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-136200 && echo "ha-136200" | sudo tee /etc/hostname
	I0501 02:48:45.123416    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-136200
	
	I0501 02:48:45.123490    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:47.247911    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:47.247911    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:47.248892    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:49.912733    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:48:49.912733    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:49.920164    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:48:49.920164    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.217.218 22 <nil> <nil>}
	I0501 02:48:49.920749    4712 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-136200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-136200/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-136200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 02:48:50.089597    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: 
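
Each "Using SSH client type: native" / "About to run SSH command" pair above is one round trip with the same connection details: the VM's address on port 22, user docker, and the generated id_rsa key. A minimal sketch of such a native runner using golang.org/x/crypto/ssh (key path and host are taken from the log; this is not minikube's actual implementation):

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	keyBytes, err := os.ReadFile(`C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa`)
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(keyBytes)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
    	}
    	client, err := ssh.Dial("tcp", "172.28.217.218:22", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()

    	session, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer session.Close()

    	out, err := session.Output("hostname") // same first command as in the log
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("hostname: %s", out)
    }
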
	I0501 02:48:50.089597    4712 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0501 02:48:50.089597    4712 buildroot.go:174] setting up certificates
	I0501 02:48:50.090153    4712 provision.go:84] configureAuth start
	I0501 02:48:50.090240    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:52.251893    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:52.251893    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:52.251893    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:54.810990    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:48:54.810990    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:54.811881    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:56.907196    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:56.907196    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:56.907196    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:59.487351    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:48:59.487402    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:59.487402    4712 provision.go:143] copyHostCerts
	I0501 02:48:59.487402    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0501 02:48:59.487402    4712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0501 02:48:59.487402    4712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0501 02:48:59.488365    4712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0501 02:48:59.489448    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0501 02:48:59.489632    4712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0501 02:48:59.489632    4712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0501 02:48:59.489632    4712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0501 02:48:59.490981    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0501 02:48:59.491187    4712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0501 02:48:59.491187    4712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0501 02:48:59.491187    4712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0501 02:48:59.492726    4712 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-136200 san=[127.0.0.1 172.28.217.218 ha-136200 localhost minikube]
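
The server certificate generated above carries exactly the SANs listed in that line: 127.0.0.1, 172.28.217.218, ha-136200, localhost, and minikube. A compressed sketch of issuing such a cert with Go's crypto/x509; it self-signs for brevity, whereas the real step signs with the ca.pem/ca-key.pem pair named in the log:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-136200"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the profile
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// The SAN set from the log line above:
    		DNSNames:    []string{"ha-136200", "localhost", "minikube"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.28.217.218")},
    	}
    	// Self-signed (template doubles as parent); the real flow passes the CA here.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
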
	I0501 02:48:59.577887    4712 provision.go:177] copyRemoteCerts
	I0501 02:48:59.596375    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 02:48:59.597286    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:01.699383    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:01.699383    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:01.699540    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:04.258891    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:04.258891    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:04.259427    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:49:04.371852    4712 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7744315s)
	I0501 02:49:04.371852    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0501 02:49:04.371852    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 02:49:04.422302    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0501 02:49:04.422602    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0501 02:49:04.478176    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0501 02:49:04.478176    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0501 02:49:04.532091    4712 provision.go:87] duration metric: took 14.4416362s to configureAuth
	I0501 02:49:04.532091    4712 buildroot.go:189] setting minikube options for container-runtime
	I0501 02:49:04.532690    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:49:04.532690    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:06.623956    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:06.623956    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:06.624197    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:09.238280    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:09.238979    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:09.245381    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:49:09.246060    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.217.218 22 <nil> <nil>}
	I0501 02:49:09.246060    4712 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0501 02:49:09.397759    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0501 02:49:09.397835    4712 buildroot.go:70] root file system type: tmpfs
	I0501 02:49:09.398290    4712 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0501 02:49:09.398464    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:11.514026    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:11.514026    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:11.514582    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:14.050483    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:14.050483    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:14.057033    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:49:14.057033    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.217.218 22 <nil> <nil>}
	I0501 02:49:14.057589    4712 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0501 02:49:14.242724    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0501 02:49:14.242724    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:16.392645    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:16.392645    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:16.392749    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:18.993701    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:18.994302    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:19.000048    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:49:19.000537    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.217.218 22 <nil> <nil>}
	I0501 02:49:19.000616    4712 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0501 02:49:21.256124    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0501 02:49:21.256675    4712 machine.go:97] duration metric: took 45.8016127s to provisionDockerMachine
	I0501 02:49:21.256675    4712 client.go:171] duration metric: took 1m56.4098314s to LocalClient.Create
	I0501 02:49:21.256737    4712 start.go:167] duration metric: took 1m56.4098939s to libmachine.API.Create "ha-136200"
	I0501 02:49:21.256791    4712 start.go:293] postStartSetup for "ha-136200" (driver="hyperv")
	I0501 02:49:21.256828    4712 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 02:49:21.271031    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 02:49:21.271031    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:23.374454    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:23.374634    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:23.374716    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:25.918831    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:25.918831    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:25.919441    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:49:26.030251    4712 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.759185s)
	I0501 02:49:26.044496    4712 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 02:49:26.053026    4712 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 02:49:26.053160    4712 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0501 02:49:26.053600    4712 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0501 02:49:26.054397    4712 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> 142882.pem in /etc/ssl/certs
	I0501 02:49:26.054397    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /etc/ssl/certs/142882.pem
	I0501 02:49:26.070942    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 02:49:26.091568    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /etc/ssl/certs/142882.pem (1708 bytes)
	I0501 02:49:26.143252    4712 start.go:296] duration metric: took 4.8863885s for postStartSetup
	I0501 02:49:26.147080    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:28.257985    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:28.257985    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:28.257985    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:30.792456    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:30.792456    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:30.792456    4712 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json ...
	I0501 02:49:30.796310    4712 start.go:128] duration metric: took 2m5.952044s to createHost
	I0501 02:49:30.796483    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:32.879711    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:32.879711    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:32.880619    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:35.462032    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:35.462032    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:35.468747    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:49:35.469470    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.217.218 22 <nil> <nil>}
	I0501 02:49:35.469470    4712 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0501 02:49:35.611947    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714531775.614259884
	
	I0501 02:49:35.611947    4712 fix.go:216] guest clock: 1714531775.614259884
	I0501 02:49:35.611947    4712 fix.go:229] Guest: 2024-05-01 02:49:35.614259884 +0000 UTC Remote: 2024-05-01 02:49:30.7963907 +0000 UTC m=+131.677772001 (delta=4.817869184s)
	I0501 02:49:35.611947    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:37.726021    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:37.726021    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:37.726021    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:40.253738    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:40.254896    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:40.261655    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:49:40.262498    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.217.218 22 <nil> <nil>}
	I0501 02:49:40.262498    4712 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714531775
	I0501 02:49:40.415406    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed May  1 02:49:35 UTC 2024
	
	I0501 02:49:40.415406    4712 fix.go:236] clock set: Wed May  1 02:49:35 UTC 2024
	 (err=<nil>)
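
The fix.go lines above implement a small clock-skew repair: read the guest clock with date +%s.%N, compare it against the host, and, when the drift is large enough (4.8s here), force the guest to the host's time with sudo date -s. A sketch of that check; the runSSH helper, its stub, and the threshold are assumptions, not minikube's actual values:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // syncGuestClock reads the guest clock over SSH and resets it when the
    // drift from the host exceeds a threshold.
    func syncGuestClock(runSSH func(cmd string) (string, error)) error {
    	out, err := runSSH("date +%s.%N")
    	if err != nil {
    		return err
    	}
    	guestSecs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
    	if err != nil {
    		return fmt.Errorf("parsing guest clock %q: %w", out, err)
    	}
    	guest := time.Unix(0, int64(guestSecs*float64(time.Second)))
    	delta := time.Since(guest)
    	if delta < 0 {
    		delta = -delta
    	}
    	if delta > 2*time.Second { // assumed threshold; the log fixes a 4.8s delta
    		_, err = runSSH(fmt.Sprintf("sudo date -s @%d", time.Now().Unix()))
    	}
    	return err
    }

    func main() {
    	// Stub runner for demonstration: pretends the guest is 5s ahead.
    	stub := func(cmd string) (string, error) {
    		if strings.HasPrefix(cmd, "date +") {
    			return fmt.Sprintf("%.9f", float64(time.Now().Add(5*time.Second).UnixNano())/1e9), nil
    		}
    		fmt.Println("would run on guest:", cmd)
    		return "", nil
    	}
    	if err := syncGuestClock(stub); err != nil {
    		panic(err)
    	}
    }
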
	I0501 02:49:40.415406    4712 start.go:83] releasing machines lock for "ha-136200", held for 2m15.5712031s
	I0501 02:49:40.416105    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:42.459145    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:42.459226    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:42.459226    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:45.033478    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:45.034063    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:45.038366    4712 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 02:49:45.038515    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:45.050350    4712 ssh_runner.go:195] Run: cat /version.json
	I0501 02:49:45.050350    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:47.229701    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:47.229701    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:47.230427    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:47.254252    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:47.254469    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:47.254558    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:49.922691    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:49.922867    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:49.923261    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:49:49.950446    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:49.950446    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:49.951021    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:49:50.022867    4712 ssh_runner.go:235] Completed: cat /version.json: (4.9724804s)
	I0501 02:49:50.037446    4712 ssh_runner.go:195] Run: systemctl --version
	I0501 02:49:50.123463    4712 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0850592s)
	I0501 02:49:50.137756    4712 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 02:49:50.147834    4712 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 02:49:50.164262    4712 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 02:49:50.197825    4712 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0501 02:49:50.197877    4712 start.go:494] detecting cgroup driver to use...
	I0501 02:49:50.197877    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 02:49:50.246918    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0501 02:49:50.281929    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0501 02:49:50.303725    4712 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0501 02:49:50.317480    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0501 02:49:50.354607    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 02:49:50.392927    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0501 02:49:50.426684    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 02:49:50.464924    4712 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 02:49:50.501540    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0501 02:49:50.541276    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0501 02:49:50.576278    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0501 02:49:50.614209    4712 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 02:49:50.653144    4712 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 02:49:50.688395    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:49:50.921067    4712 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0501 02:49:50.960389    4712 start.go:494] detecting cgroup driver to use...
	I0501 02:49:50.974435    4712 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0501 02:49:51.020319    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 02:49:51.063731    4712 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 02:49:51.113242    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 02:49:51.154151    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 02:49:51.196182    4712 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0501 02:49:51.267621    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 02:49:51.297018    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 02:49:51.359019    4712 ssh_runner.go:195] Run: which cri-dockerd
	I0501 02:49:51.382845    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0501 02:49:51.408532    4712 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0501 02:49:51.459482    4712 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0501 02:49:51.703156    4712 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0501 02:49:51.928842    4712 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0501 02:49:51.928842    4712 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
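
The 130-byte /etc/docker/daemon.json pushed above is not echoed in the log; a plausible way to build a file that pins Docker to the cgroupfs driver might look like this (the exact fields minikube writes are an assumption here, not taken from the log):

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    func main() {
    	// Assumed minimal daemon.json selecting the cgroupfs cgroup driver,
    	// matching the "configuring docker to use cgroupfs" line above.
    	cfg := map[string]any{
    		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
    	}
    	data, err := json.MarshalIndent(cfg, "", "  ")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(string(data)) // this blob is then scp'd to /etc/docker/daemon.json
    }
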
	I0501 02:49:51.985157    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:49:52.205484    4712 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0501 02:49:54.768628    4712 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5631253s)
	I0501 02:49:54.782717    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0501 02:49:54.821909    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0501 02:49:54.861989    4712 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0501 02:49:55.097455    4712 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0501 02:49:55.325878    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:49:55.547674    4712 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0501 02:49:55.604800    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0501 02:49:55.648909    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:49:55.873886    4712 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0501 02:49:55.987252    4712 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0501 02:49:56.000254    4712 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0501 02:49:56.009412    4712 start.go:562] Will wait 60s for crictl version
	I0501 02:49:56.021925    4712 ssh_runner.go:195] Run: which crictl
	I0501 02:49:56.041055    4712 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 02:49:56.111426    4712 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0501 02:49:56.124879    4712 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0501 02:49:56.172644    4712 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0501 02:49:56.210144    4712 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0501 02:49:56.210144    4712 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0501 02:49:56.214663    4712 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0501 02:49:56.214663    4712 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0501 02:49:56.214663    4712 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0501 02:49:56.214663    4712 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:88:d7:f1 Flags:up|broadcast|multicast|running}
	I0501 02:49:56.218539    4712 ip.go:210] interface addr: fe80::916c:67e8:6e10:a19b/64
	I0501 02:49:56.218539    4712 ip.go:210] interface addr: 172.28.208.1/20
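
The ip.go lines above scan the host's network adapters for the one backing the Default Switch, so the guest can later be told where host.minikube.internal lives (172.28.208.1 here). The equivalent scan with Go's standard net package; the prefix string is taken from the log:

    package main

    import (
    	"fmt"
    	"net"
    	"strings"
    )

    func main() {
    	ifaces, err := net.Interfaces()
    	if err != nil {
    		panic(err)
    	}
    	for _, ifc := range ifaces {
    		// Same prefix match the log performs against each interface name.
    		if !strings.HasPrefix(ifc.Name, "vEthernet (Default Switch)") {
    			continue
    		}
    		addrs, _ := ifc.Addrs()
    		for _, a := range addrs {
    			if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() != nil {
    				fmt.Println("host-side address:", ipnet.IP) // 172.28.208.1 in the log
    			}
    		}
    	}
    }
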
	I0501 02:49:56.231590    4712 ssh_runner.go:195] Run: grep 172.28.208.1	host.minikube.internal$ /etc/hosts
	I0501 02:49:56.237056    4712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 02:49:56.273064    4712 kubeadm.go:877] updating cluster {Name:ha-136200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP:172.28.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.217.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0501 02:49:56.273064    4712 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0501 02:49:56.283976    4712 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0501 02:49:56.305563    4712 docker.go:685] Got preloaded images: 
	I0501 02:49:56.305585    4712 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0501 02:49:56.319781    4712 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0501 02:49:56.352980    4712 ssh_runner.go:195] Run: which lz4
	I0501 02:49:56.361434    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0501 02:49:56.376111    4712 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0501 02:49:56.383203    4712 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0501 02:49:56.383203    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0501 02:49:58.545920    4712 docker.go:649] duration metric: took 2.1838816s to copy over tarball
	I0501 02:49:58.559153    4712 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0501 02:50:07.024882    4712 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.4656661s)
	I0501 02:50:07.024882    4712 ssh_runner.go:146] rm: /preloaded.tar.lz4
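
The preload steps above are a check-copy-extract-clean sequence: stat the tarball (exit status 1 means it is missing), scp the ~360 MB archive from the host cache, untar it with lz4 into /var, then delete it. Sketched below with assumed helpers standing in for minikube's ssh_runner plumbing:

    package main

    import "fmt"

    // ensurePreload sketches the sequence from the log. runSSH and copyToGuest
    // are assumed stand-ins, not real minikube APIs.
    func ensurePreload(runSSH func(cmd string) (string, error), copyToGuest func(local, remote string) error) error {
    	// stat exits non-zero when the tarball is absent (status 1 in the log).
    	if _, err := runSSH(`stat -c "%s %y" /preloaded.tar.lz4`); err != nil {
    		local := `C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4`
    		if err := copyToGuest(local, "/preloaded.tar.lz4"); err != nil {
    			return err
    		}
    	}
    	// Unpack docker's preloaded image store into /var, then clean up.
    	if _, err := runSSH("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4"); err != nil {
    		return err
    	}
    	_, err := runSSH("rm /preloaded.tar.lz4")
    	return err
    }

    func main() {
    	// Stubs wired in just to show the call shape.
    	runSSH := func(cmd string) (string, error) { fmt.Println("guest:", cmd); return "", nil }
    	copyToGuest := func(local, remote string) error { fmt.Println("scp", local, "->", remote); return nil }
    	if err := ensurePreload(runSSH, copyToGuest); err != nil {
    		panic(err)
    	}
    }
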
	I0501 02:50:07.091273    4712 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0501 02:50:07.117701    4712 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0501 02:50:07.169927    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:50:07.413870    4712 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0501 02:50:10.777827    4712 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.363932s)
	I0501 02:50:10.787955    4712 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0501 02:50:10.813130    4712 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0501 02:50:10.813237    4712 cache_images.go:84] Images are preloaded, skipping loading
	I0501 02:50:10.813237    4712 kubeadm.go:928] updating node { 172.28.217.218 8443 v1.30.0 docker true true} ...
	I0501 02:50:10.813471    4712 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-136200 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.217.218
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP:172.28.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 02:50:10.824528    4712 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0501 02:50:10.865306    4712 cni.go:84] Creating CNI manager for ""
	I0501 02:50:10.865306    4712 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0501 02:50:10.865306    4712 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0501 02:50:10.865306    4712 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.28.217.218 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-136200 NodeName:ha-136200 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.28.217.218"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.28.217.218 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0501 02:50:10.866013    4712 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.28.217.218
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-136200"
	  kubeletExtraArgs:
	    node-ip: 172.28.217.218
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.28.217.218"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
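
A note on the evictionHard values above: in the raw log they came out as 0%!"(MISSING) because the already-rendered YAML was passed back through a printf-style formatter, which treats the literal % as a formatting verb; they are restored to "0%" here. A minimal Go sketch reproducing the artifact (the YAML line is just an example string):

    package main

    import "fmt"

    func main() {
        yaml := `nodefs.available: "0%"`
        // Printf re-parses the literal '%' as a formatting verb and emits
        // the %!"(MISSING) noise seen in the raw log.
        fmt.Printf(yaml + "\n") // nodefs.available: "0%!"(MISSING)
        // Passing the text as an argument keeps it intact.
        fmt.Printf("%s\n", yaml) // nodefs.available: "0%"
    }
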
	I0501 02:50:10.866164    4712 kube-vip.go:111] generating kube-vip config ...
	I0501 02:50:10.879856    4712 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0501 02:50:10.916330    4712 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0501 02:50:10.916590    4712 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.28.223.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
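
The manifest above runs kube-vip as a static Pod on the control plane: it announces the virtual IP 172.28.223.254 via ARP on eth0, load-balances port 8443 across control-plane nodes (lb_enable/lb_port), and uses a Kubernetes Lease (plndr-cp-lock; 5s duration, 3s renew deadline, 1s retry) so only one instance holds the VIP at a time. A minimal sketch, not minikube's actual generator, of rendering the manifest's variable parts from a template (parameter names are assumptions):

    package main

    import (
        "os"
        "text/template"
    )

    // Only the fields that differ per cluster are templated; everything
    // else in the static-pod manifest stays literal.
    var env = template.Must(template.New("kubevip").Parse(`- name: address
      value: {{.VIP}}
    - name: port
      value: "{{.Port}}"
    - name: vip_leaseduration
      value: "{{.LeaseSec}}"
    `))

    func main() {
        env.Execute(os.Stdout, struct {
            VIP      string
            Port     int
            LeaseSec int
        }{"172.28.223.254", 8443, 5})
    }
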
	I0501 02:50:10.930144    4712 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 02:50:10.946847    4712 binaries.go:44] Found k8s binaries, skipping transfer
	I0501 02:50:10.960617    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0501 02:50:10.980126    4712 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0501 02:50:11.015010    4712 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 02:50:11.046356    4712 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0501 02:50:11.090122    4712 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0501 02:50:11.151082    4712 ssh_runner.go:195] Run: grep 172.28.223.254	control-plane.minikube.internal$ /etc/hosts
	I0501 02:50:11.158193    4712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.223.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
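
The bash one-liner above is the usual safe-edit idiom for /etc/hosts: grep -v drops any stale control-plane.minikube.internal line, the fresh VIP mapping is appended, and the result lands in a temp file that is then copied over the original, so no reader ever sees a half-edited hosts file. The same logic in Go, as a sketch (the VIP and hostname mirror the log; the temp path is an assumption):

    package main

    import (
        "os"
        "strings"
    )

    func main() {
        const entry = "172.28.223.254\tcontrol-plane.minikube.internal"
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // Drop any stale mapping for the control-plane name.
            if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
                kept = append(kept, line)
            }
        }
        kept = append(kept, entry)
        // Write a temp file first, then swap it in, so the edit is never
        // observed half-done (the log does the same via cp and /tmp/h.$$).
        tmp := "/etc/hosts.tmp"
        if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            panic(err)
        }
        if err := os.Rename(tmp, "/etc/hosts"); err != nil {
            panic(err)
        }
    }
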
	I0501 02:50:11.198290    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:50:11.421704    4712 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 02:50:11.457294    4712 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200 for IP: 172.28.217.218
	I0501 02:50:11.457383    4712 certs.go:194] generating shared ca certs ...
	I0501 02:50:11.457383    4712 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:50:11.458373    4712 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0501 02:50:11.458865    4712 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0501 02:50:11.459136    4712 certs.go:256] generating profile certs ...
	I0501 02:50:11.459821    4712 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\client.key
	I0501 02:50:11.459950    4712 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\client.crt with IP's: []
	I0501 02:50:11.600094    4712 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\client.crt ...
	I0501 02:50:11.600094    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\client.crt: {Name:mkd5e4d205a603f84158daca3df4537a47f4507f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:50:11.601407    4712 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\client.key ...
	I0501 02:50:11.601407    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\client.key: {Name:mk0f41aeab078751e43122e83e5a087fadc50acf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:50:11.602800    4712 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.b080b0c6
	I0501 02:50:11.602800    4712 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.b080b0c6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.217.218 172.28.223.254]
	I0501 02:50:11.735634    4712 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.b080b0c6 ...
	I0501 02:50:11.735634    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.b080b0c6: {Name:mk25daf93f931731761fc26133f1d14b1615ea6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:50:11.736662    4712 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.b080b0c6 ...
	I0501 02:50:11.736662    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.b080b0c6: {Name:mk2e8ec633a20ca6bf6f004cdd1aa2dc02923e7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:50:11.738036    4712 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.b080b0c6 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt
	I0501 02:50:11.750002    4712 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.b080b0c6 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key
	I0501 02:50:11.751999    4712 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key
	I0501 02:50:11.751999    4712 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.crt with IP's: []
	I0501 02:50:11.858892    4712 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.crt ...
	I0501 02:50:11.858892    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.crt: {Name:mk545c7bac57fe0475c68dabf35cf7726f7ba6e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:50:11.860058    4712 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key ...
	I0501 02:50:11.860058    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key: {Name:mk197c02f3ddea53477a395060c41fac8b486d54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
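
Note the SAN list on the apiserver certificate generated above: it covers the kubernetes Service ClusterIP (10.96.0.1), loopback, 10.0.0.1, the node IP (172.28.217.218) and, crucially for HA, the kube-vip VIP (172.28.223.254), so a client reaching the API server through any of those addresses can validate TLS. A self-signed sketch of issuing a certificate with IP SANs (not minikube's code, which signs with the shared cluster CA):

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // IP SANs mirrored from the log line above.
        var ips []net.IP
        for _, s := range []string{"10.96.0.1", "127.0.0.1", "10.0.0.1",
            "172.28.217.218", "172.28.223.254"} {
            ips = append(ips, net.ParseIP(s))
        }
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            IPAddresses:  ips, // every address clients may dial
        }
        // Self-signed for brevity.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        fmt.Printf("issued %d-byte certificate\n", len(der))
    }
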
	I0501 02:50:11.861502    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0501 02:50:11.862042    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0501 02:50:11.862321    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0501 02:50:11.862467    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0501 02:50:11.862467    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0501 02:50:11.862467    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0501 02:50:11.862467    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0501 02:50:11.872340    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0501 02:50:11.872340    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem (1338 bytes)
	W0501 02:50:11.873220    4712 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288_empty.pem, impossibly tiny 0 bytes
	I0501 02:50:11.873220    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0501 02:50:11.873220    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0501 02:50:11.873220    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0501 02:50:11.873220    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0501 02:50:11.874220    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem (1708 bytes)
	I0501 02:50:11.874220    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem -> /usr/share/ca-certificates/14288.pem
	I0501 02:50:11.874220    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /usr/share/ca-certificates/142882.pem
	I0501 02:50:11.875212    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:50:11.877013    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 02:50:11.928037    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 02:50:11.975033    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 02:50:12.024768    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0501 02:50:12.069813    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0501 02:50:12.117563    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0501 02:50:12.166940    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 02:50:12.214744    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 02:50:12.264780    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem --> /usr/share/ca-certificates/14288.pem (1338 bytes)
	I0501 02:50:12.314494    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /usr/share/ca-certificates/142882.pem (1708 bytes)
	I0501 02:50:12.357210    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 02:50:12.407402    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0501 02:50:12.460345    4712 ssh_runner.go:195] Run: openssl version
	I0501 02:50:12.486641    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14288.pem && ln -fs /usr/share/ca-certificates/14288.pem /etc/ssl/certs/14288.pem"
	I0501 02:50:12.524534    4712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14288.pem
	I0501 02:50:12.531940    4712 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:27 /usr/share/ca-certificates/14288.pem
	I0501 02:50:12.545887    4712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14288.pem
	I0501 02:50:12.569538    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14288.pem /etc/ssl/certs/51391683.0"
	I0501 02:50:12.603111    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142882.pem && ln -fs /usr/share/ca-certificates/142882.pem /etc/ssl/certs/142882.pem"
	I0501 02:50:12.640545    4712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142882.pem
	I0501 02:50:12.648489    4712 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:27 /usr/share/ca-certificates/142882.pem
	I0501 02:50:12.664745    4712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142882.pem
	I0501 02:50:12.689236    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142882.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 02:50:12.722220    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 02:50:12.763152    4712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:50:12.771274    4712 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:12 /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:50:12.785811    4712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:50:12.809601    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
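
The openssl x509 -hash calls above compute the OpenSSL subject-name hash for each CA (b5213941 for minikubeCA, for example), and the ln -fs commands create the <hash>.0 symlinks that OpenSSL-based clients expect under /etc/ssl/certs when building trust chains. A Go sketch of the same hash-and-link step (the PEM path comes from the log; error handling trimmed to essentials):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        pem := "/usr/share/ca-certificates/minikubeCA.pem"
        // Ask openssl for the certificate's subject-name hash.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            panic(err)
        }
        // Link /etc/ssl/certs/<hash>.0 to the cert so chain building finds it.
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        if err := os.Symlink(pem, link); err != nil && !os.IsExist(err) {
            panic(err)
        }
        fmt.Println("linked", link)
    }
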
	I0501 02:50:12.843815    4712 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 02:50:12.851182    4712 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0501 02:50:12.851596    4712 kubeadm.go:391] StartCluster: {Name:ha-136200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP:172.28.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.217.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:50:12.861439    4712 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0501 02:50:12.897822    4712 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0501 02:50:12.930863    4712 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 02:50:12.967142    4712 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 02:50:12.989079    4712 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 02:50:12.989174    4712 kubeadm.go:156] found existing configuration files:
	
	I0501 02:50:13.002144    4712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 02:50:13.022983    4712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 02:50:13.037263    4712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 02:50:13.070061    4712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 02:50:13.088170    4712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 02:50:13.104788    4712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 02:50:13.142331    4712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 02:50:13.161295    4712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 02:50:13.176372    4712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 02:50:13.217242    4712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 02:50:13.236623    4712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 02:50:13.250242    4712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0501 02:50:13.273719    4712 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0501 02:50:13.796086    4712 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0501 02:50:29.771938    4712 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0501 02:50:29.771938    4712 kubeadm.go:309] [preflight] Running pre-flight checks
	I0501 02:50:29.771938    4712 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0501 02:50:29.772562    4712 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0501 02:50:29.772731    4712 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0501 02:50:29.772731    4712 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0501 02:50:29.775841    4712 out.go:204]   - Generating certificates and keys ...
	I0501 02:50:29.775841    4712 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0501 02:50:29.776550    4712 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0501 02:50:29.776704    4712 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0501 02:50:29.776918    4712 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0501 02:50:29.777081    4712 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0501 02:50:29.777278    4712 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0501 02:50:29.777278    4712 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0501 02:50:29.777278    4712 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-136200 localhost] and IPs [172.28.217.218 127.0.0.1 ::1]
	I0501 02:50:29.777278    4712 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0501 02:50:29.777841    4712 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-136200 localhost] and IPs [172.28.217.218 127.0.0.1 ::1]
	I0501 02:50:29.778067    4712 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0501 02:50:29.778150    4712 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0501 02:50:29.778250    4712 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0501 02:50:29.778341    4712 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0501 02:50:29.778421    4712 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0501 02:50:29.778724    4712 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0501 02:50:29.778804    4712 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0501 02:50:29.778987    4712 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0501 02:50:29.779083    4712 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0501 02:50:29.779174    4712 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0501 02:50:29.779418    4712 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0501 02:50:29.781433    4712 out.go:204]   - Booting up control plane ...
	I0501 02:50:29.781433    4712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0501 02:50:29.781986    4712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0501 02:50:29.782154    4712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0501 02:50:29.782509    4712 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0501 02:50:29.782778    4712 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0501 02:50:29.782833    4712 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0501 02:50:29.783188    4712 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0501 02:50:29.783366    4712 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0501 02:50:29.783611    4712 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.012148578s
	I0501 02:50:29.783792    4712 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0501 02:50:29.783792    4712 kubeadm.go:309] [api-check] The API server is healthy after 9.161500426s
	I0501 02:50:29.783792    4712 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0501 02:50:29.784343    4712 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0501 02:50:29.784449    4712 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0501 02:50:29.784907    4712 kubeadm.go:309] [mark-control-plane] Marking the node ha-136200 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0501 02:50:29.785014    4712 kubeadm.go:309] [bootstrap-token] Using token: bebbcj.jj3pub0bsd9tcu95
	I0501 02:50:29.789897    4712 out.go:204]   - Configuring RBAC rules ...
	I0501 02:50:29.789897    4712 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0501 02:50:29.790579    4712 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0501 02:50:29.790579    4712 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0501 02:50:29.791324    4712 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0501 02:50:29.791589    4712 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0501 02:50:29.791711    4712 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0501 02:50:29.791958    4712 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0501 02:50:29.791958    4712 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0501 02:50:29.791958    4712 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0501 02:50:29.791958    4712 kubeadm.go:309] 
	I0501 02:50:29.791958    4712 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0501 02:50:29.791958    4712 kubeadm.go:309] 
	I0501 02:50:29.792580    4712 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0501 02:50:29.792580    4712 kubeadm.go:309] 
	I0501 02:50:29.792580    4712 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0501 02:50:29.792580    4712 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0501 02:50:29.792580    4712 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0501 02:50:29.792580    4712 kubeadm.go:309] 
	I0501 02:50:29.792580    4712 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0501 02:50:29.793244    4712 kubeadm.go:309] 
	I0501 02:50:29.793244    4712 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0501 02:50:29.793244    4712 kubeadm.go:309] 
	I0501 02:50:29.793244    4712 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0501 02:50:29.793244    4712 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0501 02:50:29.793244    4712 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0501 02:50:29.793868    4712 kubeadm.go:309] 
	I0501 02:50:29.794174    4712 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0501 02:50:29.794395    4712 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0501 02:50:29.794428    4712 kubeadm.go:309] 
	I0501 02:50:29.794531    4712 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token bebbcj.jj3pub0bsd9tcu95 \
	I0501 02:50:29.794720    4712 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:4798dcaffc6298c78af5ad06745de18c1231853c65c9db4cd09ab1be96f5e875 \
	I0501 02:50:29.794720    4712 kubeadm.go:309] 	--control-plane 
	I0501 02:50:29.794720    4712 kubeadm.go:309] 
	I0501 02:50:29.794720    4712 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0501 02:50:29.794720    4712 kubeadm.go:309] 
	I0501 02:50:29.794720    4712 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token bebbcj.jj3pub0bsd9tcu95 \
	I0501 02:50:29.795522    4712 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:4798dcaffc6298c78af5ad06745de18c1231853c65c9db4cd09ab1be96f5e875 
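
The --discovery-token-ca-cert-hash printed in the join commands is not a hash of the whole certificate: it is SHA-256 over the DER-encoded Subject Public Key Info of the cluster CA. A short Go sketch of deriving it (the CA path matches the certificateDir used above):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm hashes the CA's Subject Public Key Info, not the full cert.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }
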
	I0501 02:50:29.795582    4712 cni.go:84] Creating CNI manager for ""
	I0501 02:50:29.795642    4712 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0501 02:50:29.798321    4712 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0501 02:50:29.815739    4712 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0501 02:50:29.823882    4712 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0501 02:50:29.823882    4712 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0501 02:50:29.880076    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0501 02:50:30.703674    4712 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0501 02:50:30.720641    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-136200 minikube.k8s.io/updated_at=2024_05_01T02_50_30_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e minikube.k8s.io/name=ha-136200 minikube.k8s.io/primary=true
	I0501 02:50:30.720641    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:30.736553    4712 ops.go:34] apiserver oom_adj: -16
	I0501 02:50:30.914646    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:31.422356    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:31.924569    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:32.422489    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:32.916374    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:33.419951    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:33.922300    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:34.426730    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:34.915815    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:35.415601    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:35.917473    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:36.419572    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:36.923752    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:37.424859    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:37.926096    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:38.415957    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:38.915894    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:39.417286    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:39.917110    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:40.418538    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:40.919363    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:41.420336    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:41.914423    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:42.068730    4712 kubeadm.go:1107] duration metric: took 11.364941s to wait for elevateKubeSystemPrivileges
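
The run of near-identical "kubectl get sa default" commands above is a readiness poll: the default ServiceAccount only appears once the controller-manager's token controller is up, and minikube retries on a short interval (11.4s in total here) before applying the minikube-rbac cluster-admin binding. The pattern, sketched in Go (interval and deadline are assumptions, not minikube's exact values):

    package main

    import (
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            // Succeeds only once the default ServiceAccount exists.
            if exec.Command("kubectl", "get", "sa", "default").Run() == nil {
                return // ready: safe to create the RBAC binding now
            }
            time.Sleep(500 * time.Millisecond)
        }
        panic("timed out waiting for default service account")
    }
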
	W0501 02:50:42.068870    4712 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0501 02:50:42.068934    4712 kubeadm.go:393] duration metric: took 29.2171223s to StartCluster
	I0501 02:50:42.069035    4712 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:50:42.069065    4712 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0501 02:50:42.070934    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:50:42.072021    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0501 02:50:42.072021    4712 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.28.217.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0501 02:50:42.072021    4712 start.go:240] waiting for startup goroutines ...
	I0501 02:50:42.072021    4712 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0501 02:50:42.072021    4712 addons.go:69] Setting storage-provisioner=true in profile "ha-136200"
	I0501 02:50:42.072578    4712 addons.go:234] Setting addon storage-provisioner=true in "ha-136200"
	I0501 02:50:42.072715    4712 host.go:66] Checking if "ha-136200" exists ...
	I0501 02:50:42.072765    4712 addons.go:69] Setting default-storageclass=true in profile "ha-136200"
	I0501 02:50:42.072820    4712 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-136200"
	I0501 02:50:42.073003    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:50:42.073773    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:50:42.074332    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:50:42.237653    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.28.208.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0501 02:50:42.682536    4712 start.go:946] {"host.minikube.internal": 172.28.208.1} host record injected into CoreDNS's ConfigMap
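
The sed pipeline above patches CoreDNS's Corefile in place: it splices a hosts block in front of the forward plugin so host.minikube.internal resolves to the host gateway (172.28.208.1) from inside the cluster, with fallthrough handing every other name to the next plugin. A Go sketch of the same splice on a toy Corefile:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        corefile := `.:53 {
        errors
        forward . /etc/resolv.conf
    }`
        hosts := `    hosts {
           172.28.208.1 host.minikube.internal
           fallthrough
        }
    `
        // Insert the hosts block just before the forward plugin, once.
        patched := strings.Replace(corefile,
            "    forward . /etc/resolv.conf",
            hosts+"    forward . /etc/resolv.conf", 1)
        fmt.Println(patched)
    }
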
	I0501 02:50:44.322881    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:50:44.322881    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:44.325924    4712 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 02:50:44.323327    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:50:44.325924    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:44.328653    4712 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 02:50:44.328653    4712 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0501 02:50:44.328653    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:50:44.329300    4712 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0501 02:50:44.330211    4712 kapi.go:59] client config for ha-136200: &rest.Config{Host:"https://172.28.223.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-136200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-136200\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1b95ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0501 02:50:44.331266    4712 cert_rotation.go:137] Starting client certificate rotation controller
	I0501 02:50:44.331692    4712 addons.go:234] Setting addon default-storageclass=true in "ha-136200"
	I0501 02:50:44.331692    4712 host.go:66] Checking if "ha-136200" exists ...
	I0501 02:50:44.332839    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:50:46.572964    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:50:46.572964    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:46.573962    4712 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0501 02:50:46.573962    4712 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0501 02:50:46.573962    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:50:46.693061    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:50:46.693131    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:46.693256    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:50:48.834494    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:50:48.834494    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:48.834701    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:50:49.380882    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:50:49.380882    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:49.381777    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:50:49.540602    4712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 02:50:51.474264    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:50:51.474264    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:51.475208    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:50:51.629340    4712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0501 02:50:51.811276    4712 round_trippers.go:463] GET https://172.28.223.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0501 02:50:51.811902    4712 round_trippers.go:469] Request Headers:
	I0501 02:50:51.811902    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:50:51.811902    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:50:51.826458    4712 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0501 02:50:51.827458    4712 round_trippers.go:463] PUT https://172.28.223.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0501 02:50:51.827458    4712 round_trippers.go:469] Request Headers:
	I0501 02:50:51.827458    4712 round_trippers.go:473]     Content-Type: application/json
	I0501 02:50:51.827458    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:50:51.827458    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:50:51.831221    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:50:51.834740    4712 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0501 02:50:51.838052    4712 addons.go:505] duration metric: took 9.7659586s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0501 02:50:51.838052    4712 start.go:245] waiting for cluster config update ...
	I0501 02:50:51.838052    4712 start.go:254] writing updated cluster config ...
	I0501 02:50:51.841165    4712 out.go:177] 
	I0501 02:50:51.854479    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:50:51.854479    4712 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json ...
	I0501 02:50:51.861940    4712 out.go:177] * Starting "ha-136200-m02" control-plane node in "ha-136200" cluster
	I0501 02:50:51.865640    4712 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0501 02:50:51.865640    4712 cache.go:56] Caching tarball of preloaded images
	I0501 02:50:51.865640    4712 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0501 02:50:51.866174    4712 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0501 02:50:51.866462    4712 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json ...
	I0501 02:50:51.868358    4712 start.go:360] acquireMachinesLock for ha-136200-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 02:50:51.868358    4712 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-136200-m02"
	I0501 02:50:51.869005    4712 start.go:93] Provisioning new machine with config: &{Name:ha-136200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP:172.28.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.217.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0501 02:50:51.869070    4712 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0501 02:50:51.871919    4712 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0501 02:50:51.872184    4712 start.go:159] libmachine.API.Create for "ha-136200" (driver="hyperv")
	I0501 02:50:51.872184    4712 client.go:168] LocalClient.Create starting
	I0501 02:50:51.872730    4712 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0501 02:50:51.872991    4712 main.go:141] libmachine: Decoding PEM data...
	I0501 02:50:51.872991    4712 main.go:141] libmachine: Parsing certificate...
	I0501 02:50:51.872991    4712 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0501 02:50:51.872991    4712 main.go:141] libmachine: Decoding PEM data...
	I0501 02:50:51.872991    4712 main.go:141] libmachine: Parsing certificate...
	I0501 02:50:51.872991    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0501 02:50:53.846039    4712 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0501 02:50:53.846039    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:53.846893    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0501 02:50:55.665592    4712 main.go:141] libmachine: [stdout =====>] : False
	
	I0501 02:50:55.665592    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:55.665592    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0501 02:50:57.208535    4712 main.go:141] libmachine: [stdout =====>] : True
	
	I0501 02:50:57.208535    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:57.208630    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0501 02:51:00.945176    4712 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0501 02:51:00.945176    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:00.949038    4712 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso...
	I0501 02:51:01.496342    4712 main.go:141] libmachine: Creating SSH key...
	I0501 02:51:02.272582    4712 main.go:141] libmachine: Creating VM...
	I0501 02:51:02.272582    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0501 02:51:05.181880    4712 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0501 02:51:05.181880    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:05.182621    4712 main.go:141] libmachine: Using switch "Default Switch"
	I0501 02:51:05.182621    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0501 02:51:07.021151    4712 main.go:141] libmachine: [stdout =====>] : True
	
	I0501 02:51:07.022208    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:07.022208    4712 main.go:141] libmachine: Creating VHD
	I0501 02:51:07.022261    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0501 02:51:10.800515    4712 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : F5C7D5B1-6A19-4B92-8073-0E024A878A77
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0501 02:51:10.800843    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:10.800925    4712 main.go:141] libmachine: Writing magic tar header
	I0501 02:51:10.800925    4712 main.go:141] libmachine: Writing SSH key tar header
	I0501 02:51:10.813657    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0501 02:51:14.013099    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:14.013099    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:14.013713    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\disk.vhd' -SizeBytes 20000MB
	I0501 02:51:16.613734    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:16.613973    4712 main.go:141] libmachine: [stderr =====>] : 
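
The VHD dance above is the boot2docker provisioning trick: New-VHD creates a small fixed-format disk, the driver overwrites the start of its data area with a tar stream carrying the SSH key (the "Writing magic tar header" / "Writing SSH key tar header" lines), then Convert-VHD and Resize-VHD turn it into the real dynamic system disk; on first boot the guest detects the tar signature, formats the disk, and installs the key. A rough Go sketch of the tar-writing step, assuming fixed.vhd and id_rsa.pub in the working directory; the magic entry name follows the upstream boot2docker convention and is an assumption here:

package main

import (
	"archive/tar"
	"os"
)

func main() {
	f, err := os.OpenFile("fixed.vhd", os.O_WRONLY, 0)
	if err != nil {
		panic(err)
	}
	defer f.Close()

	key, err := os.ReadFile("id_rsa.pub")
	if err != nil {
		panic(err)
	}

	tw := tar.NewWriter(f) // a fixed VHD is raw data plus a 512-byte footer,
	defer tw.Close()       // so a tar stream at offset 0 survives conversion

	// "magic tar header": an entry whose name tells the guest to format the disk
	magic := "boot2docker, please format-me"
	if err := tw.WriteHeader(&tar.Header{Name: magic, Mode: 0644, Size: int64(len(magic))}); err != nil {
		panic(err)
	}
	tw.Write([]byte(magic))

	// "SSH key tar header": the public key the guest installs for user docker
	if err := tw.WriteHeader(&tar.Header{Name: ".ssh/authorized_keys", Mode: 0644, Size: int64(len(key))}); err != nil {
		panic(err)
	}
	tw.Write(key)
}
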
	I0501 02:51:16.614122    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-136200-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0501 02:51:20.349642    4712 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-136200-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0501 02:51:20.349642    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:20.349642    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-136200-m02 -DynamicMemoryEnabled $false
	I0501 02:51:22.595804    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:22.595804    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:22.596839    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-136200-m02 -Count 2
	I0501 02:51:24.783891    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:24.783891    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:24.783891    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-136200-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\boot2docker.iso'
	I0501 02:51:27.309419    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:27.309419    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:27.310044    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-136200-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\disk.vhd'
	I0501 02:51:29.998833    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:29.998833    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:29.998833    4712 main.go:141] libmachine: Starting VM...
	I0501 02:51:29.998833    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-136200-m02
	I0501 02:51:33.080959    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:33.080959    4712 main.go:141] libmachine: [stderr =====>] : 
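
For reference, the six PowerShell calls that create and start the node condense to the sequence below, with command bodies copied verbatim from the log; running them in a plain loop is a simplification of the driver's per-step stdout/stderr handling:

package main

import (
	"log"
	"os/exec"
)

func main() {
	base := `C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02`
	steps := []string{
		`Hyper-V\New-VM ha-136200-m02 -Path '` + base + `' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB`,
		`Hyper-V\Set-VMMemory -VMName ha-136200-m02 -DynamicMemoryEnabled $false`,
		`Hyper-V\Set-VMProcessor ha-136200-m02 -Count 2`,
		`Hyper-V\Set-VMDvdDrive -VMName ha-136200-m02 -Path '` + base + `\boot2docker.iso'`,
		`Hyper-V\Add-VMHardDiskDrive -VMName ha-136200-m02 -Path '` + base + `\disk.vhd'`,
		`Hyper-V\Start-VM ha-136200-m02`,
	}
	for _, s := range steps {
		out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", s).CombinedOutput()
		if err != nil {
			log.Fatalf("step %q failed: %v\n%s", s, err, out)
		}
	}
}
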
	I0501 02:51:33.080959    4712 main.go:141] libmachine: Waiting for host to start...
	I0501 02:51:33.080959    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:51:35.347158    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:51:35.348049    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:35.348049    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:51:37.880551    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:37.881580    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:38.889792    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:51:41.091102    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:51:41.091102    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:41.091533    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:51:43.621201    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:43.621813    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:44.622350    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:51:46.859140    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:51:46.859140    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:46.859140    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:51:49.413174    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:49.413174    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:50.423751    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:51:52.633336    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:51:52.633336    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:52.634051    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:51:55.225142    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:55.225142    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:56.229253    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:51:58.424704    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:51:58.424704    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:58.425395    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:01.088984    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:01.088984    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:01.089224    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:03.247035    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:03.247253    4712 main.go:141] libmachine: [stderr =====>] : 
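
"Waiting for host to start..." is a simple poll: query the VM state, then the first address of its first network adapter, until DHCP hands out an IP (172.28.213.142 here, after roughly 25 seconds). A minimal sketch of that loop; the one-second sleep and missing timeout are simplifications:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// ps evaluates one PowerShell expression, as each probe in the log above does.
func ps(expr string) string {
	out, _ := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", expr).Output()
	return strings.TrimSpace(string(out))
}

func main() {
	for {
		if ps(`( Hyper-V\Get-VM ha-136200-m02 ).state`) == "Running" {
			if ip := ps(`(( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]`); ip != "" {
				fmt.Println(ip) // 172.28.213.142 once DHCP completes, per the log
				return
			}
		}
		time.Sleep(time.Second) // the real loop also enforces an overall timeout
	}
}
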
	I0501 02:52:03.247291    4712 machine.go:94] provisionDockerMachine start ...
	I0501 02:52:03.247449    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:05.493082    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:05.493179    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:05.493179    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:08.078374    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:08.078374    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:08.085777    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:52:08.101463    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.142 22 <nil> <nil>}
	I0501 02:52:08.101463    4712 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 02:52:08.244557    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0501 02:52:08.244557    4712 buildroot.go:166] provisioning hostname "ha-136200-m02"
	I0501 02:52:08.244557    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:10.395193    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:10.395193    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:10.396068    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:12.968300    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:12.968300    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:12.975111    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:52:12.975111    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.142 22 <nil> <nil>}
	I0501 02:52:12.975111    4712 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-136200-m02 && echo "ha-136200-m02" | sudo tee /etc/hostname
	I0501 02:52:13.142328    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-136200-m02
	
	I0501 02:52:13.142479    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:15.318537    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:15.318537    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:15.318537    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:17.993085    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:17.993267    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:18.000242    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:52:18.000687    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.142 22 <nil> <nil>}
	I0501 02:52:18.000687    4712 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-136200-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-136200-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-136200-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 02:52:18.164116    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: 
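
The hostname block runs three commands over SSH: hostname itself, a tee into /etc/hostname, and the idempotent /etc/hosts patch shown above. A compact sketch of issuing the first of them with golang.org/x/crypto/ssh, using the IP, user, and key path from this run; the real runner adds retries and timeouts:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile(`C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\id_rsa`)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "172.28.213.142:22", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable on a throwaway test rig only
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	// the exact command logged above
	out, err := sess.CombinedOutput(`sudo hostname ha-136200-m02 && echo "ha-136200-m02" | sudo tee /etc/hostname`)
	fmt.Println(string(out), err)
}
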
	I0501 02:52:18.164116    4712 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0501 02:52:18.164235    4712 buildroot.go:174] setting up certificates
	I0501 02:52:18.164235    4712 provision.go:84] configureAuth start
	I0501 02:52:18.164235    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:20.323803    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:20.324816    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:20.324954    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:22.884982    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:22.884982    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:22.884982    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:25.037258    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:25.038231    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:25.038262    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:27.637529    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:27.638462    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:27.638462    4712 provision.go:143] copyHostCerts
	I0501 02:52:27.638661    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0501 02:52:27.638979    4712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0501 02:52:27.639093    4712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0501 02:52:27.639613    4712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0501 02:52:27.640827    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0501 02:52:27.641053    4712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0501 02:52:27.641053    4712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0501 02:52:27.641053    4712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0501 02:52:27.642372    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0501 02:52:27.642643    4712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0501 02:52:27.642762    4712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0501 02:52:27.643264    4712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0501 02:52:27.644242    4712 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-136200-m02 san=[127.0.0.1 172.28.213.142 ha-136200-m02 localhost minikube]
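
The "generating server cert" line lists the subject org and the SANs baked into the Docker server certificate. A sketch of producing an equivalent certificate with crypto/x509, self-signed here for brevity where the real one is signed by the minikube CA key pair named in the log:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-136200-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// the san=[...] list from the log, split into DNS names and IPs
		DNSNames:    []string{"ha-136200-m02", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.28.213.142")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
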
	I0501 02:52:27.843189    4712 provision.go:177] copyRemoteCerts
	I0501 02:52:27.855361    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 02:52:27.855361    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:29.952775    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:29.952775    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:29.953607    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:32.549323    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:32.549356    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:32.549913    4712 sshutil.go:53] new ssh client: &{IP:172.28.213.142 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\id_rsa Username:docker}
	I0501 02:52:32.667202    4712 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8118058s)
	I0501 02:52:32.667353    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0501 02:52:32.667882    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0501 02:52:32.721631    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0501 02:52:32.721631    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 02:52:32.771533    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0501 02:52:32.772177    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0501 02:52:32.825532    4712 provision.go:87] duration metric: took 14.6610374s to configureAuth
	I0501 02:52:32.825532    4712 buildroot.go:189] setting minikube options for container-runtime
	I0501 02:52:32.826094    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:52:32.826229    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:34.944371    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:34.945326    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:34.945326    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:37.500533    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:37.500590    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:37.506891    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:52:37.507395    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.142 22 <nil> <nil>}
	I0501 02:52:37.507476    4712 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0501 02:52:37.655757    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0501 02:52:37.655757    4712 buildroot.go:70] root file system type: tmpfs
	I0501 02:52:37.655757    4712 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0501 02:52:37.656297    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:39.802845    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:39.802845    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:39.803012    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:42.365445    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:42.366335    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:42.372086    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:52:42.372086    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.142 22 <nil> <nil>}
	I0501 02:52:42.372086    4712 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.28.217.218"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0501 02:52:42.560633    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.28.217.218
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0501 02:52:42.560698    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:44.723552    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:44.723552    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:44.724351    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:47.350624    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:47.350694    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:47.356560    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:52:47.356887    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.142 22 <nil> <nil>}
	I0501 02:52:47.356887    4712 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0501 02:52:49.658910    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0501 02:52:49.658910    4712 machine.go:97] duration metric: took 46.4112065s to provisionDockerMachine
	I0501 02:52:49.659442    4712 client.go:171] duration metric: took 1m57.7858628s to LocalClient.Create
	I0501 02:52:49.659442    4712 start.go:167] duration metric: took 1m57.786395s to libmachine.API.Create "ha-136200"
	I0501 02:52:49.659503    4712 start.go:293] postStartSetup for "ha-136200-m02" (driver="hyperv")
	I0501 02:52:49.659537    4712 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 02:52:49.675636    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 02:52:49.675636    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:51.837386    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:51.837492    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:51.837492    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:54.474409    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:54.475041    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:54.475353    4712 sshutil.go:53] new ssh client: &{IP:172.28.213.142 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\id_rsa Username:docker}
	I0501 02:52:54.588525    4712 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9128536s)
	I0501 02:52:54.605879    4712 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 02:52:54.614578    4712 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 02:52:54.614578    4712 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0501 02:52:54.615019    4712 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0501 02:52:54.615983    4712 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> 142882.pem in /etc/ssl/certs
	I0501 02:52:54.616061    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /etc/ssl/certs/142882.pem
	I0501 02:52:54.630716    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 02:52:54.652380    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /etc/ssl/certs/142882.pem (1708 bytes)
	I0501 02:52:54.707179    4712 start.go:296] duration metric: took 5.0475618s for postStartSetup
	I0501 02:52:54.709671    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:56.857631    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:56.857631    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:56.858662    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:59.468337    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:59.468783    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:59.468965    4712 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json ...
	I0501 02:52:59.470910    4712 start.go:128] duration metric: took 2m7.6009059s to createHost
	I0501 02:52:59.471488    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:53:01.642267    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:53:01.642267    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:01.642528    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:53:04.217977    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:53:04.217977    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:04.224906    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:53:04.225471    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.142 22 <nil> <nil>}
	I0501 02:53:04.225635    4712 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0501 02:53:04.373720    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714531984.377348123
	
	I0501 02:53:04.373720    4712 fix.go:216] guest clock: 1714531984.377348123
	I0501 02:53:04.373720    4712 fix.go:229] Guest: 2024-05-01 02:53:04.377348123 +0000 UTC Remote: 2024-05-01 02:52:59.4709109 +0000 UTC m=+340.350757801 (delta=4.906437223s)
	I0501 02:53:04.373851    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:53:06.539924    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:53:06.539924    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:06.540324    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:53:09.204905    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:53:09.204905    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:09.211685    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:53:09.212163    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.142 22 <nil> <nil>}
	I0501 02:53:09.212163    4712 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714531984
	I0501 02:53:09.386381    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed May  1 02:53:04 UTC 2024
	
	I0501 02:53:09.386381    4712 fix.go:236] clock set: Wed May  1 02:53:04 UTC 2024
	 (err=<nil>)
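
The clock fix-up above reads the guest's epoch time over SSH (date +%s.%N), computes the delta against the host (4.9s here), and rewrites the guest clock with date -s. A sketch of that bookkeeping; sshOutput replays the logged guest answer so the sketch runs anywhere, and the real code's threshold and rounding may differ:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// sshOutput stands in for the provisioner's SSH runner (hypothetical helper);
// it returns the guest's answer captured in the log.
func sshOutput(cmd string) string { return "1714531984.377348123" }

func main() {
	raw := strings.SplitN(strings.TrimSpace(sshOutput("date +%s.%N")), ".", 2)
	sec, err := strconv.ParseInt(raw[0], 10, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(sec, 0)
	delta := time.Since(guest) // >0 when the guest clock lags the host
	fmt.Printf("guest clock delta: %v\n", delta)
	if delta > time.Second || delta < -time.Second {
		// the logged fix-up overwrites the guest clock wholesale
		fmt.Printf("would run: sudo date -s @%d\n", guest.Unix())
	}
}
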
	I0501 02:53:09.386381    4712 start.go:83] releasing machines lock for "ha-136200-m02", held for 2m17.5170158s
	I0501 02:53:09.386381    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:53:11.545475    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:53:11.545475    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:11.545938    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:53:14.171918    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:53:14.171918    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:14.175393    4712 out.go:177] * Found network options:
	I0501 02:53:14.178428    4712 out.go:177]   - NO_PROXY=172.28.217.218
	W0501 02:53:14.181305    4712 proxy.go:119] fail to check proxy env: Error ip not in block
	I0501 02:53:14.183961    4712 out.go:177]   - NO_PROXY=172.28.217.218
	W0501 02:53:14.186016    4712 proxy.go:119] fail to check proxy env: Error ip not in block
	W0501 02:53:14.186987    4712 proxy.go:119] fail to check proxy env: Error ip not in block
	I0501 02:53:14.190185    4712 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 02:53:14.190185    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:53:14.201210    4712 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0501 02:53:14.201210    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:53:16.402596    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:53:16.402596    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:16.402596    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:53:16.404382    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:53:16.404922    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:16.404922    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:53:19.202467    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:53:19.202936    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:19.203019    4712 sshutil.go:53] new ssh client: &{IP:172.28.213.142 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\id_rsa Username:docker}
	I0501 02:53:19.238045    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:53:19.238494    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:19.238494    4712 sshutil.go:53] new ssh client: &{IP:172.28.213.142 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\id_rsa Username:docker}
	I0501 02:53:19.303673    4712 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.1023631s)
	W0501 02:53:19.303730    4712 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 02:53:19.322303    4712 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 02:53:19.425813    4712 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.234512s)
	I0501 02:53:19.425813    4712 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0501 02:53:19.425869    4712 start.go:494] detecting cgroup driver to use...
	I0501 02:53:19.426179    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 02:53:19.480110    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0501 02:53:19.516304    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0501 02:53:19.540429    4712 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0501 02:53:19.554725    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0501 02:53:19.592793    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 02:53:19.638122    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0501 02:53:19.676636    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 02:53:19.716798    4712 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 02:53:19.755079    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0501 02:53:19.792962    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0501 02:53:19.828507    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0501 02:53:19.864630    4712 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 02:53:19.900003    4712 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 02:53:19.933687    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:53:20.164043    4712 ssh_runner.go:195] Run: sudo systemctl restart containerd
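
The run of sed calls rewrites /etc/containerd/config.toml in place; the key one forces SystemdCgroup = false so containerd matches the "cgroupfs" driver chosen above. The Go equivalent of that substitution, applied to a sample config fragment:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
`
	// mirrors: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}
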
	I0501 02:53:20.200981    4712 start.go:494] detecting cgroup driver to use...
	I0501 02:53:20.214486    4712 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0501 02:53:20.252522    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 02:53:20.291404    4712 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 02:53:20.342446    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 02:53:20.384719    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 02:53:20.433485    4712 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0501 02:53:20.493558    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 02:53:20.521863    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 02:53:20.572266    4712 ssh_runner.go:195] Run: which cri-dockerd
	I0501 02:53:20.592650    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0501 02:53:20.612894    4712 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0501 02:53:20.662972    4712 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0501 02:53:20.893661    4712 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0501 02:53:21.103995    4712 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0501 02:53:21.104092    4712 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
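
The 130-byte daemon.json is scp'd from memory, so its contents never appear in the log. As an assumption, a daemon.json that switches Docker to the cgroupfs driver would minimally look like what this sketch writes; the exact payload minikube generates may carry more keys:

package main

import "os"

func main() {
	// assumed minimal shape for the cgroupfs switch; NOT the logged payload
	daemon := []byte(`{
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
`)
	if err := os.WriteFile("daemon.json", daemon, 0644); err != nil {
		panic(err)
	}
}
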
	I0501 02:53:21.153897    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:53:21.367769    4712 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0501 02:53:23.926290    4712 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5584356s)
	I0501 02:53:23.942886    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0501 02:53:23.985733    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0501 02:53:24.029327    4712 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0501 02:53:24.262777    4712 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0501 02:53:24.474412    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:53:24.701708    4712 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0501 02:53:24.747995    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0501 02:53:24.789968    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:53:25.013627    4712 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0501 02:53:25.132301    4712 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0501 02:53:25.147412    4712 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0501 02:53:25.161719    4712 start.go:562] Will wait 60s for crictl version
	I0501 02:53:25.177972    4712 ssh_runner.go:195] Run: which crictl
	I0501 02:53:25.198441    4712 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 02:53:25.257309    4712 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0501 02:53:25.270183    4712 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0501 02:53:25.317675    4712 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0501 02:53:25.366446    4712 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0501 02:53:25.369267    4712 out.go:177]   - env NO_PROXY=172.28.217.218
	I0501 02:53:25.371205    4712 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0501 02:53:25.375182    4712 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0501 02:53:25.375182    4712 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0501 02:53:25.375182    4712 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0501 02:53:25.375182    4712 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:88:d7:f1 Flags:up|broadcast|multicast|running}
	I0501 02:53:25.380319    4712 ip.go:210] interface addr: fe80::916c:67e8:6e10:a19b/64
	I0501 02:53:25.380407    4712 ip.go:210] interface addr: 172.28.208.1/20
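
getIPForInterface scans the host's adapters for one whose name starts with "vEthernet (Default Switch)" and takes its IPv4 address (172.28.208.1 above) as host.minikube.internal. A sketch of the same scan with the standard library:

package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, ifc := range ifaces {
		if !strings.HasPrefix(ifc.Name, "vEthernet (Default Switch)") {
			continue // e.g. "Ethernet 2", "Loopback Pseudo-Interface 1" in the log
		}
		addrs, _ := ifc.Addrs()
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() != nil {
				fmt.Println(ipnet.IP) // 172.28.208.1 in the run above
			}
		}
	}
}
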
	I0501 02:53:25.393209    4712 ssh_runner.go:195] Run: grep 172.28.208.1	host.minikube.internal$ /etc/hosts
	I0501 02:53:25.400057    4712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 02:53:25.423648    4712 mustload.go:65] Loading cluster: ha-136200
	I0501 02:53:25.424611    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:53:25.425544    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:53:27.528822    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:53:27.528822    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:27.528822    4712 host.go:66] Checking if "ha-136200" exists ...
	I0501 02:53:27.530295    4712 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200 for IP: 172.28.213.142
	I0501 02:53:27.530371    4712 certs.go:194] generating shared ca certs ...
	I0501 02:53:27.530371    4712 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:53:27.531276    4712 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0501 02:53:27.531739    4712 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0501 02:53:27.531846    4712 certs.go:256] generating profile certs ...
	I0501 02:53:27.532594    4712 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\client.key
	I0501 02:53:27.532748    4712 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.e4130e12
	I0501 02:53:27.532985    4712 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.e4130e12 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.217.218 172.28.213.142 172.28.223.254]
	I0501 02:53:27.709722    4712 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.e4130e12 ...
	I0501 02:53:27.709722    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.e4130e12: {Name:mk2a82749362965014fb3e2d8d662f7a4e7e9cdd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:53:27.711740    4712 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.e4130e12 ...
	I0501 02:53:27.711740    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.e4130e12: {Name:mkb73c4ed44f1dd1b8f82d46a1302578ac6f367b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:53:27.712120    4712 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.e4130e12 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt
	I0501 02:53:27.726267    4712 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.e4130e12 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key
	I0501 02:53:27.727349    4712 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key
	I0501 02:53:27.727349    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0501 02:53:27.727349    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0501 02:53:27.728383    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0501 02:53:27.728582    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0501 02:53:27.728825    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0501 02:53:27.729015    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0501 02:53:27.729253    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0501 02:53:27.729653    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0501 02:53:27.729899    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem (1338 bytes)
	W0501 02:53:27.730538    4712 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288_empty.pem, impossibly tiny 0 bytes
	I0501 02:53:27.730538    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0501 02:53:27.730927    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0501 02:53:27.731437    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0501 02:53:27.731866    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0501 02:53:27.732310    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem (1708 bytes)
	I0501 02:53:27.732905    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:53:27.733131    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem -> /usr/share/ca-certificates/14288.pem
	I0501 02:53:27.733384    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /usr/share/ca-certificates/142882.pem
	I0501 02:53:27.733671    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:53:29.906327    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:53:29.906327    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:29.906678    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:53:32.469869    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:53:32.469869    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:32.470407    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:53:32.580880    4712 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0501 02:53:32.588963    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0501 02:53:32.624993    4712 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0501 02:53:32.635801    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0501 02:53:32.670832    4712 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0501 02:53:32.678812    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0501 02:53:32.713791    4712 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0501 02:53:32.721308    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0501 02:53:32.760244    4712 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0501 02:53:32.767565    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0501 02:53:32.804387    4712 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0501 02:53:32.811905    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0501 02:53:32.832394    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 02:53:32.885891    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 02:53:32.936137    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 02:53:32.994824    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0501 02:53:33.054042    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0501 02:53:33.105998    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0501 02:53:33.156026    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 02:53:33.205426    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 02:53:33.264385    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 02:53:33.316776    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem --> /usr/share/ca-certificates/14288.pem (1338 bytes)
	I0501 02:53:33.368293    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /usr/share/ca-certificates/142882.pem (1708 bytes)
	I0501 02:53:33.420965    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0501 02:53:33.458001    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0501 02:53:33.499072    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0501 02:53:33.534603    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0501 02:53:33.570373    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0501 02:53:33.602430    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0501 02:53:33.635495    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
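The ssh_runner.go:362 lines are file pushes into the guest, sourced either from a host path or from memory. minikube speaks the SCP protocol over its own SSH session; as an approximation, the same push can be sketched with the stock scp client and the machine key shown in the log. The docker login user comes from the sshutil.go:53 line above; everything else here is a stand-in.

package main

import (
	"fmt"
	"os/exec"
)

// pushFile approximates one "scp <local> --> <remote>" step from the log
// using the stock scp client. Not minikube's real transport.
func pushFile(keyPath, ip, local, remote string) error {
	dest := fmt.Sprintf("docker@%s:%s", ip, remote)
	return exec.Command("scp", "-i", keyPath, "-o", "StrictHostKeyChecking=no", local, dest).Run()
}

func main() {
	key := `C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa`
	err := pushFile(key, "172.28.217.218",
		`C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt`,
		"/tmp/ca.crt") // writing straight into /var/lib/minikube/certs needs root on the guest
	fmt.Println(err)
}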
	I0501 02:53:33.684802    4712 ssh_runner.go:195] Run: openssl version
	I0501 02:53:33.709070    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 02:53:33.743711    4712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:53:33.750970    4712 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:12 /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:53:33.765746    4712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:53:33.787709    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 02:53:33.828429    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14288.pem && ln -fs /usr/share/ca-certificates/14288.pem /etc/ssl/certs/14288.pem"
	I0501 02:53:33.866546    4712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14288.pem
	I0501 02:53:33.874255    4712 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:27 /usr/share/ca-certificates/14288.pem
	I0501 02:53:33.888580    4712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14288.pem
	I0501 02:53:33.910501    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14288.pem /etc/ssl/certs/51391683.0"
	I0501 02:53:33.948720    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142882.pem && ln -fs /usr/share/ca-certificates/142882.pem /etc/ssl/certs/142882.pem"
	I0501 02:53:33.993042    4712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142882.pem
	I0501 02:53:34.001989    4712 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:27 /usr/share/ca-certificates/142882.pem
	I0501 02:53:34.015762    4712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142882.pem
	I0501 02:53:34.040058    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142882.pem /etc/ssl/certs/3ec20f2e.0"
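The openssl x509 -hash -noout / ln -fs pairs above install each CA into OpenSSL's hashed certificate directory: lookup in /etc/ssl/certs is by subject hash, with a trailing .0 collision counter, so each symlink must be named after the hash of the cert it points to. A small sketch of computing that name (the local file name is hypothetical):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHash returns OpenSSL's subject hash for a PEM certificate, the
// value used to name the /etc/ssl/certs/<hash>.0 symlinks in the log.
func subjectHash(pemPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	h, err := subjectHash("minikubeCA.pem") // hypothetical local copy of the CA
	if err != nil {
		panic(err)
	}
	// b5213941 for minikubeCA.pem in this run; ".0" is a collision counter.
	fmt.Printf("sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/%s.0\n", h)
}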
	I0501 02:53:34.077501    4712 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 02:53:34.086036    4712 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0501 02:53:34.086573    4712 kubeadm.go:928] updating node {m02 172.28.213.142 8443 v1.30.0 docker true true} ...
	I0501 02:53:34.086726    4712 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-136200-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.213.142
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP:172.28.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 02:53:34.086726    4712 kube-vip.go:111] generating kube-vip config ...
	I0501 02:53:34.101653    4712 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0501 02:53:34.130866    4712 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0501 02:53:34.131029    4712 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.28.223.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
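kube-vip.go:133 prints the manifest it just rendered: a static pod pinned to the control plane whose env block carries the VIP (172.28.223.254), port, interface, and leader-election settings. A trimmed sketch of rendering such a manifest from a Go text/template, keeping only the per-cluster fields as parameters; this is the same rendering technique, not minikube's actual template.

package main

import (
	"os"
	"text/template"
)

// A cut-down stand-in for the manifest above; only the fields that vary
// per cluster (VIP, port, interface, image) are parameterized.
const kubeVipTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args: ["manager"]
    env:
    - {name: port, value: "{{.Port}}"}
    - {name: vip_interface, value: {{.Interface}}}
    - {name: cp_enable, value: "true"}
    - {name: address, value: {{.VIP}}}
    image: {{.Image}}
    name: kube-vip
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
	_ = t.Execute(os.Stdout, map[string]string{
		"Port": "8443", "Interface": "eth0",
		"VIP": "172.28.223.254", "Image": "ghcr.io/kube-vip/kube-vip:v0.7.1",
	})
}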
	I0501 02:53:34.145238    4712 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 02:53:34.165400    4712 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0501 02:53:34.180369    4712 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0501 02:53:34.204849    4712 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet
	I0501 02:53:34.204849    4712 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm
	I0501 02:53:34.204849    4712 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl
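Each download.go:107 line pairs the binary URL with a checksum=file: URL, meaning the artifact is verified against the published .sha256 before being cached. A self-contained sketch of that download-and-verify step; the function name is made up and error handling is abbreviated.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetchChecked downloads url to dest and verifies it against the hex
// digest published at url+".sha256", mirroring the checksum=file: URLs.
func fetchChecked(url, dest string) error {
	sumResp, err := http.Get(url + ".sha256")
	if err != nil {
		return err
	}
	defer sumResp.Body.Close()
	sumBytes, _ := io.ReadAll(sumResp.Body)
	want := strings.Fields(string(sumBytes))[0] // first field is the hex digest

	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	f, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s want %s", got, want)
	}
	return nil
}

func main() {
	fmt.Println(fetchChecked("https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl", "kubectl"))
}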
	I0501 02:53:35.468257    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0501 02:53:35.481254    4712 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0501 02:53:35.488247    4712 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0501 02:53:35.489247    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0501 02:53:35.546630    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0501 02:53:35.559624    4712 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0501 02:53:35.626048    4712 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0501 02:53:35.627145    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0501 02:53:36.028150    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:53:36.077312    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0501 02:53:36.090870    4712 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0501 02:53:36.109939    4712 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0501 02:53:36.111871    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
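Before each binary transfer, ssh_runner.go:352 runs a stat on the target path; exit status 1 means the file is missing, so the cached copy is pushed. A sketch of that existence check over plain ssh (login user and flags are assumptions, as before):

package main

import (
	"fmt"
	"os/exec"
)

// remoteExists reports whether path exists in the guest by running stat
// over ssh, the same check the ssh_runner.go:352 lines log above
// (a non-zero exit from stat means "missing, copy it").
func remoteExists(keyPath, ip, path string) bool {
	cmd := exec.Command("ssh", "-i", keyPath, "-o", "StrictHostKeyChecking=no",
		"docker@"+ip, "stat", "-c", "%s %y", path)
	return cmd.Run() == nil
}

func main() {
	key := `C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa`
	if !remoteExists(key, "172.28.217.218", "/var/lib/minikube/binaries/v1.30.0/kubelet") {
		fmt.Println("kubelet missing; push the cached binary as in the log")
	}
}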
	I0501 02:53:36.821139    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0501 02:53:36.843821    4712 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0501 02:53:36.878070    4712 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 02:53:36.917707    4712 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0501 02:53:36.971960    4712 ssh_runner.go:195] Run: grep 172.28.223.254	control-plane.minikube.internal$ /etc/hosts
	I0501 02:53:36.979482    4712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.223.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
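The one-liner above pins control-plane.minikube.internal to the VIP idempotently: grep -v drops any stale entry, echo appends the fresh tab-separated mapping, and the temp file is copied back with sudo so /etc/hosts keeps root ownership. A sketch that rebuilds the same command string, showing the subtlety that grep's $'\t' needs a literal backslash-t while the echo carries a real tab:

package main

import "fmt"

// hostsPinCmd rebuilds the idempotent /etc/hosts rewrite from the log:
// bash expands $'\t' itself, so the grep pattern gets a literal
// backslash-t, while the echoed entry embeds a real tab character.
func hostsPinCmd(ip, name string) string {
	grep := "{ grep -v $'\\t" + name + "$' \"/etc/hosts\"; "
	echo := "echo \"" + ip + "\t" + name + "\"; } > /tmp/h.$$; "
	return grep + echo + `sudo cp /tmp/h.$$ "/etc/hosts"`
}

func main() {
	fmt.Println(hostsPinCmd("172.28.223.254", "control-plane.minikube.internal"))
}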
	I0501 02:53:37.020702    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:53:37.250249    4712 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 02:53:37.282989    4712 host.go:66] Checking if "ha-136200" exists ...
	I0501 02:53:37.299000    4712 start.go:316] joinCluster: &{Name:ha-136200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP:172.28.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.217.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.213.142 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:53:37.299000    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0501 02:53:37.299000    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:53:39.432833    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:53:39.432833    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:39.433070    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:53:42.011853    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:53:42.011853    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:42.012855    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:53:42.240815    4712 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0": (4.9416996s)
	I0501 02:53:42.240889    4712 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.28.213.142 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0501 02:53:42.240889    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ig07su.dw1rkx9dngecbwrb --discovery-token-ca-cert-hash sha256:4798dcaffc6298c78af5ad06745de18c1231853c65c9db4cd09ab1be96f5e875 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-136200-m02 --control-plane --apiserver-advertise-address=172.28.213.142 --apiserver-bind-port=8443"
	I0501 02:54:27.807891    4712 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ig07su.dw1rkx9dngecbwrb --discovery-token-ca-cert-hash sha256:4798dcaffc6298c78af5ad06745de18c1231853c65c9db4cd09ab1be96f5e875 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-136200-m02 --control-plane --apiserver-advertise-address=172.28.213.142 --apiserver-bind-port=8443": (45.5666728s)
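start.go:342 shows the two-step join: ask the existing control plane for a fresh join command (kubeadm token create --print-join-command --ttl=0, ~4.9s above), then run it on m02 with the HA-specific flags appended (~45.6s). A sketch of the flag assembly, with the token and discovery hash elided:

package main

import "fmt"

// buildJoinCmd appends the control-plane flags minikube adds to the base
// join command printed by kubeadm; the flag values below are taken from
// the log for this run.
func buildJoinCmd(base, nodeName, advertiseIP string, port int) string {
	return fmt.Sprintf(
		"%s --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock"+
			" --node-name=%s --control-plane --apiserver-advertise-address=%s --apiserver-bind-port=%d",
		base, nodeName, advertiseIP, port)
}

func main() {
	base := "kubeadm join control-plane.minikube.internal:8443 --token <redacted> --discovery-token-ca-cert-hash sha256:<hash>"
	fmt.Println(buildJoinCmd(base, "ha-136200-m02", "172.28.213.142", 8443))
}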
	I0501 02:54:27.808014    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0501 02:54:28.660694    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-136200-m02 minikube.k8s.io/updated_at=2024_05_01T02_54_28_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e minikube.k8s.io/name=ha-136200 minikube.k8s.io/primary=false
	I0501 02:54:28.861404    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-136200-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0501 02:54:29.035785    4712 start.go:318] duration metric: took 51.7364106s to joinCluster
	I0501 02:54:29.035979    4712 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.28.213.142 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0501 02:54:29.038999    4712 out.go:177] * Verifying Kubernetes components...
	I0501 02:54:29.036818    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:54:29.055991    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:54:29.482004    4712 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 02:54:29.532870    4712 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0501 02:54:29.534181    4712 kapi.go:59] client config for ha-136200: &rest.Config{Host:"https://172.28.223.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-136200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-136200\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1b95ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0501 02:54:29.534386    4712 kubeadm.go:477] Overriding stale ClientConfig host https://172.28.223.254:8443 with https://172.28.217.218:8443
	I0501 02:54:29.535958    4712 node_ready.go:35] waiting up to 6m0s for node "ha-136200-m02" to be "Ready" ...
	I0501 02:54:29.536236    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:29.536236    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:29.536236    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:29.536353    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:29.609745    4712 round_trippers.go:574] Response Status: 200 OK in 73 milliseconds
	I0501 02:54:30.045557    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:30.045557    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:30.045557    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:30.045557    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:30.051535    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:30.542020    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:30.542083    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:30.542148    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:30.542148    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:30.549047    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:31.050630    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:31.050694    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:31.050694    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:31.050694    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:31.063209    4712 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0501 02:54:31.542025    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:31.542098    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:31.542098    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:31.542098    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:31.548667    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:31.549663    4712 node_ready.go:53] node "ha-136200-m02" has status "Ready":"False"
	I0501 02:54:32.050097    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:32.050097    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:32.050174    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:32.050174    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:32.054568    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:32.542017    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:32.542017    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:32.542017    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:32.542017    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:32.546488    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:33.050866    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:33.050866    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:33.050866    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:33.050866    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:33.056451    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:33.538033    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:33.538033    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:33.538033    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:33.538033    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:33.713541    4712 round_trippers.go:574] Response Status: 200 OK in 175 milliseconds
	I0501 02:54:33.714615    4712 node_ready.go:53] node "ha-136200-m02" has status "Ready":"False"
	I0501 02:54:34.041226    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:34.041226    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:34.041226    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:34.041226    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:34.047903    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:34.547454    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:34.547454    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:34.547757    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:34.547757    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:34.552099    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:35.046877    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:35.046877    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.046877    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.046877    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.052278    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:35.548463    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:35.548463    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.548740    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.548740    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.558660    4712 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0501 02:54:35.560213    4712 node_ready.go:49] node "ha-136200-m02" has status "Ready":"True"
	I0501 02:54:35.560213    4712 node_ready.go:38] duration metric: took 6.0241453s for node "ha-136200-m02" to be "Ready" ...
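node_ready.go polled the node object at roughly 500ms intervals until its NodeReady condition flipped to True (6.02s here, including one 175ms response while the API server was busy). An equivalent wait with client-go, assuming a reachable kubeconfig; this is the same check, not minikube's own round-tripper code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls a node until its NodeReady condition is True,
// mirroring the ~500ms GET loop in the log above.
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitNodeReady(cs, "ha-136200-m02", 6*time.Minute))
}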
	I0501 02:54:35.560332    4712 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 02:54:35.560422    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:54:35.560422    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.560422    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.560422    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.572050    4712 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0501 02:54:35.581777    4712 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2j8mj" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:35.581924    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2j8mj
	I0501 02:54:35.581924    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.581924    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.581924    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.585770    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:54:35.587608    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:54:35.587685    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.587685    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.587685    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.591867    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:35.591867    4712 pod_ready.go:92] pod "coredns-7db6d8ff4d-2j8mj" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:35.591867    4712 pod_ready.go:81] duration metric: took 10.0903ms for pod "coredns-7db6d8ff4d-2j8mj" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:35.591867    4712 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rm4gm" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:35.591867    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rm4gm
	I0501 02:54:35.591867    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.591867    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.591867    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.596249    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:35.597880    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:54:35.597964    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.597964    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.597964    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.602327    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:35.603007    4712 pod_ready.go:92] pod "coredns-7db6d8ff4d-rm4gm" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:35.603007    4712 pod_ready.go:81] duration metric: took 11.1397ms for pod "coredns-7db6d8ff4d-rm4gm" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:35.603007    4712 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:35.604166    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200
	I0501 02:54:35.604211    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.604211    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.604211    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.610508    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:35.611831    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:54:35.611831    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.611831    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.611831    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.627921    4712 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0501 02:54:35.629498    4712 pod_ready.go:92] pod "etcd-ha-136200" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:35.629498    4712 pod_ready.go:81] duration metric: took 26.4906ms for pod "etcd-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:35.629498    4712 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:35.629498    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:35.629498    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.629498    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.629498    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.638393    4712 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0501 02:54:35.638911    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:35.638911    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.638911    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.639550    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.643473    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:54:36.140037    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:36.140167    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:36.140167    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:36.140167    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:36.148123    4712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:54:36.149580    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:36.149580    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:36.149659    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:36.149659    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:36.155530    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:36.644340    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:36.644340    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:36.644340    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:36.644340    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:36.651321    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:36.652588    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:36.653128    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:36.653128    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:36.653128    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:36.660377    4712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:54:37.144534    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:37.144656    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:37.144656    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:37.144656    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:37.150598    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:37.152092    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:37.152665    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:37.152665    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:37.152665    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:37.160441    4712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:54:37.644104    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:37.644239    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:37.644239    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:37.644239    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:37.649836    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:37.650604    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:37.650671    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:37.650671    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:37.650671    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:37.654947    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:37.656164    4712 pod_ready.go:102] pod "etcd-ha-136200-m02" in "kube-system" namespace has status "Ready":"False"
	I0501 02:54:38.142505    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:38.142505    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:38.142505    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:38.142505    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:38.149100    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:38.151258    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:38.151347    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:38.151347    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:38.151347    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:38.155677    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:38.643186    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:38.643241    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:38.643241    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:38.643241    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:38.650578    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:38.651873    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:38.651902    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:38.651902    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:38.651902    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:38.655946    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:39.142681    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:39.142681    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:39.142681    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:39.142681    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:39.148315    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:39.149953    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:39.150203    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:39.150203    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:39.150203    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:39.154771    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:39.643364    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:39.643364    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:39.643364    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:39.643364    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:39.649703    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:39.650947    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:39.650947    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:39.651009    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:39.651009    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:39.654949    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:54:39.656190    4712 pod_ready.go:102] pod "etcd-ha-136200-m02" in "kube-system" namespace has status "Ready":"False"
	I0501 02:54:40.142428    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:40.142428    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:40.142676    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:40.142676    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:40.148562    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:40.149792    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:40.149792    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:40.149792    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:40.149792    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:40.154808    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:40.644095    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:40.644095    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:40.644095    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:40.644095    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:40.650441    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:40.651544    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:40.651598    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:40.651598    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:40.651598    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:40.662172    4712 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0501 02:54:41.143094    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:41.143187    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:41.143187    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:41.143187    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:41.148870    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:41.150018    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:41.150018    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:41.150018    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:41.150018    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:41.156810    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:41.640508    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:41.640624    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:41.640624    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:41.640624    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:41.646018    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:41.646730    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:41.647318    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:41.647318    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:41.647318    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:41.652880    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:42.139900    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:42.139985    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:42.139985    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:42.139985    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:42.145577    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:42.146383    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:42.146383    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:42.146448    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:42.146448    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:42.151141    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:54:42.151862    4712 pod_ready.go:102] pod "etcd-ha-136200-m02" in "kube-system" namespace has status "Ready":"False"
	I0501 02:54:42.639271    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:42.639271    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:42.639271    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:42.639271    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:42.642318    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:54:42.646671    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:42.646671    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:42.646671    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:42.646671    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:42.651360    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:43.137151    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:43.137496    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.137496    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.137496    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.141750    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:43.142959    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:43.142959    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.142959    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.142959    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.147560    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:43.641950    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:43.641985    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.641985    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.641985    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.647599    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:43.649299    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:43.649350    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.649350    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.649350    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.657034    4712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:54:43.658043    4712 pod_ready.go:92] pod "etcd-ha-136200-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:43.658043    4712 pod_ready.go:81] duration metric: took 8.0284866s for pod "etcd-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
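pod_ready.go applies the same pattern per pod: a pod counts as "Ready" once its PodReady condition reports True, which is why etcd-ha-136200-m02 took ~8s (its etcd member had to finish joining) while pods already running returned in milliseconds. The predicate, sketched with the client-go types:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady mirrors the check behind the pod_ready.go:92 lines above:
// a pod is "Ready" once its PodReady condition reports True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
		{Type: corev1.PodReady, Status: corev1.ConditionTrue},
	}}}
	fmt.Println(isPodReady(pod)) // true
}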
	I0501 02:54:43.658043    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:43.658043    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-136200
	I0501 02:54:43.658043    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.658043    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.658043    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.664394    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:43.664394    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:54:43.664394    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.664394    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.664394    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.668848    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:43.669848    4712 pod_ready.go:92] pod "kube-apiserver-ha-136200" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:43.669848    4712 pod_ready.go:81] duration metric: took 11.805ms for pod "kube-apiserver-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:43.669848    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:43.669848    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-136200-m02
	I0501 02:54:43.669848    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.669848    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.670830    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.674754    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:54:43.676699    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:43.676699    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.676699    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.676699    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.681632    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:43.683231    4712 pod_ready.go:92] pod "kube-apiserver-ha-136200-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:43.683231    4712 pod_ready.go:81] duration metric: took 13.3825ms for pod "kube-apiserver-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:43.683231    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:43.683412    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-136200
	I0501 02:54:43.683412    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.683412    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.683412    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.688589    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:43.690255    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:54:43.690255    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.690325    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.690325    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.695853    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:43.696818    4712 pod_ready.go:92] pod "kube-controller-manager-ha-136200" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:43.696860    4712 pod_ready.go:81] duration metric: took 13.6296ms for pod "kube-controller-manager-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:43.696912    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:43.696993    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-136200-m02
	I0501 02:54:43.697029    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.697029    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.697029    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.701912    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:43.703032    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:43.703736    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.703736    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.703736    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.706383    4712 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:54:43.707734    4712 pod_ready.go:92] pod "kube-controller-manager-ha-136200-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:43.707824    4712 pod_ready.go:81] duration metric: took 10.9115ms for pod "kube-controller-manager-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:43.707824    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8f67k" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:43.845210    4712 request.go:629] Waited for 137.1807ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8f67k
	I0501 02:54:43.845681    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8f67k
	I0501 02:54:43.845681    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.845681    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.845681    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.851000    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
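The "Waited for ... due to client-side throttling" lines above come from client-go's token-bucket rate limiter, which is configured per client via the QPS and Burst fields on rest.Config; as the message itself says, these are not API Priority and Fairness delays. A minimal sketch of where those knobs live (the QPS/Burst values here are illustrative, not minikube's defaults):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	// Raising QPS/Burst widens the client-side token bucket and removes
    	// the "Waited for ... due to client-side throttling" delays seen above.
    	cfg.QPS = 50
    	cfg.Burst = 100

    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-8f67k", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(pod.Name, pod.Status.Phase)
    }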
	I0501 02:54:44.047046    4712 request.go:629] Waited for 194.7517ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:54:44.047200    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:54:44.047200    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:44.047200    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:44.047200    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:44.052548    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:44.053735    4712 pod_ready.go:92] pod "kube-proxy-8f67k" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:44.053735    4712 pod_ready.go:81] duration metric: took 345.9086ms for pod "kube-proxy-8f67k" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:44.053735    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zj5jv" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:44.250128    4712 request.go:629] Waited for 196.1147ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zj5jv
	I0501 02:54:44.250128    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zj5jv
	I0501 02:54:44.250128    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:44.250128    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:44.250128    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:44.254761    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:44.456435    4712 request.go:629] Waited for 200.6839ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:44.456435    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:44.456435    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:44.456435    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:44.456435    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:44.461480    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:44.462518    4712 pod_ready.go:92] pod "kube-proxy-zj5jv" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:44.462578    4712 pod_ready.go:81] duration metric: took 408.7057ms for pod "kube-proxy-zj5jv" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:44.462578    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:44.648779    4712 request.go:629] Waited for 185.8104ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200
	I0501 02:54:44.648953    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200
	I0501 02:54:44.648953    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:44.648953    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:44.649128    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:44.654457    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:44.855621    4712 request.go:629] Waited for 199.4812ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:54:44.855706    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:54:44.855706    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:44.855706    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:44.855706    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:44.861147    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:44.861147    4712 pod_ready.go:92] pod "kube-scheduler-ha-136200" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:44.861699    4712 pod_ready.go:81] duration metric: took 399.1179ms for pod "kube-scheduler-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:44.861778    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:45.042766    4712 request.go:629] Waited for 180.9309ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200-m02
	I0501 02:54:45.042766    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200-m02
	I0501 02:54:45.042766    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:45.042766    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:45.042766    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:45.047379    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:45.244553    4712 request.go:629] Waited for 197.0101ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:45.244553    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:45.244553    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:45.244553    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:45.244553    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:45.250870    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:45.252485    4712 pod_ready.go:92] pod "kube-scheduler-ha-136200-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:45.252485    4712 pod_ready.go:81] duration metric: took 390.7033ms for pod "kube-scheduler-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:45.252547    4712 pod_ready.go:38] duration metric: took 9.6921442s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
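Each pod_ready.go wait above boils down to a GET of the pod followed by a check of its PodReady condition (plus a node lookup). A hedged sketch of that predicate with client-go; checkPodReady and the 500ms poll interval are my own names and choices, only the 6m0s budget is taken from the log:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // checkPodReady reports whether the pod's PodReady condition is True.
    func checkPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Poll every 500ms, give up after 6m0s -- the same budget the log uses.
    	err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-scheduler-ha-136200", metav1.GetOptions{})
    		if err != nil {
    			return false, nil // transient API errors: keep polling
    		}
    		return checkPodReady(pod), nil
    	})
    	fmt.Println("ready:", err == nil)
    }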
	I0501 02:54:45.252619    4712 api_server.go:52] waiting for apiserver process to appear ...
	I0501 02:54:45.266607    4712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:54:45.298538    4712 api_server.go:72] duration metric: took 16.2624407s to wait for apiserver process to appear ...
	I0501 02:54:45.298538    4712 api_server.go:88] waiting for apiserver healthz status ...
	I0501 02:54:45.298642    4712 api_server.go:253] Checking apiserver healthz at https://172.28.217.218:8443/healthz ...
	I0501 02:54:45.308804    4712 api_server.go:279] https://172.28.217.218:8443/healthz returned 200:
	ok
	I0501 02:54:45.308804    4712 round_trippers.go:463] GET https://172.28.217.218:8443/version
	I0501 02:54:45.308804    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:45.308804    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:45.308804    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:45.310764    4712 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0501 02:54:45.311165    4712 api_server.go:141] control plane version: v1.30.0
	I0501 02:54:45.311238    4712 api_server.go:131] duration metric: took 12.7003ms to wait for apiserver health ...
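The healthz/version sequence is two plain GETs: /healthz must return 200 with body "ok", then /version yields the control-plane version (v1.30.0 above). A minimal net/http sketch; it skips TLS verification for brevity, whereas the real client presents the cluster client certificate, and depending on RBAC anonymous access to these paths may be rejected:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    )

    func main() {
    	// A real client would load the cluster CA instead of skipping
    	// verification; this sketch only mirrors the two GETs in the log.
    	c := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    	}}

    	for _, path := range []string{"/healthz", "/version"} {
    		resp, err := c.Get("https://172.28.217.218:8443" + path)
    		if err != nil {
    			panic(err)
    		}
    		body, _ := io.ReadAll(resp.Body)
    		resp.Body.Close()
    		fmt.Printf("%s -> %d: %s\n", path, resp.StatusCode, body)
    	}
    }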
	I0501 02:54:45.311238    4712 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 02:54:45.446869    4712 request.go:629] Waited for 135.3903ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:54:45.446869    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:54:45.446869    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:45.446869    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:45.446869    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:45.455463    4712 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0501 02:54:45.466055    4712 system_pods.go:59] 17 kube-system pods found
	I0501 02:54:45.466055    4712 system_pods.go:61] "coredns-7db6d8ff4d-2j8mj" [f945c979-ae51-4c8e-acf9-105adc3c83bc] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "coredns-7db6d8ff4d-rm4gm" [87b284b3-e8e1-452a-8c72-41a8bec62505] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "etcd-ha-136200" [509a726d-e9a1-4922-8e7e-f3d91ddef75f] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "etcd-ha-136200-m02" [8122eb28-1fdf-4ddf-ab30-c29e8bcb83c0] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kindnet-kb2x4" [6e660648-3dce-469f-a2c2-c99f445ceb20] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kindnet-sj2rc" [c0e605a0-1182-4977-a8ba-fabe9617bd3c] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-apiserver-ha-136200" [53ea7d41-7132-4c89-9dbd-bedb2267b55f] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-apiserver-ha-136200-m02" [fc4036e1-5cc9-4f27-8299-97ee4a29e8b4] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-controller-manager-ha-136200" [4c988ab2-e056-4a0e-88c9-b62839c84d9f] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-controller-manager-ha-136200-m02" [7a617a7e-7413-4f42-bfe2-763b7ace71ca] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-proxy-8f67k" [9dedea03-3066-4852-98e2-10190699b2c5] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-proxy-zj5jv" [1802b341-6ac6-46b0-99a3-db02ae5d8e46] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-scheduler-ha-136200" [6be37365-544a-4367-9852-6eaa5b60e6ad] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-scheduler-ha-136200-m02" [b2ae6bb2-989b-4598-99e3-f8494b006f3e] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-vip-ha-136200" [f6f631ac-0ba9-413a-8810-8a80e4be81b8] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-vip-ha-136200-m02" [598e76fa-0703-40eb-a62c-f3947f06d0e0] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "storage-provisioner" [ca2bdb41-e7f0-4aa4-b343-0e3a85a4c04e] Running
	I0501 02:54:45.466055    4712 system_pods.go:74] duration metric: took 154.8157ms to wait for pod list to return data ...
	I0501 02:54:45.466055    4712 default_sa.go:34] waiting for default service account to be created ...
	I0501 02:54:45.650374    4712 request.go:629] Waited for 183.8749ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/default/serviceaccounts
	I0501 02:54:45.650461    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/default/serviceaccounts
	I0501 02:54:45.650461    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:45.650566    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:45.650566    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:45.661544    4712 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0501 02:54:45.662734    4712 default_sa.go:45] found service account: "default"
	I0501 02:54:45.662869    4712 default_sa.go:55] duration metric: took 196.812ms for default service account to be created ...
	I0501 02:54:45.662869    4712 system_pods.go:116] waiting for k8s-apps to be running ...
	I0501 02:54:45.853192    4712 request.go:629] Waited for 189.9269ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:54:45.853192    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:54:45.853192    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:45.853419    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:45.853419    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:45.865601    4712 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0501 02:54:45.872777    4712 system_pods.go:86] 17 kube-system pods found
	I0501 02:54:45.872777    4712 system_pods.go:89] "coredns-7db6d8ff4d-2j8mj" [f945c979-ae51-4c8e-acf9-105adc3c83bc] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "coredns-7db6d8ff4d-rm4gm" [87b284b3-e8e1-452a-8c72-41a8bec62505] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "etcd-ha-136200" [509a726d-e9a1-4922-8e7e-f3d91ddef75f] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "etcd-ha-136200-m02" [8122eb28-1fdf-4ddf-ab30-c29e8bcb83c0] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kindnet-kb2x4" [6e660648-3dce-469f-a2c2-c99f445ceb20] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kindnet-sj2rc" [c0e605a0-1182-4977-a8ba-fabe9617bd3c] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kube-apiserver-ha-136200" [53ea7d41-7132-4c89-9dbd-bedb2267b55f] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kube-apiserver-ha-136200-m02" [fc4036e1-5cc9-4f27-8299-97ee4a29e8b4] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kube-controller-manager-ha-136200" [4c988ab2-e056-4a0e-88c9-b62839c84d9f] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kube-controller-manager-ha-136200-m02" [7a617a7e-7413-4f42-bfe2-763b7ace71ca] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kube-proxy-8f67k" [9dedea03-3066-4852-98e2-10190699b2c5] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kube-proxy-zj5jv" [1802b341-6ac6-46b0-99a3-db02ae5d8e46] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kube-scheduler-ha-136200" [6be37365-544a-4367-9852-6eaa5b60e6ad] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kube-scheduler-ha-136200-m02" [b2ae6bb2-989b-4598-99e3-f8494b006f3e] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kube-vip-ha-136200" [f6f631ac-0ba9-413a-8810-8a80e4be81b8] Running
	I0501 02:54:45.873359    4712 system_pods.go:89] "kube-vip-ha-136200-m02" [598e76fa-0703-40eb-a62c-f3947f06d0e0] Running
	I0501 02:54:45.873359    4712 system_pods.go:89] "storage-provisioner" [ca2bdb41-e7f0-4aa4-b343-0e3a85a4c04e] Running
	I0501 02:54:45.873383    4712 system_pods.go:126] duration metric: took 210.5126ms to wait for k8s-apps to be running ...
	I0501 02:54:45.873383    4712 system_svc.go:44] waiting for kubelet service to be running ....
	I0501 02:54:45.886040    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:54:45.914966    4712 system_svc.go:56] duration metric: took 41.5829ms WaitForService to wait for kubelet
	I0501 02:54:45.915054    4712 kubeadm.go:576] duration metric: took 16.8789526s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 02:54:45.915054    4712 node_conditions.go:102] verifying NodePressure condition ...
	I0501 02:54:46.043164    4712 request.go:629] Waited for 127.8974ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes
	I0501 02:54:46.043164    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes
	I0501 02:54:46.043164    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:46.043164    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:46.043310    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:46.050320    4712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:54:46.051501    4712 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 02:54:46.051501    4712 node_conditions.go:123] node cpu capacity is 2
	I0501 02:54:46.051501    4712 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 02:54:46.051501    4712 node_conditions.go:123] node cpu capacity is 2
	I0501 02:54:46.051501    4712 node_conditions.go:105] duration metric: took 136.4457ms to run NodePressure ...
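The NodePressure step lists all nodes and reads each node's capacity, which is where the "17734596Ki" ephemeral-storage and "2" cpu figures above come from. A sketch of reading those fields with client-go:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		// Capacity carries the ephemeral-storage and cpu quantities
    		// printed in the node_conditions lines above.
    		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n",
    			n.Name,
    			n.Status.Capacity.StorageEphemeral().String(),
    			n.Status.Capacity.Cpu().String())
    	}
    }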
	I0501 02:54:46.051501    4712 start.go:240] waiting for startup goroutines ...
	I0501 02:54:46.051501    4712 start.go:254] writing updated cluster config ...
	I0501 02:54:46.055981    4712 out.go:177] 
	I0501 02:54:46.073210    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:54:46.073681    4712 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json ...
	I0501 02:54:46.079155    4712 out.go:177] * Starting "ha-136200-m03" control-plane node in "ha-136200" cluster
	I0501 02:54:46.082550    4712 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0501 02:54:46.082550    4712 cache.go:56] Caching tarball of preloaded images
	I0501 02:54:46.083028    4712 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0501 02:54:46.083223    4712 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0501 02:54:46.083384    4712 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json ...
	I0501 02:54:46.091748    4712 start.go:360] acquireMachinesLock for ha-136200-m03: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 02:54:46.091748    4712 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-136200-m03"
	I0501 02:54:46.091748    4712 start.go:93] Provisioning new machine with config: &{Name:ha-136200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP:172.28.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.217.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.213.142 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
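The machine config above is a single Go struct that profile.go serializes to config.json. A trimmed sketch of just the Nodes portion, using only field names visible in the dump (the real minikube ClusterConfig carries many more fields):

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // Node mirrors a subset of the fields visible in the config dump above.
    type Node struct {
    	Name              string
    	IP                string
    	Port              int
    	KubernetesVersion string
    	ContainerRuntime  string
    	ControlPlane      bool
    	Worker            bool
    }

    func main() {
    	nodes := []Node{
    		{Name: "", IP: "172.28.217.218", Port: 8443, KubernetesVersion: "v1.30.0", ContainerRuntime: "docker", ControlPlane: true, Worker: true},
    		{Name: "m02", IP: "172.28.213.142", Port: 8443, KubernetesVersion: "v1.30.0", ContainerRuntime: "docker", ControlPlane: true, Worker: true},
    		{Name: "m03", Port: 8443, KubernetesVersion: "v1.30.0", ContainerRuntime: "docker", ControlPlane: true, Worker: true}, // IP not yet assigned
    	}
    	out, _ := json.MarshalIndent(nodes, "", "  ")
    	fmt.Println(string(out))
    }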
	I0501 02:54:46.091748    4712 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0501 02:54:46.099863    4712 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0501 02:54:46.100178    4712 start.go:159] libmachine.API.Create for "ha-136200" (driver="hyperv")
	I0501 02:54:46.100178    4712 client.go:168] LocalClient.Create starting
	I0501 02:54:46.100178    4712 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0501 02:54:46.100824    4712 main.go:141] libmachine: Decoding PEM data...
	I0501 02:54:46.100824    4712 main.go:141] libmachine: Parsing certificate...
	I0501 02:54:46.101128    4712 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0501 02:54:46.101380    4712 main.go:141] libmachine: Decoding PEM data...
	I0501 02:54:46.101380    4712 main.go:141] libmachine: Parsing certificate...
	I0501 02:54:46.101380    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0501 02:54:48.122930    4712 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0501 02:54:48.122930    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:54:48.122930    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0501 02:54:49.970242    4712 main.go:141] libmachine: [stdout =====>] : False
	
	I0501 02:54:49.971128    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:54:49.971128    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0501 02:54:51.553112    4712 main.go:141] libmachine: [stdout =====>] : True
	
	I0501 02:54:51.553112    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:54:51.553966    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0501 02:54:55.355693    4712 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0501 02:54:55.355693    4712 main.go:141] libmachine: [stderr =====>] : 
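The driver shells out to PowerShell and parses the ConvertTo-Json output to choose a switch: an External switch if one exists, otherwise the well-known "Default Switch" GUID seen above. A sketch of that selection in Go (requires Hyper-V and an elevated shell; the struct and loop are mine, not minikube's exact code):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    type vmSwitch struct {
    	Id         string
    	Name       string
    	SwitchType int // Hyper-V enum: 0=Private, 1=Internal, 2=External
    }

    func main() {
    	// Same query the log shows; @() forces a JSON array even for one switch.
    	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive",
    		`ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType)`).Output()
    	if err != nil {
    		panic(err)
    	}
    	var switches []vmSwitch
    	if err := json.Unmarshal(out, &switches); err != nil {
    		panic(err)
    	}
    	for _, s := range switches {
    		if s.SwitchType == 2 || s.Id == "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444" {
    			fmt.Println("using switch:", s.Name)
    			return
    		}
    	}
    	fmt.Println("no usable switch found")
    }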
	I0501 02:54:55.358013    4712 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso...
	I0501 02:54:55.879042    4712 main.go:141] libmachine: Creating SSH key...
	I0501 02:54:55.991258    4712 main.go:141] libmachine: Creating VM...
	I0501 02:54:55.991258    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0501 02:54:58.933270    4712 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0501 02:54:58.933270    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:54:58.933270    4712 main.go:141] libmachine: Using switch "Default Switch"
	I0501 02:54:58.933728    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0501 02:55:00.789675    4712 main.go:141] libmachine: [stdout =====>] : True
	
	I0501 02:55:00.789938    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:00.789938    4712 main.go:141] libmachine: Creating VHD
	I0501 02:55:00.789938    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0501 02:55:04.583967    4712 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : AAB86B48-3D75-4842-8FF8-3BDEC4AB86C2
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0501 02:55:04.584134    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:04.584192    4712 main.go:141] libmachine: Writing magic tar header
	I0501 02:55:04.584192    4712 main.go:141] libmachine: Writing SSH key tar header
	I0501 02:55:04.594277    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0501 02:55:07.812902    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:07.812902    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:07.812902    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\disk.vhd' -SizeBytes 20000MB
	I0501 02:55:10.391210    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:10.391245    4712 main.go:141] libmachine: [stderr =====>] : 
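The fixed.vhd / "magic tar header" / Convert-VHD sequence above is the docker-machine-style trick for injecting the SSH key: a fixed VHD is raw data plus a trailing footer, so a tar archive written at offset 0 survives the Dynamic conversion and is extracted by boot2docker on first boot, after which the disk is resized to its real 20000MB. A hedged sketch of the tar-writing step; the in-archive layout and file paths are assumptions based on the boot2docker convention, not taken from this log:

    package main

    import (
    	"archive/tar"
    	"os"
    )

    // writeKeyTar writes a tar archive containing the SSH public key at the
    // start of the fixed VHD's data region, mimicking the "magic tar header"
    // step in the log above.
    func writeKeyTar(vhdPath string, pubKey []byte) error {
    	f, err := os.OpenFile(vhdPath, os.O_WRONLY, 0)
    	if err != nil {
    		return err
    	}
    	defer f.Close()

    	tw := tar.NewWriter(f)
    	// Directory entry, then the key itself (assumed boot2docker layout).
    	if err := tw.WriteHeader(&tar.Header{Name: ".ssh/", Typeflag: tar.TypeDir, Mode: 0700}); err != nil {
    		return err
    	}
    	hdr := &tar.Header{Name: ".ssh/authorized_keys", Mode: 0644, Size: int64(len(pubKey))}
    	if err := tw.WriteHeader(hdr); err != nil {
    		return err
    	}
    	if _, err := tw.Write(pubKey); err != nil {
    		return err
    	}
    	return tw.Close()
    }

    func main() {
    	key, err := os.ReadFile(`id_rsa.pub`) // illustrative path
    	if err != nil {
    		panic(err)
    	}
    	if err := writeKeyTar(`fixed.vhd`, key); err != nil { // illustrative path
    		panic(err)
    	}
    }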
	I0501 02:55:10.391352    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-136200-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0501 02:55:14.151278    4712 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-136200-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0501 02:55:14.151278    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:14.151882    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-136200-m03 -DynamicMemoryEnabled $false
	I0501 02:55:16.476957    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:16.476957    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:16.478022    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-136200-m03 -Count 2
	I0501 02:55:18.717259    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:18.717259    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:18.717850    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-136200-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\boot2docker.iso'
	I0501 02:55:21.310252    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:21.310252    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:21.310252    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-136200-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\disk.vhd'
	I0501 02:55:24.025209    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:24.025209    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:24.025533    4712 main.go:141] libmachine: Starting VM...
	I0501 02:55:24.025533    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-136200-m03
	I0501 02:55:27.131510    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:27.131510    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:27.131722    4712 main.go:141] libmachine: Waiting for host to start...
	I0501 02:55:27.131722    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:55:29.452098    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:55:29.453035    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:29.453089    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:55:32.025441    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:32.026234    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:33.036612    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:55:35.273538    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:55:35.273538    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:35.273538    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:55:37.849230    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:37.849355    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:38.854379    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:55:41.083466    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:55:41.083466    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:41.083466    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:55:43.607622    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:43.607622    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:44.621333    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:55:46.858272    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:55:46.858272    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:46.858272    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:55:49.475402    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:49.476316    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:50.480573    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:55:52.723494    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:55:52.723494    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:52.724713    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:55:55.378897    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:55:55.378897    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:55.379189    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:55:57.536029    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:55:57.536029    4712 main.go:141] libmachine: [stderr =====>] : 
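"Waiting for host to start..." is a simple poll: query the VM state, then the first NIC's first IP address; an empty result means DHCP has not assigned one yet, so sleep and retry, which is why the same two PowerShell commands repeat above until 172.28.216.62 appears. A sketch of that loop (the one-second interval and 120-attempt cap are illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func ps(cmd string) (string, error) {
    	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", cmd).Output()
    	return strings.TrimSpace(string(out)), err
    }

    func main() {
    	const vm = "ha-136200-m03" // VM name from the log
    	for i := 0; i < 120; i++ {
    		state, err := ps(`( Hyper-V\Get-VM ` + vm + ` ).state`)
    		if err != nil || state != "Running" {
    			time.Sleep(time.Second)
    			continue
    		}
    		ip, _ := ps(`(( Hyper-V\Get-VM ` + vm + ` ).networkadapters[0]).ipaddresses[0]`)
    		if ip != "" {
    			fmt.Println("VM is up at", ip)
    			return
    		}
    		time.Sleep(time.Second) // no IP yet; DHCP still working
    	}
    	fmt.Println("timed out waiting for an IP")
    }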
	I0501 02:55:57.536246    4712 machine.go:94] provisionDockerMachine start ...
	I0501 02:55:57.536246    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:55:59.681292    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:55:59.681842    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:59.682021    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:02.296390    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:02.296390    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:02.302435    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:56:02.303031    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.216.62 22 <nil> <nil>}
	I0501 02:56:02.303031    4712 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 02:56:02.440858    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0501 02:56:02.440919    4712 buildroot.go:166] provisioning hostname "ha-136200-m03"
	I0501 02:56:02.440919    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:04.540210    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:04.540210    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:04.541126    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:07.111624    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:07.111624    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:07.118513    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:56:07.119097    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.216.62 22 <nil> <nil>}
	I0501 02:56:07.119097    4712 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-136200-m03 && echo "ha-136200-m03" | sudo tee /etc/hostname
	I0501 02:56:07.274395    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-136200-m03
	
	I0501 02:56:07.274395    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:09.427222    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:09.427413    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:09.427413    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:12.066151    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:12.066558    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:12.072701    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:56:12.073263    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.216.62 22 <nil> <nil>}
	I0501 02:56:12.073263    4712 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-136200-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-136200-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-136200-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 02:56:12.226572    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 02:56:12.226572    4712 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0501 02:56:12.226572    4712 buildroot.go:174] setting up certificates
	I0501 02:56:12.226572    4712 provision.go:84] configureAuth start
	I0501 02:56:12.226572    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:14.383697    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:14.383832    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:14.383916    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:17.017056    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:17.017236    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:17.017236    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:19.246383    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:19.247269    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:19.247269    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:21.887343    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:21.887343    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:21.887343    4712 provision.go:143] copyHostCerts
	I0501 02:56:21.887688    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0501 02:56:21.887688    4712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0501 02:56:21.887688    4712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0501 02:56:21.888470    4712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0501 02:56:21.889606    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0501 02:56:21.890069    4712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0501 02:56:21.890132    4712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0501 02:56:21.890553    4712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0501 02:56:21.891611    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0501 02:56:21.891800    4712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0501 02:56:21.891800    4712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0501 02:56:21.892337    4712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0501 02:56:21.893162    4712 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-136200-m03 san=[127.0.0.1 172.28.216.62 ha-136200-m03 localhost minikube]
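Generating server.pem means minting a key and certificate whose SAN list contains every name the Docker daemon will be reached by; the log line shows exactly that set (127.0.0.1, 172.28.216.62, ha-136200-m03, localhost, minikube). A compact crypto/x509 sketch; it self-signs for brevity, whereas the real flow signs with ca.pem/ca-key.pem, and the 26280h lifetime is taken from CertExpiration in the config above:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-136200-m03"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs matching the provision.go line: IPs and DNS names split
    		// into their respective fields.
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.28.216.62")},
    		DNSNames:    []string{"ha-136200-m03", "localhost", "minikube"},
    	}
    	// Self-signed for brevity; the real flow uses the machine CA as parent.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }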
	I0501 02:56:21.973101    4712 provision.go:177] copyRemoteCerts
	I0501 02:56:21.993116    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 02:56:21.993116    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:24.169668    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:24.169668    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:24.170031    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:26.830749    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:26.831099    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:26.831162    4712 sshutil.go:53] new ssh client: &{IP:172.28.216.62 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\id_rsa Username:docker}
	I0501 02:56:26.935784    4712 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9426327s)
	I0501 02:56:26.935784    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0501 02:56:26.936266    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 02:56:26.985792    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0501 02:56:26.986191    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0501 02:56:27.035460    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0501 02:56:27.036450    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0501 02:56:27.092775    4712 provision.go:87] duration metric: took 14.8660953s to configureAuth
	I0501 02:56:27.092775    4712 buildroot.go:189] setting minikube options for container-runtime
	I0501 02:56:27.093873    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:56:27.094011    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:29.214442    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:29.214910    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:29.214910    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:31.848020    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:31.848124    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:31.859047    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:56:31.859047    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.216.62 22 <nil> <nil>}
	I0501 02:56:31.859047    4712 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0501 02:56:31.983811    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0501 02:56:31.983936    4712 buildroot.go:70] root file system type: tmpfs
	I0501 02:56:31.984160    4712 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0501 02:56:31.984160    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:34.146679    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:34.146679    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:34.146837    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:36.793925    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:36.794747    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:36.801153    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:56:36.801782    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.216.62 22 <nil> <nil>}
	I0501 02:56:36.801782    4712 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.28.217.218"
	Environment="NO_PROXY=172.28.217.218,172.28.213.142"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0501 02:56:36.960579    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.28.217.218
	Environment=NO_PROXY=172.28.217.218,172.28.213.142
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
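The unit just written relies on the systemd ExecStart reset semantics that its own comments describe: for any service type other than oneshot, a second ExecStart= line is only legal if an empty ExecStart= first clears the command inherited from the base unit. The same pattern in a standalone drop-in override (hypothetical paths and a trimmed dockerd command line, not the exact unit minikube renders):

    sudo mkdir -p /etc/systemd/system/docker.service.d
    sudo tee /etc/systemd/system/docker.service.d/10-execstart.conf <<'EOF'
    [Service]
    # Clear the ExecStart inherited from the base unit, then set the new one.
    ExecStart=
    ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart docker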
	
	I0501 02:56:36.960579    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:39.141157    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:39.141157    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:39.141800    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:41.765565    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:41.766216    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:41.774239    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:56:41.774411    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.216.62 22 <nil> <nil>}
	I0501 02:56:41.774411    4712 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0501 02:56:43.994230    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
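The command above is the idempotent install step: diff -u exits 0 when the rendered unit matches the installed one, so the || block (move into place, daemon-reload, enable, restart) runs only when the content actually changed. On this fresh node the diff fails because /lib/systemd/system/docker.service does not exist yet, hence the move and the "Created symlink" line from systemctl enable. The pattern in isolation (hypothetical service and file names):

    new=/tmp/myapp.service.new
    cur=/lib/systemd/system/myapp.service
    sudo diff -u "$cur" "$new" || {
        sudo mv "$new" "$cur"
        sudo systemctl daemon-reload
        sudo systemctl enable myapp
        sudo systemctl restart myapp
    }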
	
	I0501 02:56:43.994313    4712 machine.go:97] duration metric: took 46.4577313s to provisionDockerMachine
	I0501 02:56:43.994313    4712 client.go:171] duration metric: took 1m57.8932783s to LocalClient.Create
	I0501 02:56:43.994313    4712 start.go:167] duration metric: took 1m57.8932783s to libmachine.API.Create "ha-136200"
	I0501 02:56:43.994428    4712 start.go:293] postStartSetup for "ha-136200-m03" (driver="hyperv")
	I0501 02:56:43.994473    4712 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 02:56:44.010383    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 02:56:44.010383    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:46.225048    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:46.225772    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:46.225844    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:48.918999    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:48.918999    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:48.919679    4712 sshutil.go:53] new ssh client: &{IP:172.28.216.62 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\id_rsa Username:docker}
	I0501 02:56:49.032380    4712 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0219067s)
	I0501 02:56:49.045700    4712 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 02:56:49.054180    4712 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 02:56:49.054180    4712 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0501 02:56:49.054700    4712 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0501 02:56:49.055002    4712 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> 142882.pem in /etc/ssl/certs
	I0501 02:56:49.055574    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /etc/ssl/certs/142882.pem
	I0501 02:56:49.071048    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 02:56:49.092423    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /etc/ssl/certs/142882.pem (1708 bytes)
	I0501 02:56:49.143151    4712 start.go:296] duration metric: took 5.1486851s for postStartSetup
	I0501 02:56:49.146034    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:51.349851    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:51.350067    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:51.350153    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:54.016657    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:54.017149    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:54.017380    4712 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json ...
	I0501 02:56:54.019460    4712 start.go:128] duration metric: took 2m7.9267809s to createHost
	I0501 02:56:54.019460    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:56.211168    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:56.211168    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:56.211168    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:58.811673    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:58.811673    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:58.818618    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:56:58.819348    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.216.62 22 <nil> <nil>}
	I0501 02:56:58.819348    4712 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0501 02:56:58.949732    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714532218.937413126
	
	I0501 02:56:58.949732    4712 fix.go:216] guest clock: 1714532218.937413126
	I0501 02:56:58.949732    4712 fix.go:229] Guest: 2024-05-01 02:56:58.937413126 +0000 UTC Remote: 2024-05-01 02:56:54.0194605 +0000 UTC m=+574.897601601 (delta=4.917952626s)
	I0501 02:56:58.949941    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:57:01.095786    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:57:01.095786    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:01.096436    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:57:03.649884    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:57:03.649884    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:03.657161    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:57:03.657803    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.216.62 22 <nil> <nil>}
	I0501 02:57:03.657803    4712 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714532218
	I0501 02:57:03.807080    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed May  1 02:56:58 UTC 2024
	
	I0501 02:57:03.807174    4712 fix.go:236] clock set: Wed May  1 02:56:58 UTC 2024
	 (err=<nil>)
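This block is the guest-clock fix: minikube reads the VM clock with date +%s.%N, compares it to its own reference timestamp (the guest here ran about 4.9 s ahead, per the delta above), and resets the guest with sudo date -s @<epoch>. A minimal sketch of the same check over SSH, assuming password-less sudo on the guest and a hypothetical 2-second tolerance:

    ref=$(date +%s)                                 # reference (host) clock
    guest=$(ssh docker@172.28.216.62 date +%s)      # guest clock
    drift=$(( guest > ref ? guest - ref : ref - guest ))
    if [ "$drift" -gt 2 ]; then
        ssh docker@172.28.216.62 sudo date -s "@$ref"
    fi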
	I0501 02:57:03.807174    4712 start.go:83] releasing machines lock for "ha-136200-m03", held for 2m17.7144231s
	I0501 02:57:03.807438    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:57:05.979339    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:57:05.979339    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:05.979339    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:57:08.602379    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:57:08.602379    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:08.605250    4712 out.go:177] * Found network options:
	I0501 02:57:08.607292    4712 out.go:177]   - NO_PROXY=172.28.217.218,172.28.213.142
	W0501 02:57:08.610080    4712 proxy.go:119] fail to check proxy env: Error ip not in block
	W0501 02:57:08.610080    4712 proxy.go:119] fail to check proxy env: Error ip not in block
	I0501 02:57:08.612307    4712 out.go:177]   - NO_PROXY=172.28.217.218,172.28.213.142
	W0501 02:57:08.614962    4712 proxy.go:119] fail to check proxy env: Error ip not in block
	W0501 02:57:08.614962    4712 proxy.go:119] fail to check proxy env: Error ip not in block
	W0501 02:57:08.616207    4712 proxy.go:119] fail to check proxy env: Error ip not in block
	W0501 02:57:08.616207    4712 proxy.go:119] fail to check proxy env: Error ip not in block
	I0501 02:57:08.619160    4712 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 02:57:08.619160    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:57:08.631565    4712 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0501 02:57:08.631565    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:57:10.838360    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:57:10.838735    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:10.838874    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:57:10.838874    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:57:10.838934    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:10.838934    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:57:13.624235    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:57:13.624235    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:13.624235    4712 sshutil.go:53] new ssh client: &{IP:172.28.216.62 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\id_rsa Username:docker}
	I0501 02:57:13.648439    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:57:13.648490    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:13.648768    4712 sshutil.go:53] new ssh client: &{IP:172.28.216.62 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\id_rsa Username:docker}
	I0501 02:57:13.732596    4712 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.1009937s)
	W0501 02:57:13.732596    4712 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 02:57:13.748662    4712 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 02:57:13.811529    4712 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
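The find pass above neutralizes any bridge or podman CNI configs baked into the image by renaming them with a .mk_disabled suffix, so only the CNI minikube installs later stays active; -printf "%p, " just records which files were moved (here 87-podman-bridge.conflist). Equivalent shell, with the globs quoted since this version does run through a shell:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
        \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
        -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;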
	I0501 02:57:13.811529    4712 start.go:494] detecting cgroup driver to use...
	I0501 02:57:13.811529    4712 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1923313s)
	I0501 02:57:13.812665    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 02:57:13.867675    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0501 02:57:13.906069    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0501 02:57:13.929632    4712 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0501 02:57:13.947027    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0501 02:57:13.986248    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 02:57:14.024920    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0501 02:57:14.061978    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 02:57:14.099821    4712 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 02:57:14.138543    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0501 02:57:14.181270    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0501 02:57:14.217808    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0501 02:57:14.261794    4712 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 02:57:14.297051    4712 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 02:57:14.332222    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:57:14.558529    4712 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0501 02:57:14.595594    4712 start.go:494] detecting cgroup driver to use...
	I0501 02:57:14.610122    4712 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0501 02:57:14.650440    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 02:57:14.689246    4712 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 02:57:14.740013    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 02:57:14.780524    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 02:57:14.822987    4712 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0501 02:57:14.889904    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 02:57:14.919061    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 02:57:14.983590    4712 ssh_runner.go:195] Run: which cri-dockerd
	I0501 02:57:15.008856    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0501 02:57:15.032703    4712 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0501 02:57:15.086991    4712 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0501 02:57:15.324922    4712 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0501 02:57:15.542551    4712 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0501 02:57:15.542551    4712 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0501 02:57:15.594658    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:57:15.826063    4712 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0501 02:57:18.399291    4712 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5732092s)
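The 130-byte /etc/docker/daemon.json pushed before this restart switches Docker to the cgroupfs cgroup driver so it matches the kubelet on this node; mismatched cgroup drivers are a classic cause of kubelet start failures. The payload is not echoed in the log, but a daemon.json selecting cgroupfs typically looks like this (a sketch, not the verbatim file):

    sudo tee /etc/docker/daemon.json <<'EOF'
    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"]
    }
    EOF
    sudo systemctl restart docker
    docker info --format '{{.CgroupDriver}}'   # expect: cgroupfs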
	I0501 02:57:18.412657    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0501 02:57:18.452282    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0501 02:57:18.491033    4712 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0501 02:57:18.702768    4712 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0501 02:57:18.928695    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:57:19.145438    4712 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0501 02:57:19.199070    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0501 02:57:19.242280    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:57:19.475811    4712 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0501 02:57:19.598548    4712 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0501 02:57:19.612590    4712 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0501 02:57:19.624279    4712 start.go:562] Will wait 60s for crictl version
	I0501 02:57:19.637235    4712 ssh_runner.go:195] Run: which crictl
	I0501 02:57:19.657683    4712 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 02:57:19.721351    4712 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0501 02:57:19.734095    4712 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0501 02:57:19.784976    4712 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0501 02:57:19.822576    4712 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0501 02:57:19.826041    4712 out.go:177]   - env NO_PROXY=172.28.217.218
	I0501 02:57:19.827741    4712 out.go:177]   - env NO_PROXY=172.28.217.218,172.28.213.142
	I0501 02:57:19.831635    4712 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0501 02:57:19.835639    4712 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0501 02:57:19.835639    4712 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0501 02:57:19.835639    4712 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0501 02:57:19.835639    4712 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:88:d7:f1 Flags:up|broadcast|multicast|running}
	I0501 02:57:19.838638    4712 ip.go:210] interface addr: fe80::916c:67e8:6e10:a19b/64
	I0501 02:57:19.838638    4712 ip.go:210] interface addr: 172.28.208.1/20
	I0501 02:57:19.851676    4712 ssh_runner.go:195] Run: grep 172.28.208.1	host.minikube.internal$ /etc/hosts
	I0501 02:57:19.858242    4712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
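The one-liner above is the idempotent /etc/hosts editor: grep -v strips any existing host.minikube.internal entry, the fresh mapping is appended, and the result is staged in /tmp before sudo cp replaces the file, so a failed pipeline never truncates /etc/hosts. The same pattern for an arbitrary entry (hypothetical name and address; the name is used as a regex, which is fine for a sketch):

    name=myhost.internal; addr=10.0.0.5
    { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$addr" "$name"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts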
	I0501 02:57:19.883254    4712 mustload.go:65] Loading cluster: ha-136200
	I0501 02:57:19.883656    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:57:19.884140    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:57:22.018331    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:57:22.018592    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:22.018658    4712 host.go:66] Checking if "ha-136200" exists ...
	I0501 02:57:22.019393    4712 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200 for IP: 172.28.216.62
	I0501 02:57:22.019393    4712 certs.go:194] generating shared ca certs ...
	I0501 02:57:22.019393    4712 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:57:22.020318    4712 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0501 02:57:22.020786    4712 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0501 02:57:22.021028    4712 certs.go:256] generating profile certs ...
	I0501 02:57:22.021028    4712 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\client.key
	I0501 02:57:22.021606    4712 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.cbcfb2e9
	I0501 02:57:22.021767    4712 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.cbcfb2e9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.217.218 172.28.213.142 172.28.216.62 172.28.223.254]
	I0501 02:57:22.149544    4712 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.cbcfb2e9 ...
	I0501 02:57:22.149544    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.cbcfb2e9: {Name:mk4837fbdb29e34df2c44991c600cda784a93d5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:57:22.150373    4712 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.cbcfb2e9 ...
	I0501 02:57:22.150373    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.cbcfb2e9: {Name:mkcff5432d26e17c25cf2a9709eb4553a31e99c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:57:22.152472    4712 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.cbcfb2e9 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt
	I0501 02:57:22.165924    4712 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.cbcfb2e9 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key
	I0501 02:57:22.166444    4712 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key
	I0501 02:57:22.166444    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0501 02:57:22.167623    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0501 02:57:22.167772    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0501 02:57:22.167772    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0501 02:57:22.168122    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0501 02:57:22.168280    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0501 02:57:22.168464    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0501 02:57:22.168464    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0501 02:57:22.169490    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem (1338 bytes)
	W0501 02:57:22.169490    4712 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288_empty.pem, impossibly tiny 0 bytes
	I0501 02:57:22.170595    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0501 02:57:22.170869    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0501 02:57:22.171164    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0501 02:57:22.171434    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0501 02:57:22.171670    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem (1708 bytes)
	I0501 02:57:22.172286    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /usr/share/ca-certificates/142882.pem
	I0501 02:57:22.172286    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:57:22.172286    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem -> /usr/share/ca-certificates/14288.pem
	I0501 02:57:22.172911    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:57:24.374168    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:57:24.374168    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:24.374904    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:57:26.980450    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:57:26.980450    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:26.980450    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:57:27.093857    4712 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0501 02:57:27.102183    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0501 02:57:27.141690    4712 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0501 02:57:27.150194    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0501 02:57:27.193806    4712 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0501 02:57:27.202957    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0501 02:57:27.254044    4712 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0501 02:57:27.262605    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0501 02:57:27.303214    4712 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0501 02:57:27.310453    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0501 02:57:27.348966    4712 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0501 02:57:27.356382    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0501 02:57:27.383468    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 02:57:27.437872    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 02:57:27.494095    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 02:57:27.544977    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0501 02:57:27.599083    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0501 02:57:27.652123    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0501 02:57:27.710792    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 02:57:27.766379    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 02:57:27.817284    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /usr/share/ca-certificates/142882.pem (1708 bytes)
	I0501 02:57:27.867949    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 02:57:27.930560    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem --> /usr/share/ca-certificates/14288.pem (1338 bytes)
	I0501 02:57:27.987875    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0501 02:57:28.025174    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0501 02:57:28.061492    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0501 02:57:28.099323    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0501 02:57:28.133169    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0501 02:57:28.168585    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0501 02:57:28.223450    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0501 02:57:28.292690    4712 ssh_runner.go:195] Run: openssl version
	I0501 02:57:28.315882    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142882.pem && ln -fs /usr/share/ca-certificates/142882.pem /etc/ssl/certs/142882.pem"
	I0501 02:57:28.353000    4712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142882.pem
	I0501 02:57:28.365096    4712 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:27 /usr/share/ca-certificates/142882.pem
	I0501 02:57:28.379858    4712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142882.pem
	I0501 02:57:28.406814    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142882.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 02:57:28.445706    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 02:57:28.482484    4712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:57:28.491120    4712 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:12 /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:57:28.507367    4712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:57:28.535421    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 02:57:28.574647    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14288.pem && ln -fs /usr/share/ca-certificates/14288.pem /etc/ssl/certs/14288.pem"
	I0501 02:57:28.616757    4712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14288.pem
	I0501 02:57:28.624484    4712 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:27 /usr/share/ca-certificates/14288.pem
	I0501 02:57:28.642485    4712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14288.pem
	I0501 02:57:28.665148    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14288.pem /etc/ssl/certs/51391683.0"
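Each openssl x509 -hash -noout call above computes the certificate's subject hash, and the paired ln -fs creates the /etc/ssl/certs/<hash>.0 symlink that OpenSSL's directory-based CA lookup expects, which is where targets like 3ec20f2e.0 and b5213941.0 come from. Done by hand for a single certificate:

    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"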
	I0501 02:57:28.706630    4712 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 02:57:28.714508    4712 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0501 02:57:28.714998    4712 kubeadm.go:928] updating node {m03 172.28.216.62 8443 v1.30.0 docker true true} ...
	I0501 02:57:28.715189    4712 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-136200-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.216.62
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP:172.28.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 02:57:28.715218    4712 kube-vip.go:111] generating kube-vip config ...
	I0501 02:57:28.727524    4712 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0501 02:57:28.767475    4712 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0501 02:57:28.767631    4712 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.28.223.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
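The manifest above runs kube-vip as a static pod on every control-plane node: with cp_enable and vip_leaderelection set, the instances race for the plndr-cp-lock lease and the winner answers ARP for the virtual IP 172.28.223.254, while lb_enable/lb_port additionally spread API-server traffic on 8443 across the control planes. Once any control plane is up, both the lease and the VIP can be probed directly (-k skips TLS verification for a quick check):

    kubectl -n kube-system get lease plndr-cp-lock
    curl -ks https://172.28.223.254:8443/version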
	I0501 02:57:28.783398    4712 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 02:57:28.801741    4712 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0501 02:57:28.815792    4712 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0501 02:57:28.837983    4712 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0501 02:57:28.838101    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0501 02:57:28.837983    4712 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256
	I0501 02:57:28.838226    4712 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256
	I0501 02:57:28.838396    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0501 02:57:28.855124    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:57:28.856182    4712 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0501 02:57:28.858128    4712 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0501 02:57:28.881905    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0501 02:57:28.881905    4712 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0501 02:57:28.882027    4712 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0501 02:57:28.882165    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0501 02:57:28.882277    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0501 02:57:28.898781    4712 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0501 02:57:28.959439    4712 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0501 02:57:28.959688    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
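For each Kubernetes binary the provisioner runs a stat -c "%s %y" existence check on the guest and only copies the host-cached binary over when the check fails, which is exactly what the three "Process exited with status 1" results trigger here. The pattern in plain shell (hypothetical paths; permission handling via a staging directory and sudo mv is omitted for brevity):

    src=$HOME/.minikube/cache/linux/amd64/v1.30.0/kubelet
    dst=/var/lib/minikube/binaries/v1.30.0/kubelet
    if ! ssh docker@172.28.216.62 stat -c '%s %y' "$dst" >/dev/null 2>&1; then
        scp "$src" docker@172.28.216.62:/tmp/kubelet   # then sudo mv into place
    fi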
	I0501 02:57:30.251192    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0501 02:57:30.272192    4712 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0501 02:57:30.311119    4712 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 02:57:30.353248    4712 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0501 02:57:30.407414    4712 ssh_runner.go:195] Run: grep 172.28.223.254	control-plane.minikube.internal$ /etc/hosts
	I0501 02:57:30.415360    4712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.223.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 02:57:30.454450    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:57:30.696464    4712 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 02:57:30.737201    4712 host.go:66] Checking if "ha-136200" exists ...
	I0501 02:57:30.801844    4712 start.go:316] joinCluster: &{Name:ha-136200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Clust
erName:ha-136200 Namespace:default APIServerHAVIP:172.28.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.217.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.213.142 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.28.216.62 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-d
ns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:57:30.802126    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0501 02:57:30.802234    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:57:32.961923    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:57:32.961923    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:32.962279    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:57:35.600191    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:57:35.600191    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:35.601356    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:57:35.838006    4712 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0": (5.0358438s)
	I0501 02:57:35.838006    4712 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.28.216.62 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0501 02:57:35.838006    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 3455nt.3c342oggoxvi06jc --discovery-token-ca-cert-hash sha256:4798dcaffc6298c78af5ad06745de18c1231853c65c9db4cd09ab1be96f5e875 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-136200-m03 --control-plane --apiserver-advertise-address=172.28.216.62 --apiserver-bind-port=8443"
	I0501 02:58:21.819619    4712 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 3455nt.3c342oggoxvi06jc --discovery-token-ca-cert-hash sha256:4798dcaffc6298c78af5ad06745de18c1231853c65c9db4cd09ab1be96f5e875 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-136200-m03 --control-plane --apiserver-advertise-address=172.28.216.62 --apiserver-bind-port=8443": (45.981274s)
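The join is the standard flow for adding a control-plane member: the first control plane mints a token with kubeadm token create --print-join-command, and the new node runs the printed command plus --control-plane and an explicit advertise address. The shared CA material was already scp'd into /var/lib/minikube/certs earlier in this log, so no --certificate-key upload is needed, and the roughly 46 s this step takes is mostly the local etcd member joining and the static pods starting. Condensed, with the cluster-specific values as placeholders:

    # on an existing control-plane node
    kubeadm token create --print-join-command --ttl=0
    # on the joining node, using the printed token and hash
    sudo kubeadm join control-plane.minikube.internal:8443 \
        --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
        --control-plane --apiserver-advertise-address=172.28.216.62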
	I0501 02:58:21.819619    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0501 02:58:22.593318    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-136200-m03 minikube.k8s.io/updated_at=2024_05_01T02_58_22_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e minikube.k8s.io/name=ha-136200 minikube.k8s.io/primary=false
	I0501 02:58:22.788566    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-136200-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0501 02:58:22.987611    4712 start.go:318] duration metric: took 52.1853822s to joinCluster
	I0501 02:58:22.987895    4712 start.go:234] Will wait 6m0s for node &{Name:m03 IP:172.28.216.62 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0501 02:58:23.012496    4712 out.go:177] * Verifying Kubernetes components...
	I0501 02:58:22.988142    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:58:23.031751    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:58:23.569395    4712 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 02:58:23.619961    4712 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0501 02:58:23.620228    4712 kapi.go:59] client config for ha-136200: &rest.Config{Host:"https://172.28.223.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-136200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-136200\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1b95ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0501 02:58:23.620770    4712 kubeadm.go:477] Overriding stale ClientConfig host https://172.28.223.254:8443 with https://172.28.217.218:8443
	I0501 02:58:23.621670    4712 node_ready.go:35] waiting up to 6m0s for node "ha-136200-m03" to be "Ready" ...
	I0501 02:58:23.621910    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:23.621910    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:23.621982    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:23.621982    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:23.637444    4712 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
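Everything from here is a readiness poll: roughly every 500 ms minikube GETs /api/v1/nodes/ha-136200-m03 and inspects the Ready condition in the returned status, logging has status "Ready":"False" until the kubelet and CNI on m03 settle. The same wait expressed with kubectl:

    kubectl wait node/ha-136200-m03 --for=condition=Ready --timeout=6m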
	I0501 02:58:24.133658    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:24.133658    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:24.133658    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:24.133658    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:24.139465    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:24.622867    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:24.622867    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:24.622867    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:24.622867    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:24.629524    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:58:25.129429    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:25.129429    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:25.129510    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:25.129510    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:25.135754    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:58:25.633954    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:25.633954    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:25.633954    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:25.633954    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:25.638650    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:58:25.639656    4712 node_ready.go:53] node "ha-136200-m03" has status "Ready":"False"
	I0501 02:58:26.123894    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:26.123894    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:26.123894    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:26.123894    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:26.129103    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:26.629161    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:26.629161    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:26.629161    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:26.629161    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:26.648167    4712 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0501 02:58:27.136028    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:27.136028    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:27.136028    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:27.136028    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:27.326021    4712 round_trippers.go:574] Response Status: 200 OK in 189 milliseconds
	I0501 02:58:27.623480    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:27.623600    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:27.623600    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:27.623600    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:27.629035    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:28.136433    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:28.136433    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:28.136626    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:28.136626    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:28.203923    4712 round_trippers.go:574] Response Status: 200 OK in 67 milliseconds
	I0501 02:58:28.205553    4712 node_ready.go:53] node "ha-136200-m03" has status "Ready":"False"
	I0501 02:58:28.636021    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:28.636185    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:28.636185    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:28.636185    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:28.646735    4712 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0501 02:58:29.122451    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:29.122515    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:29.122515    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:29.122515    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:29.140826    4712 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0501 02:58:29.629756    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:29.629756    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:29.629756    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:29.629756    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:29.637588    4712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:58:30.132174    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:30.132269    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:30.132269    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:30.132269    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:30.136921    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:58:30.632939    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:30.633022    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:30.633022    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:30.633022    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:30.638815    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:30.640044    4712 node_ready.go:53] node "ha-136200-m03" has status "Ready":"False"
	I0501 02:58:31.133378    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:31.133378    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:31.133378    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:31.133378    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:31.138754    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:31.633444    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:31.633511    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:31.633511    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:31.633511    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:31.639686    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:58:32.131317    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:32.131317    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:32.131317    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:32.131317    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:32.136414    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:32.629649    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:32.629649    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:32.629649    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:32.629649    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:32.634980    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:33.129878    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:33.129878    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:33.129878    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:33.129878    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:33.155125    4712 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I0501 02:58:33.156557    4712 node_ready.go:53] node "ha-136200-m03" has status "Ready":"False"
	I0501 02:58:33.629865    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:33.630060    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:33.630060    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:33.630060    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:33.636368    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:58:34.128412    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:34.128412    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:34.128412    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:34.128412    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:34.133022    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:58:34.629333    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:34.629333    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:34.629333    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:34.629333    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:34.635358    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:58:35.129272    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:35.129376    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.129376    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.129376    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.136662    4712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:58:35.137446    4712 node_ready.go:49] node "ha-136200-m03" has status "Ready":"True"
	I0501 02:58:35.137492    4712 node_ready.go:38] duration metric: took 11.5157372s for node "ha-136200-m03" to be "Ready" ...
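The loop above — a GET of /api/v1/nodes/ha-136200-m03 roughly every 500ms until the node's Ready condition flips to True — is the standard wait-for-node pattern. A minimal client-go sketch of such a loop follows; waitForNodeReady is an illustrative helper, not minikube's actual node_ready.go code:

package example

import (
    "context"
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
)

// waitForNodeReady polls the node object until its Ready condition is True,
// mirroring the ~500ms GET loop in the log above.
func waitForNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
    return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
        func(ctx context.Context) (bool, error) {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return false, nil // treat transient API errors as "not ready yet"
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady {
                    fmt.Printf("node %q has status \"Ready\":%q\n", name, c.Status)
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
}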
	I0501 02:58:35.137492    4712 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 02:58:35.137635    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:58:35.137635    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.137635    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.137635    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.149133    4712 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0501 02:58:35.158917    4712 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2j8mj" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.159445    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2j8mj
	I0501 02:58:35.159565    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.159565    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.159651    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.170650    4712 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0501 02:58:35.172026    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:35.172026    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.172026    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.172026    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.180770    4712 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0501 02:58:35.180770    4712 pod_ready.go:92] pod "coredns-7db6d8ff4d-2j8mj" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:35.180770    4712 pod_ready.go:81] duration metric: took 21.3241ms for pod "coredns-7db6d8ff4d-2j8mj" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.180770    4712 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rm4gm" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.180770    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rm4gm
	I0501 02:58:35.180770    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.180770    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.180770    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.185805    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:35.187056    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:35.187056    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.187056    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.187056    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.191361    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:58:35.192405    4712 pod_ready.go:92] pod "coredns-7db6d8ff4d-rm4gm" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:35.192405    4712 pod_ready.go:81] duration metric: took 11.6358ms for pod "coredns-7db6d8ff4d-rm4gm" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.192405    4712 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.192405    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200
	I0501 02:58:35.192405    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.192405    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.192405    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.196117    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:58:35.197312    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:35.197312    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.197389    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.197389    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.201195    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:58:35.201924    4712 pod_ready.go:92] pod "etcd-ha-136200" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:35.201924    4712 pod_ready.go:81] duration metric: took 9.5188ms for pod "etcd-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.201924    4712 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.202054    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:58:35.202195    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.202195    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.202195    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.208450    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:58:35.209323    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:58:35.209323    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.209323    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.209323    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.212935    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:58:35.214190    4712 pod_ready.go:92] pod "etcd-ha-136200-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:35.214190    4712 pod_ready.go:81] duration metric: took 12.2652ms for pod "etcd-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.214190    4712 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-136200-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.330301    4712 request.go:629] Waited for 115.8713ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m03
	I0501 02:58:35.330574    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m03
	I0501 02:58:35.330574    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.330574    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.330574    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.338021    4712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:58:35.534070    4712 request.go:629] Waited for 194.5208ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:35.534353    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:35.534353    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.534353    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.534353    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.540932    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:58:35.541927    4712 pod_ready.go:92] pod "etcd-ha-136200-m03" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:35.541927    4712 pod_ready.go:81] duration metric: took 327.673ms for pod "etcd-ha-136200-m03" in "kube-system" namespace to be "Ready" ...
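The request.go:629 "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's token-bucket rate limiter: with the default client QPS of 5, requests get spaced about 200ms apart once the burst is spent, which matches the waits logged above. A sketch of tuning that limiter on a rest.Config (the values here are illustrative, not what minikube ships):

package example

import (
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

// newFastClient builds a clientset with a larger token bucket so bursts of
// GETs (like the per-pod checks above) hit the client-side limiter less often,
// at the cost of more load on the apiserver.
func newFastClient(kubeconfig string) (kubernetes.Interface, error) {
    cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    if err != nil {
        return nil, err
    }
    cfg.QPS = 50    // default is 5 -> one request per ~200ms after the burst
    cfg.Burst = 100 // default is 10
    return kubernetes.NewForConfig(cfg)
}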
	I0501 02:58:35.541927    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.737879    4712 request.go:629] Waited for 195.951ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-136200
	I0501 02:58:35.738683    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-136200
	I0501 02:58:35.738683    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.738683    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.738683    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.743861    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:58:35.940254    4712 request.go:629] Waited for 195.0246ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:35.940254    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:35.940254    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.940254    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.940254    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.943091    4712 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:58:35.949355    4712 pod_ready.go:92] pod "kube-apiserver-ha-136200" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:35.949355    4712 pod_ready.go:81] duration metric: took 407.425ms for pod "kube-apiserver-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.949355    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:36.143537    4712 request.go:629] Waited for 193.9374ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-136200-m02
	I0501 02:58:36.143801    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-136200-m02
	I0501 02:58:36.143835    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:36.143835    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:36.143835    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:36.149992    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:36.331653    4712 request.go:629] Waited for 180.2785ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:58:36.331653    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:58:36.331653    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:36.331653    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:36.331653    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:36.337290    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:36.338458    4712 pod_ready.go:92] pod "kube-apiserver-ha-136200-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:36.338521    4712 pod_ready.go:81] duration metric: took 389.1629ms for pod "kube-apiserver-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:36.338521    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-136200-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:36.533514    4712 request.go:629] Waited for 194.8709ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-136200-m03
	I0501 02:58:36.533967    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-136200-m03
	I0501 02:58:36.534181    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:36.534181    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:36.534181    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:36.548236    4712 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0501 02:58:36.737561    4712 request.go:629] Waited for 188.1304ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:36.737864    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:36.737942    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:36.737942    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:36.738002    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:36.742410    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:58:36.743400    4712 pod_ready.go:92] pod "kube-apiserver-ha-136200-m03" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:36.743400    4712 pod_ready.go:81] duration metric: took 404.8131ms for pod "kube-apiserver-ha-136200-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:36.743400    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:36.942223    4712 request.go:629] Waited for 198.605ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-136200
	I0501 02:58:36.942445    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-136200
	I0501 02:58:36.942445    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:36.942445    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:36.942445    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:36.947749    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:37.131974    4712 request.go:629] Waited for 183.3149ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:37.132232    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:37.132323    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:37.132323    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:37.132323    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:37.137476    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:37.138446    4712 pod_ready.go:92] pod "kube-controller-manager-ha-136200" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:37.138446    4712 pod_ready.go:81] duration metric: took 395.044ms for pod "kube-controller-manager-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:37.138446    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:37.333778    4712 request.go:629] Waited for 195.2258ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-136200-m02
	I0501 02:58:37.334044    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-136200-m02
	I0501 02:58:37.334044    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:37.334044    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:37.334044    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:37.338704    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:58:37.538179    4712 request.go:629] Waited for 197.0874ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:58:37.538437    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:58:37.538500    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:37.538500    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:37.538500    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:37.544773    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:58:37.544773    4712 pod_ready.go:92] pod "kube-controller-manager-ha-136200-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:37.544773    4712 pod_ready.go:81] duration metric: took 406.3235ms for pod "kube-controller-manager-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:37.544773    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-136200-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:37.743876    4712 request.go:629] Waited for 199.1018ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-136200-m03
	I0501 02:58:37.744106    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-136200-m03
	I0501 02:58:37.744106    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:37.744106    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:37.744106    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:37.749628    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:37.931954    4712 request.go:629] Waited for 180.0772ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:37.932054    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:37.932132    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:37.932132    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:37.932132    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:37.937302    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:37.937875    4712 pod_ready.go:92] pod "kube-controller-manager-ha-136200-m03" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:37.937875    4712 pod_ready.go:81] duration metric: took 393.0991ms for pod "kube-controller-manager-ha-136200-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:37.937875    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8f67k" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:38.134928    4712 request.go:629] Waited for 196.7268ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8f67k
	I0501 02:58:38.134928    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8f67k
	I0501 02:58:38.135164    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:38.135164    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:38.135164    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:38.151320    4712 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0501 02:58:38.340422    4712 request.go:629] Waited for 186.7144ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:38.340523    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:38.340523    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:38.340523    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:38.340523    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:38.344835    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:58:38.346933    4712 pod_ready.go:92] pod "kube-proxy-8f67k" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:38.347124    4712 pod_ready.go:81] duration metric: took 409.2461ms for pod "kube-proxy-8f67k" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:38.347124    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9ml9x" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:38.529397    4712 request.go:629] Waited for 182.0139ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9ml9x
	I0501 02:58:38.529683    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9ml9x
	I0501 02:58:38.529776    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:38.529776    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:38.529776    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:38.535530    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:38.733704    4712 request.go:629] Waited for 197.3369ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:38.733854    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:38.733854    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:38.733854    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:38.733854    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:38.739456    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:38.741035    4712 pod_ready.go:92] pod "kube-proxy-9ml9x" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:38.741035    4712 pod_ready.go:81] duration metric: took 393.9082ms for pod "kube-proxy-9ml9x" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:38.741141    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zj5jv" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:38.936294    4712 request.go:629] Waited for 194.9804ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zj5jv
	I0501 02:58:38.936492    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zj5jv
	I0501 02:58:38.936492    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:38.936492    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:38.936492    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:38.941904    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:39.139076    4712 request.go:629] Waited for 195.5675ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:58:39.139516    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:58:39.139516    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:39.139516    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:39.139590    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:39.146156    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:58:39.146839    4712 pod_ready.go:92] pod "kube-proxy-zj5jv" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:39.147389    4712 pod_ready.go:81] duration metric: took 406.2452ms for pod "kube-proxy-zj5jv" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:39.147389    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:39.331771    4712 request.go:629] Waited for 183.3466ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200
	I0501 02:58:39.331839    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200
	I0501 02:58:39.331839    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:39.331839    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:39.331839    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:39.338962    4712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:58:39.529638    4712 request.go:629] Waited for 189.8551ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:39.529880    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:39.529880    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:39.529880    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:39.529880    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:39.535423    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:39.536281    4712 pod_ready.go:92] pod "kube-scheduler-ha-136200" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:39.536496    4712 pod_ready.go:81] duration metric: took 389.1041ms for pod "kube-scheduler-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:39.536496    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:39.733532    4712 request.go:629] Waited for 196.8225ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200-m02
	I0501 02:58:39.733532    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200-m02
	I0501 02:58:39.733755    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:39.733755    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:39.733755    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:39.738768    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:39.936556    4712 request.go:629] Waited for 196.8464ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:58:39.936957    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:58:39.936957    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:39.936957    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:39.937066    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:39.942275    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:39.942447    4712 pod_ready.go:92] pod "kube-scheduler-ha-136200-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:39.943009    4712 pod_ready.go:81] duration metric: took 406.5101ms for pod "kube-scheduler-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:39.943009    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-136200-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:40.137743    4712 request.go:629] Waited for 194.2926ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200-m03
	I0501 02:58:40.137974    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200-m03
	I0501 02:58:40.137974    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:40.138045    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:40.138045    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:40.143795    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:40.340161    4712 request.go:629] Waited for 194.6485ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:40.340307    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:40.340307    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:40.340368    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:40.340368    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:40.346127    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:40.347243    4712 pod_ready.go:92] pod "kube-scheduler-ha-136200-m03" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:40.347243    4712 pod_ready.go:81] duration metric: took 404.2307ms for pod "kube-scheduler-ha-136200-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:40.347243    4712 pod_ready.go:38] duration metric: took 5.2097122s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
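Each pod_ready wait above boils down to inspecting the pod's Ready condition after fetching it (and its node) from the apiserver. A minimal sketch of that predicate — the helper name is illustrative, not minikube's pod_ready.go API:

package example

import (
    corev1 "k8s.io/api/core/v1"
)

// podIsReady reports whether a pod is Running with a True Ready condition,
// the check behind each per-pod wait logged above.
func podIsReady(pod *corev1.Pod) bool {
    if pod.Status.Phase != corev1.PodRunning {
        return false
    }
    for _, c := range pod.Status.Conditions {
        if c.Type == corev1.PodReady {
            return c.Status == corev1.ConditionTrue
        }
    }
    return false
}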
	I0501 02:58:40.347243    4712 api_server.go:52] waiting for apiserver process to appear ...
	I0501 02:58:40.361809    4712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:58:40.399669    4712 api_server.go:72] duration metric: took 17.4115847s to wait for apiserver process to appear ...
	I0501 02:58:40.399766    4712 api_server.go:88] waiting for apiserver healthz status ...
	I0501 02:58:40.399822    4712 api_server.go:253] Checking apiserver healthz at https://172.28.217.218:8443/healthz ...
	I0501 02:58:40.410080    4712 api_server.go:279] https://172.28.217.218:8443/healthz returned 200:
	ok
	I0501 02:58:40.410375    4712 round_trippers.go:463] GET https://172.28.217.218:8443/version
	I0501 02:58:40.410503    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:40.410503    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:40.410503    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:40.412638    4712 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:58:40.413725    4712 api_server.go:141] control plane version: v1.30.0
	I0501 02:58:40.413725    4712 api_server.go:131] duration metric: took 13.9591ms to wait for apiserver health ...
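The health probe above expects the literal body "ok" from GET /healthz, then reads /version for the control-plane version. A sketch of the same probe through a clientset's REST client, which reuses the cluster's TLS configuration (api_server.go's real implementation may differ):

package example

import (
    "context"
    "fmt"

    "k8s.io/client-go/kubernetes"
)

// apiServerHealthy issues GET /healthz through the clientset's REST client
// and requires the literal body "ok", as logged above.
func apiServerHealthy(ctx context.Context, cs kubernetes.Interface) error {
    body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
    if err != nil {
        return err
    }
    if string(body) != "ok" {
        return fmt.Errorf("unexpected /healthz body: %q", body)
    }
    return nil
}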
	I0501 02:58:40.413725    4712 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 02:58:40.543767    4712 request.go:629] Waited for 129.9651ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:58:40.543975    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:58:40.543975    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:40.543975    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:40.543975    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:40.554206    4712 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0501 02:58:40.565423    4712 system_pods.go:59] 24 kube-system pods found
	I0501 02:58:40.565423    4712 system_pods.go:61] "coredns-7db6d8ff4d-2j8mj" [f945c979-ae51-4c8e-acf9-105adc3c83bc] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "coredns-7db6d8ff4d-rm4gm" [87b284b3-e8e1-452a-8c72-41a8bec62505] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "etcd-ha-136200" [509a726d-e9a1-4922-8e7e-f3d91ddef75f] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "etcd-ha-136200-m02" [8122eb28-1fdf-4ddf-ab30-c29e8bcb83c0] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "etcd-ha-136200-m03" [5f77fdbc-d14d-4d42-9880-fc7e5b2c58b8] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kindnet-kb2x4" [6e660648-3dce-469f-a2c2-c99f445ceb20] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kindnet-rlfkk" [ae08f4b9-98a8-4faf-ab4a-f04e900375bf] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kindnet-sj2rc" [c0e605a0-1182-4977-a8ba-fabe9617bd3c] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kube-apiserver-ha-136200" [53ea7d41-7132-4c89-9dbd-bedb2267b55f] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kube-apiserver-ha-136200-m02" [fc4036e1-5cc9-4f27-8299-97ee4a29e8b4] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kube-apiserver-ha-136200-m03" [cf2822d7-29da-4727-b4c1-19b593abbce8] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kube-controller-manager-ha-136200" [4c988ab2-e056-4a0e-88c9-b62839c84d9f] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kube-controller-manager-ha-136200-m02" [7a617a7e-7413-4f42-bfe2-763b7ace71ca] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kube-controller-manager-ha-136200-m03" [f72989a2-322b-4b6d-884f-8888b7fb6e36] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kube-proxy-8f67k" [9dedea03-3066-4852-98e2-10190699b2c5] Running
	I0501 02:58:40.566039    4712 system_pods.go:61] "kube-proxy-9ml9x" [c36f2b4f-ad90-4070-adf1-1ac165f86fdd] Running
	I0501 02:58:40.566039    4712 system_pods.go:61] "kube-proxy-zj5jv" [1802b341-6ac6-46b0-99a3-db02ae5d8e46] Running
	I0501 02:58:40.566039    4712 system_pods.go:61] "kube-scheduler-ha-136200" [6be37365-544a-4367-9852-6eaa5b60e6ad] Running
	I0501 02:58:40.566039    4712 system_pods.go:61] "kube-scheduler-ha-136200-m02" [b2ae6bb2-989b-4598-99e3-f8494b006f3e] Running
	I0501 02:58:40.566039    4712 system_pods.go:61] "kube-scheduler-ha-136200-m03" [79e48699-dd30-47da-8e29-685b9fb437b8] Running
	I0501 02:58:40.566039    4712 system_pods.go:61] "kube-vip-ha-136200" [f6f631ac-0ba9-413a-8810-8a80e4be81b8] Running
	I0501 02:58:40.566039    4712 system_pods.go:61] "kube-vip-ha-136200-m02" [598e76fa-0703-40eb-a62c-f3947f06d0e0] Running
	I0501 02:58:40.566039    4712 system_pods.go:61] "kube-vip-ha-136200-m03" [a1bd8449-1900-4366-86a5-49e758a44ebd] Running
	I0501 02:58:40.566039    4712 system_pods.go:61] "storage-provisioner" [ca2bdb41-e7f0-4aa4-b343-0e3a85a4c04e] Running
	I0501 02:58:40.566039    4712 system_pods.go:74] duration metric: took 152.3128ms to wait for pod list to return data ...
	I0501 02:58:40.566039    4712 default_sa.go:34] waiting for default service account to be created ...
	I0501 02:58:40.731110    4712 request.go:629] Waited for 164.8435ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/default/serviceaccounts
	I0501 02:58:40.731110    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/default/serviceaccounts
	I0501 02:58:40.731110    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:40.731110    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:40.731110    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:40.736937    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:40.737529    4712 default_sa.go:45] found service account: "default"
	I0501 02:58:40.737568    4712 default_sa.go:55] duration metric: took 171.5277ms for default service account to be created ...
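The default_sa check above lists ServiceAccounts in the "default" namespace and looks for one named "default". A sketch with an illustrative helper name:

package example

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// defaultSAExists mirrors the default_sa.go check logged above: list
// ServiceAccounts in "default" and look for one named "default".
func defaultSAExists(ctx context.Context, cs kubernetes.Interface) (bool, error) {
    sas, err := cs.CoreV1().ServiceAccounts("default").List(ctx, metav1.ListOptions{})
    if err != nil {
        return false, err
    }
    for _, sa := range sas.Items {
        if sa.Name == "default" {
            return true, nil
        }
    }
    return false, nil
}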
	I0501 02:58:40.737568    4712 system_pods.go:116] waiting for k8s-apps to be running ...
	I0501 02:58:40.936328    4712 request.go:629] Waited for 198.4062ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:58:40.936390    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:58:40.936390    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:40.936390    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:40.936390    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:40.946796    4712 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0501 02:58:40.961809    4712 system_pods.go:86] 24 kube-system pods found
	I0501 02:58:40.961809    4712 system_pods.go:89] "coredns-7db6d8ff4d-2j8mj" [f945c979-ae51-4c8e-acf9-105adc3c83bc] Running
	I0501 02:58:40.961809    4712 system_pods.go:89] "coredns-7db6d8ff4d-rm4gm" [87b284b3-e8e1-452a-8c72-41a8bec62505] Running
	I0501 02:58:40.961809    4712 system_pods.go:89] "etcd-ha-136200" [509a726d-e9a1-4922-8e7e-f3d91ddef75f] Running
	I0501 02:58:40.961809    4712 system_pods.go:89] "etcd-ha-136200-m02" [8122eb28-1fdf-4ddf-ab30-c29e8bcb83c0] Running
	I0501 02:58:40.961809    4712 system_pods.go:89] "etcd-ha-136200-m03" [5f77fdbc-d14d-4d42-9880-fc7e5b2c58b8] Running
	I0501 02:58:40.961809    4712 system_pods.go:89] "kindnet-kb2x4" [6e660648-3dce-469f-a2c2-c99f445ceb20] Running
	I0501 02:58:40.961809    4712 system_pods.go:89] "kindnet-rlfkk" [ae08f4b9-98a8-4faf-ab4a-f04e900375bf] Running
	I0501 02:58:40.961809    4712 system_pods.go:89] "kindnet-sj2rc" [c0e605a0-1182-4977-a8ba-fabe9617bd3c] Running
	I0501 02:58:40.961809    4712 system_pods.go:89] "kube-apiserver-ha-136200" [53ea7d41-7132-4c89-9dbd-bedb2267b55f] Running
	I0501 02:58:40.961809    4712 system_pods.go:89] "kube-apiserver-ha-136200-m02" [fc4036e1-5cc9-4f27-8299-97ee4a29e8b4] Running
	I0501 02:58:40.962364    4712 system_pods.go:89] "kube-apiserver-ha-136200-m03" [cf2822d7-29da-4727-b4c1-19b593abbce8] Running
	I0501 02:58:40.962364    4712 system_pods.go:89] "kube-controller-manager-ha-136200" [4c988ab2-e056-4a0e-88c9-b62839c84d9f] Running
	I0501 02:58:40.962364    4712 system_pods.go:89] "kube-controller-manager-ha-136200-m02" [7a617a7e-7413-4f42-bfe2-763b7ace71ca] Running
	I0501 02:58:40.962364    4712 system_pods.go:89] "kube-controller-manager-ha-136200-m03" [f72989a2-322b-4b6d-884f-8888b7fb6e36] Running
	I0501 02:58:40.962364    4712 system_pods.go:89] "kube-proxy-8f67k" [9dedea03-3066-4852-98e2-10190699b2c5] Running
	I0501 02:58:40.962364    4712 system_pods.go:89] "kube-proxy-9ml9x" [c36f2b4f-ad90-4070-adf1-1ac165f86fdd] Running
	I0501 02:58:40.962434    4712 system_pods.go:89] "kube-proxy-zj5jv" [1802b341-6ac6-46b0-99a3-db02ae5d8e46] Running
	I0501 02:58:40.962434    4712 system_pods.go:89] "kube-scheduler-ha-136200" [6be37365-544a-4367-9852-6eaa5b60e6ad] Running
	I0501 02:58:40.962434    4712 system_pods.go:89] "kube-scheduler-ha-136200-m02" [b2ae6bb2-989b-4598-99e3-f8494b006f3e] Running
	I0501 02:58:40.962434    4712 system_pods.go:89] "kube-scheduler-ha-136200-m03" [79e48699-dd30-47da-8e29-685b9fb437b8] Running
	I0501 02:58:40.962434    4712 system_pods.go:89] "kube-vip-ha-136200" [f6f631ac-0ba9-413a-8810-8a80e4be81b8] Running
	I0501 02:58:40.962434    4712 system_pods.go:89] "kube-vip-ha-136200-m02" [598e76fa-0703-40eb-a62c-f3947f06d0e0] Running
	I0501 02:58:40.962434    4712 system_pods.go:89] "kube-vip-ha-136200-m03" [a1bd8449-1900-4366-86a5-49e758a44ebd] Running
	I0501 02:58:40.962497    4712 system_pods.go:89] "storage-provisioner" [ca2bdb41-e7f0-4aa4-b343-0e3a85a4c04e] Running
	I0501 02:58:40.962521    4712 system_pods.go:126] duration metric: took 224.9515ms to wait for k8s-apps to be running ...
	I0501 02:58:40.962521    4712 system_svc.go:44] waiting for kubelet service to be running ....
	I0501 02:58:40.975583    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:58:41.007354    4712 system_svc.go:56] duration metric: took 44.7618ms WaitForService to wait for kubelet
	I0501 02:58:41.007354    4712 kubeadm.go:576] duration metric: took 18.0193266s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
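The kubelet check above relies entirely on systemctl's exit code: `systemctl is-active --quiet` prints nothing and exits 0 only when the unit is active. minikube runs it over SSH via ssh_runner; the sketch below runs the equivalent command locally for illustration:

package example

import (
    "context"
    "os/exec"
)

// kubeletActive runs `sudo systemctl is-active --quiet kubelet`; with
// --quiet the exit code alone carries the answer, so a nil error means
// the kubelet unit is active.
func kubeletActive(ctx context.Context) bool {
    return exec.CommandContext(ctx, "sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}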
	I0501 02:58:41.007354    4712 node_conditions.go:102] verifying NodePressure condition ...
	I0501 02:58:41.140806    4712 request.go:629] Waited for 133.382ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes
	I0501 02:58:41.140922    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes
	I0501 02:58:41.140964    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:41.140964    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:41.141046    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:41.151428    4712 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0501 02:58:41.153995    4712 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 02:58:41.154053    4712 node_conditions.go:123] node cpu capacity is 2
	I0501 02:58:41.154053    4712 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 02:58:41.154113    4712 node_conditions.go:123] node cpu capacity is 2
	I0501 02:58:41.154113    4712 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 02:58:41.154113    4712 node_conditions.go:123] node cpu capacity is 2
	I0501 02:58:41.154113    4712 node_conditions.go:105] duration metric: took 146.7575ms to run NodePressure ...
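The NodePressure verification above lists all nodes, reports their ephemeral-storage and CPU capacity (the 17734596Ki and 2-CPU values logged for each of the three nodes), and would fail if any memory, disk, or PID pressure condition were True. A sketch under those assumptions; the helper name is illustrative:

package example

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// verifyNodePressure lists nodes, prints capacity like the log above, and
// returns an error if any pressure condition is True on any node.
func verifyNodePressure(ctx context.Context, cs kubernetes.Interface) error {
    nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
    if err != nil {
        return err
    }
    pressure := []corev1.NodeConditionType{
        corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure,
    }
    for _, n := range nodes.Items {
        fmt.Printf("node storage ephemeral capacity is %s\n", n.Status.Capacity.StorageEphemeral())
        fmt.Printf("node cpu capacity is %s\n", n.Status.Capacity.Cpu())
        for _, c := range n.Status.Conditions {
            for _, p := range pressure {
                if c.Type == p && c.Status == corev1.ConditionTrue {
                    return fmt.Errorf("node %s reports %s", n.Name, p)
                }
            }
        }
    }
    return nil
}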
	I0501 02:58:41.154113    4712 start.go:240] waiting for startup goroutines ...
	I0501 02:58:41.154113    4712 start.go:254] writing updated cluster config ...
	I0501 02:58:41.168562    4712 ssh_runner.go:195] Run: rm -f paused
	I0501 02:58:41.321592    4712 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0501 02:58:41.326673    4712 out.go:177] * Done! kubectl is now configured to use "ha-136200" cluster and "default" namespace by default
	
	
	==> Docker <==
	May 01 02:50:57 ha-136200 cri-dockerd[1232]: time="2024-05-01T02:50:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/cadf8314e6ab75357a18a8ca1a8af0de84469ae938750a06f758dc7a9ac32724/resolv.conf as [nameserver 172.28.208.1]"
	May 01 02:50:57 ha-136200 cri-dockerd[1232]: time="2024-05-01T02:50:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/aaa3d1f50041ef1496a348ececd021d6aadfb835f922936a6fabf67c8fb30a63/resolv.conf as [nameserver 172.28.208.1]"
	May 01 02:50:57 ha-136200 cri-dockerd[1232]: time="2024-05-01T02:50:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/54bbf0662d42266237503b0f14eb96eacbef901466d583e51ac92d22d06d20dd/resolv.conf as [nameserver 172.28.208.1]"
	May 01 02:50:57 ha-136200 dockerd[1335]: time="2024-05-01T02:50:57.461175190Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 02:50:57 ha-136200 dockerd[1335]: time="2024-05-01T02:50:57.461265190Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 02:50:57 ha-136200 dockerd[1335]: time="2024-05-01T02:50:57.461283790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 02:50:57 ha-136200 dockerd[1335]: time="2024-05-01T02:50:57.461559092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 02:50:57 ha-136200 dockerd[1335]: time="2024-05-01T02:50:57.481999103Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 02:50:57 ha-136200 dockerd[1335]: time="2024-05-01T02:50:57.482589007Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 02:50:57 ha-136200 dockerd[1335]: time="2024-05-01T02:50:57.482784408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 02:50:57 ha-136200 dockerd[1335]: time="2024-05-01T02:50:57.483246110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 02:50:57 ha-136200 dockerd[1335]: time="2024-05-01T02:50:57.676182761Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 02:50:57 ha-136200 dockerd[1335]: time="2024-05-01T02:50:57.679018677Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 02:50:57 ha-136200 dockerd[1335]: time="2024-05-01T02:50:57.679207678Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 02:50:57 ha-136200 dockerd[1335]: time="2024-05-01T02:50:57.679887882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 02:59:19 ha-136200 dockerd[1335]: time="2024-05-01T02:59:19.812342061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 02:59:19 ha-136200 dockerd[1335]: time="2024-05-01T02:59:19.812581962Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 02:59:19 ha-136200 dockerd[1335]: time="2024-05-01T02:59:19.812601063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 02:59:19 ha-136200 dockerd[1335]: time="2024-05-01T02:59:19.813284867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 02:59:20 ha-136200 cri-dockerd[1232]: time="2024-05-01T02:59:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c61d49828a30cad795117fa540b839a76d74dc6aaa64f0cc1a3a17e5ca07eff2/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	May 01 02:59:21 ha-136200 cri-dockerd[1232]: time="2024-05-01T02:59:21Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	May 01 02:59:21 ha-136200 dockerd[1335]: time="2024-05-01T02:59:21.649291489Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 02:59:21 ha-136200 dockerd[1335]: time="2024-05-01T02:59:21.649563690Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 02:59:21 ha-136200 dockerd[1335]: time="2024-05-01T02:59:21.649688091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 02:59:21 ha-136200 dockerd[1335]: time="2024-05-01T02:59:21.649852992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	bb23816e7b6b8       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   About a minute ago   Running             busybox                   0                   c61d49828a30c       busybox-fc5497c4f-6mlkh
	229343dc7dba5       cbb01a7bd410d                                                                                         9 minutes ago        Running             coredns                   0                   54bbf0662d422       coredns-7db6d8ff4d-rm4gm
	247f815bf0531       6e38f40d628db                                                                                         9 minutes ago        Running             storage-provisioner       0                   aaa3d1f50041e       storage-provisioner
	822aaf8c270e3       cbb01a7bd410d                                                                                         9 minutes ago        Running             coredns                   0                   cadf8314e6ab7       coredns-7db6d8ff4d-2j8mj
	c09511b7df643       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              9 minutes ago        Running             kindnet-cni               0                   bdd01e6cca1ed       kindnet-sj2rc
	562cd55ab9702       a0bf559e280cf                                                                                         9 minutes ago        Running             kube-proxy                0                   579e0dba427c2       kube-proxy-8f67k
	1c063bfe224cd       ghcr.io/kube-vip/kube-vip@sha256:82698885b3b5f926cd940b7000549f3d43850cb6565a708162900c1475a83016     10 minutes ago       Running             kube-vip                  0                   7f28f99b3c8a8       kube-vip-ha-136200
	b6454ceb34cad       259c8277fcbbc                                                                                         10 minutes ago       Running             kube-scheduler            0                   e6cf1f3e651b3       kube-scheduler-ha-136200
	8ff4bf0570939       c42f13656d0b2                                                                                         10 minutes ago       Running             kube-apiserver            0                   2455e947d4906       kube-apiserver-ha-136200
	8fa3aa565b366       c7aad43836fa5                                                                                         10 minutes ago       Running             kube-controller-manager   0                   c7e42fd34e247       kube-controller-manager-ha-136200
	8b0d01885db55       3861cfcd7c04c                                                                                         10 minutes ago       Running             etcd                      0                   da46759fd8e15       etcd-ha-136200
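	
	A listing like this can be regenerated while triaging a similar failure; a minimal sketch (assuming the ha-136200 profile is still running and using the crictl bundled in the guest):
	
	  # list all CRI-managed containers on the node, including exited ones
	  out/minikube-windows-amd64.exe -p ha-136200 ssh -- sudo crictl ps -a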
	
	
	==> coredns [229343dc7dba] <==
	[INFO] 10.244.1.2:38893 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.138771945s
	[INFO] 10.244.1.2:42460 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000276501s
	[INFO] 10.244.1.2:46275 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.0000672s
	[INFO] 10.244.2.2:34687 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.040099987s
	[INFO] 10.244.2.2:56378 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000284202s
	[INFO] 10.244.2.2:56092 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000345802s
	[INFO] 10.244.2.2:52745 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000349302s
	[INFO] 10.244.2.2:34736 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000095201s
	[INFO] 10.244.0.4:51567 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000267102s
	[INFO] 10.244.0.4:33148 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000178701s
	[INFO] 10.244.1.2:43398 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000089301s
	[INFO] 10.244.1.2:52211 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001122s
	[INFO] 10.244.1.2:35470 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.013228661s
	[INFO] 10.244.1.2:40781 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000174701s
	[INFO] 10.244.1.2:45257 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000274201s
	[INFO] 10.244.1.2:36114 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000165601s
	[INFO] 10.244.2.2:56600 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000371102s
	[INFO] 10.244.2.2:39742 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000250502s
	[INFO] 10.244.0.4:45933 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116901s
	[INFO] 10.244.0.4:53681 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000082001s
	[INFO] 10.244.2.2:38830 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000232701s
	[INFO] 10.244.0.4:51196 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.001489507s
	[INFO] 10.244.0.4:58773 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000264301s
	[INFO] 10.244.0.4:43314 - 5 "PTR IN 1.208.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.013461063s
	[INFO] 10.244.1.2:41778 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000092301s
	
	
	==> coredns [822aaf8c270e] <==
	[INFO] 10.244.2.2:41813 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000217501s
	[INFO] 10.244.2.2:54888 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.032885853s
	[INFO] 10.244.0.4:49712 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126101s
	[INFO] 10.244.0.4:55974 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012564658s
	[INFO] 10.244.0.4:45253 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000139901s
	[INFO] 10.244.0.4:60045 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001515s
	[INFO] 10.244.0.4:39879 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000175501s
	[INFO] 10.244.0.4:42089 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000310501s
	[INFO] 10.244.1.2:53821 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111101s
	[INFO] 10.244.1.2:42651 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000116201s
	[INFO] 10.244.2.2:34505 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078s
	[INFO] 10.244.2.2:54873 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001606s
	[INFO] 10.244.0.4:60573 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001105s
	[INFO] 10.244.0.4:37086 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000727s
	[INFO] 10.244.1.2:52370 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123901s
	[INFO] 10.244.1.2:35190 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000277501s
	[INFO] 10.244.1.2:42611 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000158301s
	[INFO] 10.244.1.2:36993 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000280201s
	[INFO] 10.244.2.2:52181 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000206701s
	[INFO] 10.244.2.2:37229 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000092101s
	[INFO] 10.244.2.2:56027 - 5 "PTR IN 1.208.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001251s
	[INFO] 10.244.0.4:55246 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000211601s
	[INFO] 10.244.1.2:57784 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000270801s
	[INFO] 10.244.1.2:39482 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001183s
	[INFO] 10.244.1.2:53277 - 5 "PTR IN 1.208.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000078801s
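	
	The lookups in both coredns logs can be reproduced from inside the cluster; a sketch using the same busybox image the suite uses (the dns-probe pod name is arbitrary):
	
	  kubectl --context ha-136200 run --rm dns-probe --restart=Never \
	    --image=gcr.io/k8s-minikube/busybox -it -- nslookup kubernetes.default.svc.cluster.local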
	
	
	==> describe nodes <==
	Name:               ha-136200
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-136200
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	                    minikube.k8s.io/name=ha-136200
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_01T02_50_30_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 May 2024 02:50:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-136200
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 May 2024 03:00:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 May 2024 02:59:31 +0000   Wed, 01 May 2024 02:50:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 May 2024 02:59:31 +0000   Wed, 01 May 2024 02:50:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 May 2024 02:59:31 +0000   Wed, 01 May 2024 02:50:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 May 2024 02:59:31 +0000   Wed, 01 May 2024 02:50:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.217.218
	  Hostname:    ha-136200
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 bd5a02b3729c454c81fac1ddb77470ea
	  System UUID:                feb48805-7018-ee45-9dd1-70d50cb8dabe
	  Boot ID:                    f931e3ee-8c2d-4859-8d97-8671a4247530
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-6mlkh              0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 coredns-7db6d8ff4d-2j8mj             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m43s
	  kube-system                 coredns-7db6d8ff4d-rm4gm             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m43s
	  kube-system                 etcd-ha-136200                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m58s
	  kube-system                 kindnet-sj2rc                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m43s
	  kube-system                 kube-apiserver-ha-136200             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m56s
	  kube-system                 kube-controller-manager-ha-136200    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m56s
	  kube-system                 kube-proxy-8f67k                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m43s
	  kube-system                 kube-scheduler-ha-136200             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m56s
	  kube-system                 kube-vip-ha-136200                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m56s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 9m41s              kube-proxy       
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node ha-136200 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node ha-136200 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node ha-136200 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 9m56s              kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m56s              kubelet          Node ha-136200 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m56s              kubelet          Node ha-136200 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m56s              kubelet          Node ha-136200 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m56s              kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m44s              node-controller  Node ha-136200 event: Registered Node ha-136200 in Controller
	  Normal  NodeReady                9m29s              kubelet          Node ha-136200 status is now: NodeReady
	  Normal  RegisteredNode           5m41s              node-controller  Node ha-136200 event: Registered Node ha-136200 in Controller
	  Normal  RegisteredNode           108s               node-controller  Node ha-136200 event: Registered Node ha-136200 in Controller
	
	
	Name:               ha-136200-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-136200-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	                    minikube.k8s.io/name=ha-136200
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_01T02_54_28_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 May 2024 02:54:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-136200-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 May 2024 03:00:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 May 2024 02:59:28 +0000   Wed, 01 May 2024 02:54:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 May 2024 02:59:28 +0000   Wed, 01 May 2024 02:54:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 May 2024 02:59:28 +0000   Wed, 01 May 2024 02:54:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 May 2024 02:59:28 +0000   Wed, 01 May 2024 02:54:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.213.142
	  Hostname:    ha-136200-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 b20b8a63378b4be990a267d65bc5017b
	  System UUID:                f54ef658-ded9-8245-9d86-fec94474eff5
	  Boot ID:                    b6a9b4fd-1abd-4ef4-a3a8-bc0c39ab4624
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-pc6wt                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 etcd-ha-136200-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m2s
	  kube-system                 kindnet-kb2x4                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m4s
	  kube-system                 kube-apiserver-ha-136200-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m
	  kube-system                 kube-controller-manager-ha-136200-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m
	  kube-system                 kube-proxy-zj5jv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m4s
	  kube-system                 kube-scheduler-ha-136200-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m
	  kube-system                 kube-vip-ha-136200-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m56s                kube-proxy       
	  Normal  RegisteredNode           6m4s                 node-controller  Node ha-136200-m02 event: Registered Node ha-136200-m02 in Controller
	  Normal  NodeHasSufficientMemory  6m4s (x8 over 6m4s)  kubelet          Node ha-136200-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m4s (x8 over 6m4s)  kubelet          Node ha-136200-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m4s (x7 over 6m4s)  kubelet          Node ha-136200-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m41s                node-controller  Node ha-136200-m02 event: Registered Node ha-136200-m02 in Controller
	  Normal  RegisteredNode           108s                 node-controller  Node ha-136200-m02 event: Registered Node ha-136200-m02 in Controller
	
	
	Name:               ha-136200-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-136200-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	                    minikube.k8s.io/name=ha-136200
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_01T02_58_22_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 May 2024 02:58:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-136200-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 May 2024 03:00:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 May 2024 02:59:47 +0000   Wed, 01 May 2024 02:58:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 May 2024 02:59:47 +0000   Wed, 01 May 2024 02:58:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 May 2024 02:59:47 +0000   Wed, 01 May 2024 02:58:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 May 2024 02:59:47 +0000   Wed, 01 May 2024 02:58:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.216.62
	  Hostname:    ha-136200-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 352997c1e27d48bb8dff5ae5f17e228a
	  System UUID:                0e4a669f-6d5f-be47-a143-5d2db1558741
	  Boot ID:                    8ce378d2-4a7e-40de-aab0-8bc599c3d157
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-2gr4g                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 etcd-ha-136200-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m9s
	  kube-system                 kindnet-rlfkk                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m11s
	  kube-system                 kube-apiserver-ha-136200-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 kube-controller-manager-ha-136200-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 kube-proxy-9ml9x                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 kube-scheduler-ha-136200-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 kube-vip-ha-136200-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m4s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  2m11s (x8 over 2m11s)  kubelet          Node ha-136200-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m11s (x8 over 2m11s)  kubelet          Node ha-136200-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m11s (x7 over 2m11s)  kubelet          Node ha-136200-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m9s                   node-controller  Node ha-136200-m03 event: Registered Node ha-136200-m03 in Controller
	  Normal  RegisteredNode           2m6s                   node-controller  Node ha-136200-m03 event: Registered Node ha-136200-m03 in Controller
	  Normal  RegisteredNode           108s                   node-controller  Node ha-136200-m03 event: Registered Node ha-136200-m03 in Controller
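	
	The three node reports above are standard kubectl output; to re-query a live cluster (a sketch, assuming the ha-136200 context is still in the kubeconfig):
	
	  kubectl --context ha-136200 describe nodes
	  kubectl --context ha-136200 get nodes -o wide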
	
	
	==> dmesg <==
	[  +1.922879] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +7.445343] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[May 1 02:49] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.218573] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[ +31.318095] systemd-fstab-generator[950]: Ignoring "noauto" option for root device
	[  +0.121878] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.646066] systemd-fstab-generator[989]: Ignoring "noauto" option for root device
	[  +0.241331] systemd-fstab-generator[1001]: Ignoring "noauto" option for root device
	[  +0.276456] systemd-fstab-generator[1015]: Ignoring "noauto" option for root device
	[  +2.872310] systemd-fstab-generator[1184]: Ignoring "noauto" option for root device
	[  +0.245693] systemd-fstab-generator[1196]: Ignoring "noauto" option for root device
	[  +0.234055] systemd-fstab-generator[1209]: Ignoring "noauto" option for root device
	[  +0.318386] systemd-fstab-generator[1224]: Ignoring "noauto" option for root device
	[May 1 02:50] systemd-fstab-generator[1321]: Ignoring "noauto" option for root device
	[  +0.117675] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.894847] systemd-fstab-generator[1526]: Ignoring "noauto" option for root device
	[  +6.744854] systemd-fstab-generator[1728]: Ignoring "noauto" option for root device
	[  +0.118239] kauditd_printk_skb: 73 callbacks suppressed
	[  +6.246999] kauditd_printk_skb: 67 callbacks suppressed
	[  +4.464074] systemd-fstab-generator[2223]: Ignoring "noauto" option for root device
	[ +14.473066] kauditd_printk_skb: 17 callbacks suppressed
	[  +7.151247] kauditd_printk_skb: 29 callbacks suppressed
	[May 1 02:54] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [8b0d01885db5] <==
	{"level":"info","ts":"2024-05-01T02:58:18.941185Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"d5cb0dbd3e937195","remote-peer-id":"477eb305d8136a0f"}
	{"level":"info","ts":"2024-05-01T02:58:18.949213Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"d5cb0dbd3e937195","remote-peer-id":"477eb305d8136a0f"}
	{"level":"warn","ts":"2024-05-01T02:58:19.552656Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"477eb305d8136a0f","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-05-01T02:58:20.552285Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"477eb305d8136a0f","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-05-01T02:58:21.563903Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d5cb0dbd3e937195 switched to configuration voters=(5151751861439785487 15405422056800743829 16720541665161568577)"}
	{"level":"info","ts":"2024-05-01T02:58:21.564037Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"d92207d17d517cdc","local-member-id":"d5cb0dbd3e937195"}
	{"level":"info","ts":"2024-05-01T02:58:21.564065Z","caller":"etcdserver/server.go:1946","msg":"applied a configuration change through raft","local-member-id":"d5cb0dbd3e937195","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"477eb305d8136a0f"}
	{"level":"warn","ts":"2024-05-01T02:58:27.32276Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"e80b4c0e2412e141","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"53.82673ms"}
	{"level":"warn","ts":"2024-05-01T02:58:27.322905Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"477eb305d8136a0f","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"53.975031ms"}
	{"level":"info","ts":"2024-05-01T02:58:27.32416Z","caller":"traceutil/trace.go:171","msg":"trace[1054755025] linearizableReadLoop","detail":"{readStateIndex:1749; appliedIndex:1750; }","duration":"179.427394ms","start":"2024-05-01T02:58:27.144718Z","end":"2024-05-01T02:58:27.324146Z","steps":["trace[1054755025] 'read index received'  (duration: 179.423494ms)","trace[1054755025] 'applied index is now lower than readState.Index'  (duration: 2.9µs)"],"step_count":2}
	{"level":"warn","ts":"2024-05-01T02:58:27.324463Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"179.798696ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-136200-m03\" ","response":"range_response_count:1 size:4442"}
	{"level":"info","ts":"2024-05-01T02:58:27.325782Z","caller":"traceutil/trace.go:171","msg":"trace[1458868258] range","detail":"{range_begin:/registry/minions/ha-136200-m03; range_end:; response_count:1; response_revision:1575; }","duration":"181.205807ms","start":"2024-05-01T02:58:27.144565Z","end":"2024-05-01T02:58:27.325771Z","steps":["trace[1458868258] 'agreement among raft nodes before linearized reading'  (duration: 179.804097ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T02:58:27.325805Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.295259ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-01T02:58:27.327416Z","caller":"traceutil/trace.go:171","msg":"trace[1620131110] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1575; }","duration":"106.638269ms","start":"2024-05-01T02:58:27.220472Z","end":"2024-05-01T02:58:27.32711Z","steps":["trace[1620131110] 'agreement among raft nodes before linearized reading'  (duration: 105.303859ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T02:58:28.207615Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"227.283539ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/172.28.217.218\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-05-01T02:58:28.20815Z","caller":"traceutil/trace.go:171","msg":"trace[526707853] range","detail":"{range_begin:/registry/masterleases/172.28.217.218; range_end:; response_count:1; response_revision:1578; }","duration":"227.827942ms","start":"2024-05-01T02:58:27.980307Z","end":"2024-05-01T02:58:28.208135Z","steps":["trace[526707853] 'range keys from in-memory index tree'  (duration: 226.16143ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-01T02:58:33.155687Z","caller":"traceutil/trace.go:171","msg":"trace[822609576] linearizableReadLoop","detail":"{readStateIndex:1773; appliedIndex:1773; }","duration":"127.106614ms","start":"2024-05-01T02:58:33.028561Z","end":"2024-05-01T02:58:33.155667Z","steps":["trace[822609576] 'read index received'  (duration: 127.096113ms)","trace[822609576] 'applied index is now lower than readState.Index'  (duration: 3.201µs)"],"step_count":2}
	{"level":"info","ts":"2024-05-01T02:58:33.156309Z","caller":"traceutil/trace.go:171","msg":"trace[2144601308] transaction","detail":"{read_only:false; response_revision:1595; number_of_response:1; }","duration":"161.212759ms","start":"2024-05-01T02:58:32.995083Z","end":"2024-05-01T02:58:33.156296Z","steps":["trace[2144601308] 'process raft request'  (duration: 161.011858ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T02:58:33.156653Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.070121ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true ","response":"range_response_count:0 size:8"}
	{"level":"info","ts":"2024-05-01T02:58:33.156711Z","caller":"traceutil/trace.go:171","msg":"trace[302833371] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:0; response_revision:1595; }","duration":"128.172822ms","start":"2024-05-01T02:58:33.02853Z","end":"2024-05-01T02:58:33.156702Z","steps":["trace[302833371] 'agreement among raft nodes before linearized reading'  (duration: 127.786619ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T02:58:33.264542Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.338328ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-ha-136200-m03\" ","response":"range_response_count:1 size:4512"}
	{"level":"info","ts":"2024-05-01T02:58:33.264603Z","caller":"traceutil/trace.go:171","msg":"trace[1479493783] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-ha-136200-m03; range_end:; response_count:1; response_revision:1595; }","duration":"101.45723ms","start":"2024-05-01T02:58:33.163133Z","end":"2024-05-01T02:58:33.26459Z","steps":["trace[1479493783] 'agreement among raft nodes before linearized reading'  (duration: 89.079641ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-01T03:00:22.770623Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1078}
	{"level":"info","ts":"2024-05-01T03:00:22.882389Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1078,"took":"110.812232ms","hash":3849218282,"current-db-size-bytes":3649536,"current-db-size":"3.6 MB","current-db-size-in-use-bytes":2129920,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-05-01T03:00:22.882504Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3849218282,"revision":1078,"compact-revision":-1}
	
	
	==> kernel <==
	 03:00:25 up 12 min,  0 users,  load average: 0.11, 0.35, 0.28
	Linux ha-136200 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [c09511b7df64] <==
	I0501 02:59:42.094873       1 main.go:250] Node ha-136200-m03 has CIDR [10.244.2.0/24] 
	I0501 02:59:52.112453       1 main.go:223] Handling node with IPs: map[172.28.217.218:{}]
	I0501 02:59:52.112856       1 main.go:227] handling current node
	I0501 02:59:52.112878       1 main.go:223] Handling node with IPs: map[172.28.213.142:{}]
	I0501 02:59:52.112888       1 main.go:250] Node ha-136200-m02 has CIDR [10.244.1.0/24] 
	I0501 02:59:52.113400       1 main.go:223] Handling node with IPs: map[172.28.216.62:{}]
	I0501 02:59:52.113646       1 main.go:250] Node ha-136200-m03 has CIDR [10.244.2.0/24] 
	I0501 03:00:02.130491       1 main.go:223] Handling node with IPs: map[172.28.217.218:{}]
	I0501 03:00:02.130653       1 main.go:227] handling current node
	I0501 03:00:02.130669       1 main.go:223] Handling node with IPs: map[172.28.213.142:{}]
	I0501 03:00:02.130678       1 main.go:250] Node ha-136200-m02 has CIDR [10.244.1.0/24] 
	I0501 03:00:02.130809       1 main.go:223] Handling node with IPs: map[172.28.216.62:{}]
	I0501 03:00:02.130818       1 main.go:250] Node ha-136200-m03 has CIDR [10.244.2.0/24] 
	I0501 03:00:12.141702       1 main.go:223] Handling node with IPs: map[172.28.217.218:{}]
	I0501 03:00:12.141805       1 main.go:227] handling current node
	I0501 03:00:12.141831       1 main.go:223] Handling node with IPs: map[172.28.213.142:{}]
	I0501 03:00:12.141840       1 main.go:250] Node ha-136200-m02 has CIDR [10.244.1.0/24] 
	I0501 03:00:12.142079       1 main.go:223] Handling node with IPs: map[172.28.216.62:{}]
	I0501 03:00:12.142133       1 main.go:250] Node ha-136200-m03 has CIDR [10.244.2.0/24] 
	I0501 03:00:22.157555       1 main.go:223] Handling node with IPs: map[172.28.217.218:{}]
	I0501 03:00:22.158009       1 main.go:227] handling current node
	I0501 03:00:22.158227       1 main.go:223] Handling node with IPs: map[172.28.213.142:{}]
	I0501 03:00:22.158378       1 main.go:250] Node ha-136200-m02 has CIDR [10.244.1.0/24] 
	I0501 03:00:22.158834       1 main.go:223] Handling node with IPs: map[172.28.216.62:{}]
	I0501 03:00:22.158989       1 main.go:250] Node ha-136200-m03 has CIDR [10.244.2.0/24] 
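	
	kindnet is only echoing the per-node pod CIDR assignments here; the same data can be read off the node objects directly (a sketch):
	
	  kubectl --context ha-136200 get nodes \
	    -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR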
	
	
	==> kube-apiserver [8ff4bf057093] <==
	Trace[670363995]: [511.709143ms] [511.709143ms] END
	I0501 02:54:22.977601       1 trace.go:236] Trace[1452834138]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:f62db0d2-4e8e-4640-9a4d-0aa19a03aa34,client:172.28.213.142,api-group:storage.k8s.io,api-version:v1,name:,subresource:,namespace:,protocol:HTTP/2.0,resource:csinodes,scope:resource,url:/apis/storage.k8s.io/v1/csinodes,user-agent:kubelet/v1.30.0 (linux/amd64) kubernetes/7c48c2b,verb:POST (01-May-2024 02:54:22.472) (total time: 504ms):
	Trace[1452834138]: ["Create etcd3" audit-id:f62db0d2-4e8e-4640-9a4d-0aa19a03aa34,key:/csinodes/ha-136200-m02,type:*storage.CSINode,resource:csinodes.storage.k8s.io 504ms (02:54:22.473)
	Trace[1452834138]:  ---"Txn call succeeded" 503ms (02:54:22.977)]
	Trace[1452834138]: [504.731076ms] [504.731076ms] END
	E0501 02:58:15.730056       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0501 02:58:15.730169       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0501 02:58:15.730071       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 11.2µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0501 02:58:15.731583       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0501 02:58:15.732500       1 timeout.go:142] post-timeout activity - time-elapsed: 2.647619ms, PATCH "/api/v1/namespaces/default/events/ha-136200-m03.17cb3e09c56bb983" result: <nil>
	E0501 02:59:25.456065       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61414: use of closed network connection
	E0501 02:59:26.016855       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61416: use of closed network connection
	E0501 02:59:26.743048       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61418: use of closed network connection
	E0501 02:59:27.423392       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61421: use of closed network connection
	E0501 02:59:28.036056       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61423: use of closed network connection
	E0501 02:59:28.618704       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61425: use of closed network connection
	E0501 02:59:29.166283       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61427: use of closed network connection
	E0501 02:59:29.771114       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61429: use of closed network connection
	E0501 02:59:30.328866       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61431: use of closed network connection
	E0501 02:59:31.360058       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61434: use of closed network connection
	E0501 02:59:41.926438       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61436: use of closed network connection
	E0501 02:59:42.497809       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61439: use of closed network connection
	E0501 02:59:53.089743       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61441: use of closed network connection
	E0501 02:59:53.660135       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61443: use of closed network connection
	E0501 03:00:04.225188       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61445: use of closed network connection
	
	
	==> kube-controller-manager [8fa3aa565b36] <==
	I0501 02:50:56.182254       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="74.9µs"
	I0501 02:50:56.871742       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0501 02:50:58.734842       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="91.702µs"
	I0501 02:50:58.815553       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="27.110569ms"
	I0501 02:50:58.817069       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="234.005µs"
	I0501 02:50:58.859853       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="29.315916ms"
	I0501 02:50:58.862248       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="191.304µs"
	I0501 02:54:21.439127       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-136200-m02\" does not exist"
	I0501 02:54:21.501101       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-136200-m02" podCIDRs=["10.244.1.0/24"]
	I0501 02:54:21.914883       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-136200-m02"
	I0501 02:58:14.901209       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-136200-m03\" does not exist"
	I0501 02:58:14.933592       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-136200-m03" podCIDRs=["10.244.2.0/24"]
	I0501 02:58:16.990389       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-136200-m03"
	I0501 02:59:18.914466       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="150.158562ms"
	I0501 02:59:19.095324       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="180.785078ms"
	I0501 02:59:19.461767       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="365.331283ms"
	I0501 02:59:19.490263       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.541695ms"
	I0501 02:59:19.490899       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="65.9µs"
	I0501 02:59:21.446166       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.9µs"
	I0501 02:59:21.996495       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.097772ms"
	I0501 02:59:21.997082       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="185.301µs"
	I0501 02:59:22.122170       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="78.415164ms"
	I0501 02:59:22.122332       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.3µs"
	I0501 02:59:22.485058       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="83.861489ms"
	I0501 02:59:22.485150       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.8µs"
	
	
	==> kube-proxy [562cd55ab970] <==
	I0501 02:50:44.069527       1 server_linux.go:69] "Using iptables proxy"
	I0501 02:50:44.111745       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.28.217.218"]
	I0501 02:50:44.171562       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0501 02:50:44.171703       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0501 02:50:44.171730       1 server_linux.go:165] "Using iptables Proxier"
	I0501 02:50:44.178320       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0501 02:50:44.180232       1 server.go:872] "Version info" version="v1.30.0"
	I0501 02:50:44.180271       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 02:50:44.184544       1 config.go:192] "Starting service config controller"
	I0501 02:50:44.185913       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0501 02:50:44.186319       1 config.go:101] "Starting endpoint slice config controller"
	I0501 02:50:44.186555       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0501 02:50:44.189915       1 config.go:319] "Starting node config controller"
	I0501 02:50:44.190110       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0501 02:50:44.287624       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0501 02:50:44.287761       1 shared_informer.go:320] Caches are synced for service config
	I0501 02:50:44.290292       1 shared_informer.go:320] Caches are synced for node config
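	
	kube-proxy reports the iptables backend; the NAT rules it programmed can be inspected from inside the VM (a sketch, assuming the ha-136200 profile):
	
	  out/minikube-windows-amd64.exe -p ha-136200 ssh -- sudo iptables -t nat -L KUBE-SERVICES -n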
	
	
	==> kube-scheduler [b6454ceb34ca] <==
	W0501 02:50:26.797411       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0501 02:50:26.797624       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0501 02:50:26.830216       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0501 02:50:26.830267       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0501 02:50:26.925545       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0501 02:50:26.925605       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0501 02:50:26.948130       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0501 02:50:26.948245       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0501 02:50:27.027771       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0501 02:50:27.028119       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0501 02:50:27.045542       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0501 02:50:27.045577       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0501 02:50:27.049002       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0501 02:50:27.049031       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0501 02:50:30.148462       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0501 02:59:18.858485       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-pc6wt\": pod busybox-fc5497c4f-pc6wt is already assigned to node \"ha-136200-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-pc6wt" node="ha-136200-m03"
	E0501 02:59:18.859227       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-pc6wt\": pod busybox-fc5497c4f-pc6wt is already assigned to node \"ha-136200-m02\"" pod="default/busybox-fc5497c4f-pc6wt"
	E0501 02:59:18.932248       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-6mlkh\": pod busybox-fc5497c4f-6mlkh is already assigned to node \"ha-136200\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-6mlkh" node="ha-136200"
	E0501 02:59:18.932355       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 10f52d20-5605-40b5-8875-ceb0cb5c2e53(default/busybox-fc5497c4f-6mlkh) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-6mlkh"
	E0501 02:59:18.932383       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-6mlkh\": pod busybox-fc5497c4f-6mlkh is already assigned to node \"ha-136200\"" pod="default/busybox-fc5497c4f-6mlkh"
	I0501 02:59:18.932412       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-6mlkh" node="ha-136200"
	E0501 02:59:18.934021       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-2gr4g\": pod busybox-fc5497c4f-2gr4g is already assigned to node \"ha-136200-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-2gr4g" node="ha-136200-m03"
	E0501 02:59:18.934194       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod b6febdff-c378-4d33-94ae-8b321071f921(default/busybox-fc5497c4f-2gr4g) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-2gr4g"
	E0501 02:59:18.934386       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-2gr4g\": pod busybox-fc5497c4f-2gr4g is already assigned to node \"ha-136200-m03\"" pod="default/busybox-fc5497c4f-2gr4g"
	I0501 02:59:18.937753       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-2gr4g" node="ha-136200-m03"
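The 02:50 reflector warnings are a startup race: the scheduler's informers begin listing before its RBAC grants are visible, and they stop once the caches sync at 02:50:30. The 02:59 "Operation cannot be fulfilled on pods/binding" errors are optimistic-concurrency conflicts from duplicate bind attempts (e.g. a retry racing an earlier successful bind in this three-control-plane cluster); the scheduler itself notes the pod is already assigned and aborts the requeue, so they are benign. Both are easy to confirm by hand; the commands below are illustrative and assume kubectl still points at this cluster:

	kubectl --context ha-136200 auth can-i list namespaces --as=system:kube-scheduler
	kubectl --context ha-136200 get pod busybox-fc5497c4f-6mlkh -o wide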
	
	
	==> kubelet <==
	May 01 02:55:29 ha-136200 kubelet[2230]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 02:55:29 ha-136200 kubelet[2230]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 02:55:29 ha-136200 kubelet[2230]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 02:56:29 ha-136200 kubelet[2230]: E0501 02:56:29.306774    2230 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 02:56:29 ha-136200 kubelet[2230]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 02:56:29 ha-136200 kubelet[2230]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 02:56:29 ha-136200 kubelet[2230]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 02:56:29 ha-136200 kubelet[2230]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 02:57:29 ha-136200 kubelet[2230]: E0501 02:57:29.312912    2230 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 02:57:29 ha-136200 kubelet[2230]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 02:57:29 ha-136200 kubelet[2230]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 02:57:29 ha-136200 kubelet[2230]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 02:57:29 ha-136200 kubelet[2230]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 02:58:29 ha-136200 kubelet[2230]: E0501 02:58:29.309678    2230 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 02:58:29 ha-136200 kubelet[2230]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 02:58:29 ha-136200 kubelet[2230]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 02:58:29 ha-136200 kubelet[2230]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 02:58:29 ha-136200 kubelet[2230]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 02:59:18 ha-136200 kubelet[2230]: I0501 02:59:18.912462    2230 topology_manager.go:215] "Topology Admit Handler" podUID="10f52d20-5605-40b5-8875-ceb0cb5c2e53" podNamespace="default" podName="busybox-fc5497c4f-6mlkh"
	May 01 02:59:19 ha-136200 kubelet[2230]: I0501 02:59:19.030006    2230 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sph74\" (UniqueName: \"kubernetes.io/projected/10f52d20-5605-40b5-8875-ceb0cb5c2e53-kube-api-access-sph74\") pod \"busybox-fc5497c4f-6mlkh\" (UID: \"10f52d20-5605-40b5-8875-ceb0cb5c2e53\") " pod="default/busybox-fc5497c4f-6mlkh"
	May 01 02:59:29 ha-136200 kubelet[2230]: E0501 02:59:29.309733    2230 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 02:59:29 ha-136200 kubelet[2230]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 02:59:29 ha-136200 kubelet[2230]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 02:59:29 ha-136200 kubelet[2230]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 02:59:29 ha-136200 kubelet[2230]:  > table="nat" chain="KUBE-KUBELET-CANARY"
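The repeating kubelet error is its iptables canary: once a minute it tries to create a KUBE-KUBELET-CANARY chain in the ip6tables nat table, which fails because the guest kernel has no ip6table_nat module loaded ("Table does not exist (do you need to insmod?)"). For this IPv4-only cluster the canary failure is noise rather than a fault. An illustrative way to confirm from the host:

	out/minikube-windows-amd64.exe ssh -p ha-136200 "lsmod | grep ip6table"
	out/minikube-windows-amd64.exe ssh -p ha-136200 "sudo ip6tables -t nat -L -n"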
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0501 03:00:16.877642    7880 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
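The same Docker CLI warning appears in every minikube invocation in this run: config.json names "default" as the current context, but no metadata file exists under .docker\contexts\meta for the built-in default context, so resolution fails before minikube falls back. It is harmless for the hyperv driver; re-selecting the default context should silence it (illustrative):

	docker context use default
	docker context ls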
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-136200 -n ha-136200
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-136200 -n ha-136200: (12.4992708s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-136200 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (69.82s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (262.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-136200 -v=7 --alsologtostderr
E0501 03:01:34.965795   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt: The system cannot find the path specified.
E0501 03:03:37.996542   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\client.crt: The system cannot find the path specified.
ha_test.go:228: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p ha-136200 -v=7 --alsologtostderr: exit status 90 (3m46.3464245s)
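The two cert_rotation errors reference client certificates from earlier profiles in this run (addons-286100, functional-869300) whose files no longer exist; they come from stale cached contexts, not from ha-136200 itself. The node add then exits with status 90 after 3m46s; the stdout below stops right after VM creation, and the stderr trace shows how far provisioning of m04 got. Deleting the stale profiles would clear the cert warnings (illustrative):

	out/minikube-windows-amd64.exe delete -p addons-286100
	out/minikube-windows-amd64.exe delete -p functional-869300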

                                                
                                                
-- stdout --
	* Adding node m04 to cluster ha-136200 as [worker]
	* Starting "ha-136200-m04" worker node in "ha-136200" cluster
	* Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0501 03:00:40.313591   12884 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0501 03:00:40.401590   12884 out.go:291] Setting OutFile to fd 1020 ...
	I0501 03:00:40.402539   12884 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:00:40.402539   12884 out.go:304] Setting ErrFile to fd 760...
	I0501 03:00:40.402539   12884 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:00:40.419611   12884 mustload.go:65] Loading cluster: ha-136200
	I0501 03:00:40.420774   12884 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 03:00:40.422183   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 03:00:42.600074   12884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:00:42.600192   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:00:42.600299   12884 host.go:66] Checking if "ha-136200" exists ...
	I0501 03:00:42.602137   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 03:00:44.794338   12884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:00:44.794338   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:00:44.794338   12884 host.go:66] Checking if "ha-136200-m02" exists ...
	I0501 03:00:44.795910   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 03:00:47.016970   12884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:00:47.016970   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:00:47.016970   12884 host.go:66] Checking if "ha-136200-m03" exists ...
	I0501 03:00:47.017851   12884 api_server.go:166] Checking apiserver status ...
	I0501 03:00:47.036358   12884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:00:47.036358   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 03:00:49.255342   12884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:00:49.255535   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:00:49.255535   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 03:00:51.952654   12884 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 03:00:51.952654   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:00:51.952654   12884 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 03:00:52.092493   12884 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.0559926s)
	I0501 03:00:52.107065   12884 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2105/cgroup
	W0501 03:00:52.137534   12884 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2105/cgroup: Process exited with status 1
	stdout:
	
	stderr:
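The freezer probe exits 1 most likely because the guest runs cgroup v2, which exposes no freezer hierarchy in /proc/<pid>/cgroup; minikube records the warning (api_server.go:177) and falls back to the healthz check that follows. Illustrative verification (cgroup2fs indicates v2):

	out/minikube-windows-amd64.exe ssh -p ha-136200 "stat -fc %T /sys/fs/cgroup"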
	I0501 03:00:52.157727   12884 ssh_runner.go:195] Run: ls
	I0501 03:00:52.167002   12884 api_server.go:253] Checking apiserver healthz at https://172.28.217.218:8443/healthz ...
	I0501 03:00:52.179492   12884 api_server.go:279] https://172.28.217.218:8443/healthz returned 200:
	ok
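With the freezer check skipped, apiserver health is established solely by the 200 from /healthz above. The same endpoint can be probed by hand against the primary's address (illustrative; -k skips verification of the minikube CA):

	curl -k https://172.28.217.218:8443/healthz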
	I0501 03:00:52.185312   12884 out.go:177] * Adding node m04 to cluster ha-136200 as [worker]
	I0501 03:00:52.188298   12884 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 03:00:52.188298   12884 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json ...
	I0501 03:00:52.193883   12884 out.go:177] * Starting "ha-136200-m04" worker node in "ha-136200" cluster
	I0501 03:00:52.196108   12884 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0501 03:00:52.196108   12884 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0501 03:00:52.196108   12884 cache.go:56] Caching tarball of preloaded images
	I0501 03:00:52.196912   12884 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0501 03:00:52.196912   12884 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0501 03:00:52.196912   12884 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json ...
	I0501 03:00:52.205575   12884 start.go:360] acquireMachinesLock for ha-136200-m04: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 03:00:52.206578   12884 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-136200-m04"
	I0501 03:00:52.206760   12884 start.go:93] Provisioning new machine with config: &{Name:ha-136200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP:172.28.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.217.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.213.142 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.28.216.62 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m04 IP: Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}
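This one log line is the complete machine config minikube persists to the profile: the Nodes list shows the primary at 172.28.217.218 plus m02/m03 (all ControlPlane:true Worker:true) and the new m04 entry with ControlPlane:false and no IP yet. The saved copy can be inspected directly at the path the log names (illustrative):

	type C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json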
	I0501 03:00:52.206760   12884 start.go:125] createHost starting for "m04" (driver="hyperv")
	I0501 03:00:52.209729   12884 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0501 03:00:52.209729   12884 start.go:159] libmachine.API.Create for "ha-136200" (driver="hyperv")
	I0501 03:00:52.209729   12884 client.go:168] LocalClient.Create starting
	I0501 03:00:52.210381   12884 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0501 03:00:52.210381   12884 main.go:141] libmachine: Decoding PEM data...
	I0501 03:00:52.210979   12884 main.go:141] libmachine: Parsing certificate...
	I0501 03:00:52.211012   12884 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0501 03:00:52.211012   12884 main.go:141] libmachine: Decoding PEM data...
	I0501 03:00:52.211012   12884 main.go:141] libmachine: Parsing certificate...
	I0501 03:00:52.211012   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0501 03:00:54.234223   12884 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0501 03:00:54.234882   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:00:54.235014   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0501 03:00:56.049159   12884 main.go:141] libmachine: [stdout =====>] : False
	
	I0501 03:00:56.049355   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:00:56.049355   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0501 03:00:57.638417   12884 main.go:141] libmachine: [stdout =====>] : True
	
	I0501 03:00:57.638417   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:00:57.638823   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0501 03:01:01.486420   12884 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0501 03:01:01.486420   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:01:01.488818   12884 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso...
	I0501 03:01:01.964795   12884 main.go:141] libmachine: Creating SSH key...
	I0501 03:01:02.292493   12884 main.go:141] libmachine: Creating VM...
	I0501 03:01:02.292493   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0501 03:01:05.349385   12884 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0501 03:01:05.350292   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:01:05.350476   12884 main.go:141] libmachine: Using switch "Default Switch"
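In the JSON above, SwitchType 1 is Hyper-V's Internal type (0=Private, 1=Internal, 2=External); the query accepts either an External switch or the fixed GUID of the NAT'd "Default Switch", and only the latter exists on this host, so the VM will get a DHCP lease on its internal subnet. Illustrative enumeration:

	powershell.exe -NoProfile "Hyper-V\Get-VMSwitch | Select-Object Name, SwitchType"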
	I0501 03:01:05.350476   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0501 03:01:07.236640   12884 main.go:141] libmachine: [stdout =====>] : True
	
	I0501 03:01:07.236640   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:01:07.236640   12884 main.go:141] libmachine: Creating VHD
	I0501 03:01:07.236640   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m04\fixed.vhd' -SizeBytes 10MB -Fixed
	I0501 03:01:10.991854   12884 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m04\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 4A578299-FFEB-4831-87ED-AB095940E480
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0501 03:01:10.991854   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:01:10.991854   12884 main.go:141] libmachine: Writing magic tar header
	I0501 03:01:10.991854   12884 main.go:141] libmachine: Writing SSH key tar header
	I0501 03:01:11.001555   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m04\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m04\disk.vhd' -VHDType Dynamic -DeleteSource
	I0501 03:01:14.258779   12884 main.go:141] libmachine: [stdout =====>] : 
	I0501 03:01:14.258779   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:01:14.259176   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m04\disk.vhd' -SizeBytes 20000MB
	I0501 03:01:16.925129   12884 main.go:141] libmachine: [stdout =====>] : 
	I0501 03:01:16.925338   12884 main.go:141] libmachine: [stderr =====>] : 
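The VHD sequence above is how libmachine injects the SSH key: it creates a tiny fixed-format VHD (whose data region is raw, so bytes land at offset 0), writes a tar stream containing the key straight into it (the "magic tar header" lines), then converts the image to dynamic and grows it to the requested 20000MB; the guest's minikube-automount service (required by the docker unit further below) appears to pick this payload up on first boot. The same steps in bare PowerShell, with paths shortened for illustration:

	Hyper-V\New-VHD -Path 'C:\...\fixed.vhd' -SizeBytes 10MB -Fixed
	# write the tar payload into fixed.vhd here
	Hyper-V\Convert-VHD -Path 'C:\...\fixed.vhd' -DestinationPath 'C:\...\disk.vhd' -VHDType Dynamic -DeleteSource
	Hyper-V\Resize-VHD -Path 'C:\...\disk.vhd' -SizeBytes 20000MB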
	I0501 03:01:16.925425   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-136200-m04 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m04' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0501 03:01:20.945814   12884 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-136200-m04 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0501 03:01:20.946640   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:01:20.946640   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-136200-m04 -DynamicMemoryEnabled $false
	I0501 03:01:23.346754   12884 main.go:141] libmachine: [stdout =====>] : 
	I0501 03:01:23.347227   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:01:23.347331   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-136200-m04 -Count 2
	I0501 03:01:25.667278   12884 main.go:141] libmachine: [stdout =====>] : 
	I0501 03:01:25.667313   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:01:25.667388   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-136200-m04 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m04\boot2docker.iso'
	I0501 03:01:28.414950   12884 main.go:141] libmachine: [stdout =====>] : 
	I0501 03:01:28.414950   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:01:28.414950   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-136200-m04 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m04\disk.vhd'
	I0501 03:01:31.203268   12884 main.go:141] libmachine: [stdout =====>] : 
	I0501 03:01:31.203268   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:01:31.203393   12884 main.go:141] libmachine: Starting VM...
	I0501 03:01:31.203393   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-136200-m04
	I0501 03:01:34.396145   12884 main.go:141] libmachine: [stdout =====>] : 
	I0501 03:01:34.397189   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:01:34.397343   12884 main.go:141] libmachine: Waiting for host to start...
	I0501 03:01:34.397591   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m04 ).state
	I0501 03:01:36.769475   12884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:01:36.769475   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:01:36.769475   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m04 ).networkadapters[0]).ipaddresses[0]
	I0501 03:01:39.389617   12884 main.go:141] libmachine: [stdout =====>] : 
	I0501 03:01:39.389617   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:01:40.402503   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m04 ).state
	I0501 03:01:42.637254   12884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:01:42.637254   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:01:42.637254   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m04 ).networkadapters[0]).ipaddresses[0]
	I0501 03:01:45.305835   12884 main.go:141] libmachine: [stdout =====>] : 
	I0501 03:01:45.305835   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:01:46.318043   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m04 ).state
	I0501 03:01:48.607205   12884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:01:48.607205   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:01:48.607355   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m04 ).networkadapters[0]).ipaddresses[0]
	I0501 03:01:51.265355   12884 main.go:141] libmachine: [stdout =====>] : 
	I0501 03:01:51.265355   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:01:52.280804   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m04 ).state
	I0501 03:01:54.534595   12884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:01:54.534595   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:01:54.534595   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m04 ).networkadapters[0]).ipaddresses[0]
	I0501 03:01:57.121711   12884 main.go:141] libmachine: [stdout =====>] : 
	I0501 03:01:57.121711   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:01:58.135721   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m04 ).state
	I0501 03:02:00.447606   12884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:02:00.447606   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:02:00.447705   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m04 ).networkadapters[0]).ipaddresses[0]
	I0501 03:02:03.102032   12884 main.go:141] libmachine: [stdout =====>] : 172.28.217.174
	
	I0501 03:02:03.102098   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:02:03.102098   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m04 ).state
	I0501 03:02:05.304901   12884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:02:05.305115   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:02:05.305115   12884 machine.go:94] provisionDockerMachine start ...
	I0501 03:02:05.305115   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m04 ).state
	I0501 03:02:07.525683   12884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:02:07.526630   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:02:07.526630   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m04 ).networkadapters[0]).ipaddresses[0]
	I0501 03:02:10.186314   12884 main.go:141] libmachine: [stdout =====>] : 172.28.217.174
	
	I0501 03:02:10.186314   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:02:10.194114   12884 main.go:141] libmachine: Using SSH client type: native
	I0501 03:02:10.206743   12884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.217.174 22 <nil> <nil>}
	I0501 03:02:10.206743   12884 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 03:02:10.357907   12884 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0501 03:02:10.357907   12884 buildroot.go:166] provisioning hostname "ha-136200-m04"
	I0501 03:02:10.357907   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m04 ).state
	I0501 03:02:12.542937   12884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:02:12.543841   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:02:12.543918   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m04 ).networkadapters[0]).ipaddresses[0]
	I0501 03:02:15.207027   12884 main.go:141] libmachine: [stdout =====>] : 172.28.217.174
	
	I0501 03:02:15.207027   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:02:15.214095   12884 main.go:141] libmachine: Using SSH client type: native
	I0501 03:02:15.214399   12884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.217.174 22 <nil> <nil>}
	I0501 03:02:15.214399   12884 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-136200-m04 && echo "ha-136200-m04" | sudo tee /etc/hostname
	I0501 03:02:15.390932   12884 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-136200-m04
	
	I0501 03:02:15.390932   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m04 ).state
	I0501 03:02:17.604366   12884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:02:17.604506   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:02:17.604612   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m04 ).networkadapters[0]).ipaddresses[0]
	I0501 03:02:20.278448   12884 main.go:141] libmachine: [stdout =====>] : 172.28.217.174
	
	I0501 03:02:20.278448   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:02:20.284859   12884 main.go:141] libmachine: Using SSH client type: native
	I0501 03:02:20.285738   12884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.217.174 22 <nil> <nil>}
	I0501 03:02:20.285738   12884 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-136200-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-136200-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-136200-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 03:02:20.445095   12884 main.go:141] libmachine: SSH cmd err, output: <nil>: 
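The /etc/hosts script above pins the new hostname locally: if no line already ends in ha-136200-m04, it rewrites an existing 127.0.1.1 entry in place with sed, or appends one, so the node can resolve its own name without DNS. Illustrative check on the node:

	out/minikube-windows-amd64.exe ssh -p ha-136200 -n m04 "grep 127.0.1.1 /etc/hosts"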
	I0501 03:02:20.445095   12884 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0501 03:02:20.445095   12884 buildroot.go:174] setting up certificates
	I0501 03:02:20.445095   12884 provision.go:84] configureAuth start
	I0501 03:02:20.445095   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m04 ).state
	I0501 03:02:22.601669   12884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:02:22.602432   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:02:22.602743   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m04 ).networkadapters[0]).ipaddresses[0]
	I0501 03:02:25.218386   12884 main.go:141] libmachine: [stdout =====>] : 172.28.217.174
	
	I0501 03:02:25.218460   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:02:25.218564   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m04 ).state
	I0501 03:02:27.393555   12884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:02:27.393555   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:02:27.394169   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m04 ).networkadapters[0]).ipaddresses[0]
	I0501 03:02:30.014394   12884 main.go:141] libmachine: [stdout =====>] : 172.28.217.174
	
	I0501 03:02:30.014571   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:02:30.014571   12884 provision.go:143] copyHostCerts
	I0501 03:02:30.014778   12884 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0501 03:02:30.015198   12884 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0501 03:02:30.015198   12884 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0501 03:02:30.015365   12884 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0501 03:02:30.016507   12884 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0501 03:02:30.017201   12884 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0501 03:02:30.017281   12884 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0501 03:02:30.017443   12884 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0501 03:02:30.018243   12884 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0501 03:02:30.018775   12884 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0501 03:02:30.018775   12884 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0501 03:02:30.019320   12884 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0501 03:02:30.020088   12884 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-136200-m04 san=[127.0.0.1 172.28.217.174 ha-136200-m04 localhost minikube]
	I0501 03:02:30.354902   12884 provision.go:177] copyRemoteCerts
	I0501 03:02:30.369901   12884 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 03:02:30.369901   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m04 ).state
	I0501 03:02:32.574853   12884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:02:32.574853   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:02:32.575477   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m04 ).networkadapters[0]).ipaddresses[0]
	I0501 03:02:35.203870   12884 main.go:141] libmachine: [stdout =====>] : 172.28.217.174
	
	I0501 03:02:35.204935   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:02:35.204935   12884 sshutil.go:53] new ssh client: &{IP:172.28.217.174 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m04\id_rsa Username:docker}
	I0501 03:02:35.310879   12884 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9407893s)
	I0501 03:02:35.310879   12884 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0501 03:02:35.310879   12884 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 03:02:35.362087   12884 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0501 03:02:35.362795   12884 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0501 03:02:35.418015   12884 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0501 03:02:35.419344   12884 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0501 03:02:35.475425   12884 provision.go:87] duration metric: took 15.0302173s to configureAuth
	I0501 03:02:35.475489   12884 buildroot.go:189] setting minikube options for container-runtime
	I0501 03:02:35.475732   12884 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 03:02:35.476300   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m04 ).state
	I0501 03:02:37.686568   12884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:02:37.686653   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:02:37.686734   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m04 ).networkadapters[0]).ipaddresses[0]
	I0501 03:02:40.370565   12884 main.go:141] libmachine: [stdout =====>] : 172.28.217.174
	
	I0501 03:02:40.371558   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:02:40.378562   12884 main.go:141] libmachine: Using SSH client type: native
	I0501 03:02:40.378980   12884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.217.174 22 <nil> <nil>}
	I0501 03:02:40.378980   12884 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0501 03:02:40.522215   12884 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0501 03:02:40.522215   12884 buildroot.go:70] root file system type: tmpfs
	I0501 03:02:40.522215   12884 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0501 03:02:40.522215   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m04 ).state
	I0501 03:02:42.734939   12884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:02:42.734939   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:02:42.735787   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m04 ).networkadapters[0]).ipaddresses[0]
	I0501 03:02:45.401723   12884 main.go:141] libmachine: [stdout =====>] : 172.28.217.174
	
	I0501 03:02:45.402042   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:02:45.409674   12884 main.go:141] libmachine: Using SSH client type: native
	I0501 03:02:45.410295   12884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.217.174 22 <nil> <nil>}
	I0501 03:02:45.413066   12884 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0501 03:02:45.586237   12884 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0501 03:02:45.586237   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m04 ).state
	I0501 03:02:47.817885   12884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:02:47.817987   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:02:47.817987   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m04 ).networkadapters[0]).ipaddresses[0]
	I0501 03:02:50.573849   12884 main.go:141] libmachine: [stdout =====>] : 172.28.217.174
	
	I0501 03:02:50.573849   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:02:50.580151   12884 main.go:141] libmachine: Using SSH client type: native
	I0501 03:02:50.580687   12884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.217.174 22 <nil> <nil>}
	I0501 03:02:50.580815   12884 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0501 03:02:52.819873   12884 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
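The diff || { mv; systemctl; } idiom keeps the unit install idempotent: docker.service.new only replaces the live unit (followed by daemon-reload, enable and restart) when the content differs, and the "can't stat" here just means this is the first install on the fresh m04. The active unit can be reviewed afterwards (illustrative):

	out/minikube-windows-amd64.exe ssh -p ha-136200 -n m04 "sudo systemctl cat docker"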
	
	I0501 03:02:52.820451   12884 machine.go:97] duration metric: took 47.5149793s to provisionDockerMachine
	I0501 03:02:52.820451   12884 client.go:171] duration metric: took 2m0.6098169s to LocalClient.Create
	I0501 03:02:52.820451   12884 start.go:167] duration metric: took 2m0.6098169s to libmachine.API.Create "ha-136200"
	I0501 03:02:52.820451   12884 start.go:293] postStartSetup for "ha-136200-m04" (driver="hyperv")
	I0501 03:02:52.820451   12884 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 03:02:52.835125   12884 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 03:02:52.835125   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m04 ).state
	I0501 03:02:55.065354   12884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:02:55.065354   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:02:55.065419   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m04 ).networkadapters[0]).ipaddresses[0]
	I0501 03:02:57.740140   12884 main.go:141] libmachine: [stdout =====>] : 172.28.217.174
	
	I0501 03:02:57.740913   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:02:57.741792   12884 sshutil.go:53] new ssh client: &{IP:172.28.217.174 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m04\id_rsa Username:docker}
	I0501 03:02:57.848310   12884 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0131466s)
	I0501 03:02:57.863310   12884 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 03:02:57.873239   12884 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 03:02:57.873339   12884 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0501 03:02:57.873924   12884 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0501 03:02:57.875111   12884 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> 142882.pem in /etc/ssl/certs
	I0501 03:02:57.875111   12884 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /etc/ssl/certs/142882.pem
	I0501 03:02:57.888872   12884 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 03:02:57.907887   12884 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /etc/ssl/certs/142882.pem (1708 bytes)
	I0501 03:02:57.960150   12884 start.go:296] duration metric: took 5.1396607s for postStartSetup
	I0501 03:02:57.963345   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m04 ).state
	I0501 03:03:00.151163   12884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:03:00.151897   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:03:00.151941   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m04 ).networkadapters[0]).ipaddresses[0]
	I0501 03:03:02.847682   12884 main.go:141] libmachine: [stdout =====>] : 172.28.217.174
	
	I0501 03:03:02.847826   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:03:02.848965   12884 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json ...
	I0501 03:03:02.855616   12884 start.go:128] duration metric: took 2m10.6478767s to createHost
	I0501 03:03:02.855771   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m04 ).state
	I0501 03:03:05.080954   12884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:03:05.080954   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:03:05.080954   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m04 ).networkadapters[0]).ipaddresses[0]
	I0501 03:03:07.759461   12884 main.go:141] libmachine: [stdout =====>] : 172.28.217.174
	
	I0501 03:03:07.759461   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:03:07.766512   12884 main.go:141] libmachine: Using SSH client type: native
	I0501 03:03:07.766974   12884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.217.174 22 <nil> <nil>}
	I0501 03:03:07.766974   12884 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0501 03:03:07.902354   12884 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714532587.891580918
	
	I0501 03:03:07.902354   12884 fix.go:216] guest clock: 1714532587.891580918
	I0501 03:03:07.902354   12884 fix.go:229] Guest: 2024-05-01 03:03:07.891580918 +0000 UTC Remote: 2024-05-01 03:03:02.8556166 +0000 UTC m=+142.651290401 (delta=5.035964318s)
	I0501 03:03:07.902354   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m04 ).state
	I0501 03:03:10.064282   12884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:03:10.064282   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:03:10.064282   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m04 ).networkadapters[0]).ipaddresses[0]
	I0501 03:03:12.695156   12884 main.go:141] libmachine: [stdout =====>] : 172.28.217.174
	
	I0501 03:03:12.696245   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:03:12.703143   12884 main.go:141] libmachine: Using SSH client type: native
	I0501 03:03:12.703143   12884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.217.174 22 <nil> <nil>}
	I0501 03:03:12.703143   12884 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714532587
	I0501 03:03:12.846555   12884 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed May  1 03:03:07 UTC 2024
	
	I0501 03:03:12.846686   12884 fix.go:236] clock set: Wed May  1 03:03:07 UTC 2024
	 (err=<nil>)
	I0501 03:03:12.846686   12884 start.go:83] releasing machines lock for "ha-136200-m04", held for 2m20.6390529s
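The exchange above is the guest-clock fixup: fix.go compares the VM's `date +%s.%N` output (03:03:07.89) against the host-side timestamp recorded when createHost finished (03:03:02.85), measures a 5.035s drift, and resets the guest via `sudo date -s @1714532587`. A minimal Go sketch of that decision, assuming a 2-second tolerance; names and threshold are illustrative, not minikube's actual fix.go code:

    package main

    import (
    	"fmt"
    	"time"
    )

    // clockFixCommand returns the command to run on the guest over SSH when
    // the observed clock drift exceeds the tolerance.
    func clockFixCommand(guest, host time.Time, tolerance time.Duration) (string, bool) {
    	drift := guest.Sub(host)
    	if drift < 0 {
    		drift = -drift
    	}
    	if drift <= tolerance {
    		return "", false // clocks agree closely enough; nothing to do
    	}
    	// Reset the guest to a whole-second Unix timestamp, as in the log.
    	return fmt.Sprintf("sudo date -s @%d", guest.Unix()), true
    }

    func main() {
    	guest := time.Unix(1714532587, 891580918)        // guest clock from the log
    	host := guest.Add(-5035964318 * time.Nanosecond) // delta=5.035964318s
    	if cmd, ok := clockFixCommand(guest, host, 2*time.Second); ok {
    		fmt.Println(cmd) // prints: sudo date -s @1714532587
    	}
    }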
	I0501 03:03:12.846983   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m04 ).state
	I0501 03:03:15.053022   12884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:03:15.053022   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:03:15.053281   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m04 ).networkadapters[0]).ipaddresses[0]
	I0501 03:03:17.746929   12884 main.go:141] libmachine: [stdout =====>] : 172.28.217.174
	
	I0501 03:03:17.746929   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:03:17.751184   12884 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 03:03:17.751184   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m04 ).state
	I0501 03:03:17.766430   12884 ssh_runner.go:195] Run: systemctl --version
	I0501 03:03:17.766430   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m04 ).state
	I0501 03:03:20.049038   12884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:03:20.049173   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:03:20.049173   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m04 ).networkadapters[0]).ipaddresses[0]
	I0501 03:03:20.102322   12884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:03:20.102322   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:03:20.102322   12884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m04 ).networkadapters[0]).ipaddresses[0]
	I0501 03:03:22.846981   12884 main.go:141] libmachine: [stdout =====>] : 172.28.217.174
	
	I0501 03:03:22.847827   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:03:22.848076   12884 sshutil.go:53] new ssh client: &{IP:172.28.217.174 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m04\id_rsa Username:docker}
	I0501 03:03:22.874374   12884 main.go:141] libmachine: [stdout =====>] : 172.28.217.174
	
	I0501 03:03:22.874374   12884 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:03:22.874928   12884 sshutil.go:53] new ssh client: &{IP:172.28.217.174 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m04\id_rsa Username:docker}
	I0501 03:03:22.958179   12884 ssh_runner.go:235] Completed: systemctl --version: (5.19171s)
	I0501 03:03:22.972776   12884 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0501 03:03:23.197726   12884 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.4465017s)
	W0501 03:03:23.197726   12884 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 03:03:23.211738   12884 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 03:03:23.242284   12884 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0501 03:03:23.243282   12884 start.go:494] detecting cgroup driver to use...
	I0501 03:03:23.243580   12884 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 03:03:23.298604   12884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0501 03:03:23.335424   12884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0501 03:03:23.359418   12884 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0501 03:03:23.373336   12884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0501 03:03:23.409916   12884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 03:03:23.447565   12884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0501 03:03:23.490649   12884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 03:03:23.529378   12884 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 03:03:23.579649   12884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0501 03:03:23.620583   12884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0501 03:03:23.656654   12884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
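Each config.toml edit above is one anchored sed substitution run over SSH. For reference, the SystemdCgroup rewrite expressed in Go (input line is assumed; the behavior matches the logged sed expression):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // Same transformation as the logged sed call:
    //   s|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g
    func main() {
    	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
    	in := "    SystemdCgroup = true\n"
    	fmt.Print(re.ReplaceAllString(in, "${1}SystemdCgroup = false"))
    	// Output:     SystemdCgroup = false
    }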
	I0501 03:03:23.696111   12884 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 03:03:23.731469   12884 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 03:03:23.766369   12884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:03:23.983620   12884 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0501 03:03:24.024156   12884 start.go:494] detecting cgroup driver to use...
	I0501 03:03:24.039272   12884 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0501 03:03:24.095695   12884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 03:03:24.139243   12884 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 03:03:24.204193   12884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 03:03:24.247327   12884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 03:03:24.290510   12884 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0501 03:03:24.358899   12884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 03:03:24.385106   12884 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 03:03:24.442851   12884 ssh_runner.go:195] Run: which cri-dockerd
	I0501 03:03:24.464132   12884 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0501 03:03:24.487430   12884 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0501 03:03:24.540915   12884 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0501 03:03:24.772714   12884 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0501 03:03:24.987179   12884 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0501 03:03:24.987179   12884 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
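The 130-byte payload pushed to /etc/docker/daemon.json is what pins the cgroup driver mentioned at docker.go:574. The log does not show its contents, so the following is only a plausible reconstruction using Docker's real exec-opts mechanism; the exact field set minikube writes is an assumption:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // daemonJSON builds a docker daemon.json fragment that pins the cgroup
    // driver. Only "exec-opts" is implied by the log; other fields omitted.
    func daemonJSON(driver string) ([]byte, error) {
    	cfg := map[string]any{
    		"exec-opts": []string{"native.cgroupdriver=" + driver},
    	}
    	return json.MarshalIndent(cfg, "", "  ")
    }

    func main() {
    	b, _ := daemonJSON("cgroupfs")
    	fmt.Println(string(b))
    }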
	I0501 03:03:25.038921   12884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:03:25.255415   12884 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0501 03:04:26.398473   12884 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1425281s)
	I0501 03:04:26.413327   12884 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0501 03:04:26.453673   12884 out.go:177] 
	W0501 03:04:26.457182   12884 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	May 01 03:02:51 ha-136200-m04 systemd[1]: Starting Docker Application Container Engine...
	May 01 03:02:51 ha-136200-m04 dockerd[662]: time="2024-05-01T03:02:51.241701235Z" level=info msg="Starting up"
	May 01 03:02:51 ha-136200-m04 dockerd[662]: time="2024-05-01T03:02:51.243381190Z" level=info msg="containerd not running, starting managed containerd"
	May 01 03:02:51 ha-136200-m04 dockerd[662]: time="2024-05-01T03:02:51.250782589Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=668
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.286294825Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.318236758Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.318298456Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.318384854Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.318404653Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.318546649Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.318568249Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.319116434Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.319286129Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.319312528Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.319326028Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.319443125Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.319894113Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.323315320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.323446116Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.323682210Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.323852905Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.323999101Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.324259294Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.324365091Z" level=info msg="metadata content store policy set" policy=shared
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.347945751Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.348196344Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.348229543Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.348250243Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.348271142Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.348445337Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.348941424Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.349278115Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.349381212Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.349406411Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.349425111Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.349445410Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.349471710Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.349492309Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.349515108Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.349536508Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.349552307Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.349567707Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.349593906Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.349613706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.349631705Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.349649005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.349663904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.349680304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.349695304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.349750202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.349774701Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.349793801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.349809100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.349824800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.349841800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.349861799Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.349887098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.349904398Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.349923497Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.350010095Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.350034294Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.350220489Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.350251788Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.350384185Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.350487782Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.350511081Z" level=info msg="NRI interface is disabled by configuration."
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.350994168Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.351201063Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.351385558Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	May 01 03:02:51 ha-136200-m04 dockerd[668]: time="2024-05-01T03:02:51.351570053Z" level=info msg="containerd successfully booted in 0.067150s"
	May 01 03:02:52 ha-136200-m04 dockerd[662]: time="2024-05-01T03:02:52.325905902Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	May 01 03:02:52 ha-136200-m04 dockerd[662]: time="2024-05-01T03:02:52.360582720Z" level=info msg="Loading containers: start."
	May 01 03:02:52 ha-136200-m04 dockerd[662]: time="2024-05-01T03:02:52.664807313Z" level=info msg="Loading containers: done."
	May 01 03:02:52 ha-136200-m04 dockerd[662]: time="2024-05-01T03:02:52.689929393Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	May 01 03:02:52 ha-136200-m04 dockerd[662]: time="2024-05-01T03:02:52.690079592Z" level=info msg="Daemon has completed initialization"
	May 01 03:02:52 ha-136200-m04 dockerd[662]: time="2024-05-01T03:02:52.807816761Z" level=info msg="API listen on /var/run/docker.sock"
	May 01 03:02:52 ha-136200-m04 systemd[1]: Started Docker Application Container Engine.
	May 01 03:02:52 ha-136200-m04 dockerd[662]: time="2024-05-01T03:02:52.808368256Z" level=info msg="API listen on [::]:2376"
	May 01 03:03:25 ha-136200-m04 dockerd[662]: time="2024-05-01T03:03:25.278969730Z" level=info msg="Processing signal 'terminated'"
	May 01 03:03:25 ha-136200-m04 systemd[1]: Stopping Docker Application Container Engine...
	May 01 03:03:25 ha-136200-m04 dockerd[662]: time="2024-05-01T03:03:25.281231068Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	May 01 03:03:25 ha-136200-m04 dockerd[662]: time="2024-05-01T03:03:25.281703976Z" level=info msg="Daemon shutdown complete"
	May 01 03:03:25 ha-136200-m04 dockerd[662]: time="2024-05-01T03:03:25.281809677Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	May 01 03:03:25 ha-136200-m04 dockerd[662]: time="2024-05-01T03:03:25.281812777Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	May 01 03:03:26 ha-136200-m04 systemd[1]: docker.service: Deactivated successfully.
	May 01 03:03:26 ha-136200-m04 systemd[1]: Stopped Docker Application Container Engine.
	May 01 03:03:26 ha-136200-m04 systemd[1]: Starting Docker Application Container Engine...
	May 01 03:03:26 ha-136200-m04 dockerd[1018]: time="2024-05-01T03:03:26.361231399Z" level=info msg="Starting up"
	May 01 03:04:26 ha-136200-m04 dockerd[1018]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	May 01 03:04:26 ha-136200-m04 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	May 01 03:04:26 ha-136200-m04 systemd[1]: docker.service: Failed with result 'exit-code'.
	May 01 03:04:26 ha-136200-m04 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0501 03:04:26.457182   12884 out.go:239] * 
	W0501 03:04:26.486120   12884 out.go:239] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube_node_8a500d2181d400fd32bfc5983efc601de14422c3_6.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0501 03:04:26.489678   12884 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-windows-amd64.exe node add -p ha-136200 -v=7 --alsologtostderr" : exit status 90
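The exit status 90 (RUNTIME_ENABLE) traces back to dockerd blocking for a full minute on /run/containerd/containerd.sock and giving up, i.e. containerd apparently never came back after the restart at 03:03:23. A minimal triage sketch, assuming the worker VM is still reachable; it shells out to the same binary the test used, via minikube's ssh subcommand and its --node flag:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // sshRun executes one diagnostic command on the worker node over
    // `minikube ssh`. Profile and node names are taken from the log above.
    func sshRun(args ...string) {
    	base := []string{"-p", "ha-136200", "ssh", "-n", "ha-136200-m04", "--"}
    	out, err := exec.Command("out/minikube-windows-amd64.exe", append(base, args...)...).CombinedOutput()
    	fmt.Printf(">>> %v (err=%v)\n%s\n", args, err, out)
    }

    func main() {
    	// Is containerd up, and what did it log while dockerd was waiting?
    	sshRun("sudo", "systemctl", "status", "containerd", "--no-pager")
    	sshRun("sudo", "journalctl", "-u", "containerd", "--no-pager", "-n", "50")
    	// Does the socket dockerd failed to dial exist at all?
    	sshRun("ls", "-l", "/run/containerd/containerd.sock")
    }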
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-136200 -n ha-136200
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-136200 -n ha-136200: (12.4700284s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/AddWorkerNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-136200 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-136200 logs -n 25: (8.9654219s)
helpers_test.go:252: TestMultiControlPlane/serial/AddWorkerNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| image   | functional-869300 image build -t     | functional-869300 | minikube6\jenkins | v1.33.0 | 01 May 24 02:42 UTC | 01 May 24 02:42 UTC |
	|         | localhost/my-image:functional-869300 |                   |                   |         |                     |                     |
	|         | testdata\build --alsologtostderr     |                   |                   |         |                     |                     |
	| image   | functional-869300 image ls           | functional-869300 | minikube6\jenkins | v1.33.0 | 01 May 24 02:42 UTC | 01 May 24 02:42 UTC |
	| delete  | -p functional-869300                 | functional-869300 | minikube6\jenkins | v1.33.0 | 01 May 24 02:46 UTC | 01 May 24 02:47 UTC |
	| start   | -p ha-136200 --wait=true             | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:47 UTC | 01 May 24 02:58 UTC |
	|         | --memory=2200 --ha                   |                   |                   |         |                     |                     |
	|         | -v=7 --alsologtostderr               |                   |                   |         |                     |                     |
	|         | --driver=hyperv                      |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- apply -f             | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | ./testdata/ha/ha-pod-dns-test.yaml   |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- rollout status       | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | deployment/busybox                   |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- get pods -o          | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- get pods -o          | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-2gr4g --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-6mlkh --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-pc6wt --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-2gr4g --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-6mlkh --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-pc6wt --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-2gr4g -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-6mlkh -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-pc6wt -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- get pods -o          | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-2gr4g              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC |                     |
	|         | busybox-fc5497c4f-2gr4g -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.28.208.1            |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-6mlkh              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC |                     |
	|         | busybox-fc5497c4f-6mlkh -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.28.208.1            |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-pc6wt              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC |                     |
	|         | busybox-fc5497c4f-pc6wt -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.28.208.1            |                   |                   |         |                     |                     |
	| node    | add -p ha-136200 -v=7                | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 03:00 UTC |                     |
	|         | --alsologtostderr                    |                   |                   |         |                     |                     |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/01 02:47:19
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
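	
	Every entry below carries that klog-style header. As a hedged illustration (not part of the captured run), a minimal PowerShell parser for the documented format, fed one line from this log as the sample:
	
	    # Parse a klog-style line: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	    $re = '^(?<lvl>[IWEF])(?<mmdd>\d{4}) (?<time>\d{2}:\d{2}:\d{2}\.\d{6})\s+(?<tid>\d+) (?<src>[^:\s]+:\d+)\] (?<msg>.*)$'
	    $line = 'I0501 02:47:19.308853    4712 out.go:291] Setting OutFile to fd 968 ...'
	    if ($line -match $re) {
	        '{0} {1} {2} -> {3}' -f $Matches.lvl, $Matches.time, $Matches.src, $Matches.msg
	    }
	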
	I0501 02:47:19.308853    4712 out.go:291] Setting OutFile to fd 968 ...
	I0501 02:47:19.308853    4712 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:47:19.308853    4712 out.go:304] Setting ErrFile to fd 940...
	I0501 02:47:19.308853    4712 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:47:19.335053    4712 out.go:298] Setting JSON to false
	I0501 02:47:19.338050    4712 start.go:129] hostinfo: {"hostname":"minikube6","uptime":104693,"bootTime":1714426945,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0501 02:47:19.338050    4712 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0501 02:47:19.343676    4712 out.go:177] * [ha-136200] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0501 02:47:19.347056    4712 notify.go:220] Checking for updates...
	I0501 02:47:19.349570    4712 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0501 02:47:19.352627    4712 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0501 02:47:19.356010    4712 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0501 02:47:19.359527    4712 out.go:177]   - MINIKUBE_LOCATION=18779
	I0501 02:47:19.364982    4712 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0501 02:47:19.368342    4712 driver.go:392] Setting default libvirt URI to qemu:///system
	I0501 02:47:24.771909    4712 out.go:177] * Using the hyperv driver based on user configuration
	I0501 02:47:24.777503    4712 start.go:297] selected driver: hyperv
	I0501 02:47:24.777503    4712 start.go:901] validating driver "hyperv" against <nil>
	I0501 02:47:24.777503    4712 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0501 02:47:24.830749    4712 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0501 02:47:24.832155    4712 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 02:47:24.832679    4712 cni.go:84] Creating CNI manager for ""
	I0501 02:47:24.832679    4712 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0501 02:47:24.832679    4712 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0501 02:47:24.832944    4712 start.go:340] cluster config:
	{Name:ha-136200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:47:24.832944    4712 iso.go:125] acquiring lock: {Name:mkc5178610d1c169635b8b232f2713c359020679 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 02:47:24.837439    4712 out.go:177] * Starting "ha-136200" primary control-plane node in "ha-136200" cluster
	I0501 02:47:24.839631    4712 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0501 02:47:24.839631    4712 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0501 02:47:24.839631    4712 cache.go:56] Caching tarball of preloaded images
	I0501 02:47:24.840411    4712 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0501 02:47:24.840411    4712 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0501 02:47:24.841147    4712 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json ...
	I0501 02:47:24.841147    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json: {Name:mk622c10e63d8ff69d285ce96c3e57bfbed6a54d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:47:24.842583    4712 start.go:360] acquireMachinesLock for ha-136200: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 02:47:24.842583    4712 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-136200"
	I0501 02:47:24.843334    4712 start.go:93] Provisioning new machine with config: &{Name:ha-136200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0501 02:47:24.843334    4712 start.go:125] createHost starting for "" (driver="hyperv")
	I0501 02:47:24.845982    4712 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0501 02:47:24.845982    4712 start.go:159] libmachine.API.Create for "ha-136200" (driver="hyperv")
	I0501 02:47:24.845982    4712 client.go:168] LocalClient.Create starting
	I0501 02:47:24.847217    4712 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0501 02:47:24.847395    4712 main.go:141] libmachine: Decoding PEM data...
	I0501 02:47:24.847395    4712 main.go:141] libmachine: Parsing certificate...
	I0501 02:47:24.847705    4712 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0501 02:47:24.848012    4712 main.go:141] libmachine: Decoding PEM data...
	I0501 02:47:24.848048    4712 main.go:141] libmachine: Parsing certificate...
	I0501 02:47:24.848190    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0501 02:47:27.058462    4712 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0501 02:47:27.058678    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:27.058786    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0501 02:47:28.892262    4712 main.go:141] libmachine: [stdout =====>] : False
	
	I0501 02:47:28.892262    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:28.892262    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0501 02:47:30.440921    4712 main.go:141] libmachine: [stdout =====>] : True
	
	I0501 02:47:30.440921    4712 main.go:141] libmachine: [stderr =====>] : 
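	
	The two probes above are how the hyperv driver decides it may manage VMs: membership in the local Hyper-V Administrators group (well-known SID S-1-5-32-578) came back False, and membership in the built-in Administrators role came back True, so the driver proceeds. A standalone sketch of the same checks, assuming only stock .NET types:
	
	    # Mirror of the logged elevation probes.
	    $principal = [Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()
	    # Hyper-V Administrators group (well-known SID S-1-5-32-578) -> False in this run
	    $hvAdmin = $principal.IsInRole([Security.Principal.SecurityIdentifier]::new('S-1-5-32-578'))
	    # Built-in Administrator role -> True in this run
	    $admin = $principal.IsInRole([Security.Principal.WindowsBuiltInRole]::Administrator)
	    "HyperVAdmins=$hvAdmin Administrators=$admin"
	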
	I0501 02:47:30.441172    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0501 02:47:34.074968    4712 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0501 02:47:34.075096    4712 main.go:141] libmachine: [stderr =====>] : 
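	
	For reference, the switch-discovery query is plain PowerShell and can be replayed by hand; it keeps any External switch plus the Default Switch matched by its fixed Id (the GUID below is the one from this run):
	
	    # Same pipeline as the logged command: emit candidate switches as JSON.
	    [Console]::OutputEncoding = [Text.Encoding]::UTF8
	    ConvertTo-Json @(
	        Hyper-V\Get-VMSwitch |
	            Select-Object Id, Name, SwitchType |
	            Where-Object { ($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444') } |
	            Sort-Object -Property SwitchType
	    )
	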
	I0501 02:47:34.077782    4712 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso...
	I0501 02:47:34.612276    4712 main.go:141] libmachine: Creating SSH key...
	I0501 02:47:34.775454    4712 main.go:141] libmachine: Creating VM...
	I0501 02:47:34.775454    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0501 02:47:37.663991    4712 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0501 02:47:37.664390    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:37.664515    4712 main.go:141] libmachine: Using switch "Default Switch"
	I0501 02:47:37.664599    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0501 02:47:39.498071    4712 main.go:141] libmachine: [stdout =====>] : True
	
	I0501 02:47:39.498234    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:39.498234    4712 main.go:141] libmachine: Creating VHD
	I0501 02:47:39.498234    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\fixed.vhd' -SizeBytes 10MB -Fixed
	I0501 02:47:43.230384    4712 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 2B9E163F-083E-4714-9C44-9A52BE438E53
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0501 02:47:43.231369    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:43.231468    4712 main.go:141] libmachine: Writing magic tar header
	I0501 02:47:43.231550    4712 main.go:141] libmachine: Writing SSH key tar header
	I0501 02:47:43.241482    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\disk.vhd' -VHDType Dynamic -DeleteSource
	I0501 02:47:46.427724    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:47:46.427724    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:46.427724    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\disk.vhd' -SizeBytes 20000MB
	I0501 02:47:48.971690    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:47:48.971690    4712 main.go:141] libmachine: [stderr =====>] : 
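	
	The VHD sequence above is a libmachine trick: create a tiny fixed-size VHD whose flat layout lets the SSH key be written into it as a raw tar stream, then convert it to a dynamic disk and grow it to the requested size. Condensed as a sketch (paths and sizes are the ones logged in this run):
	
	    $dir = 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200'
	    # 1. Small fixed VHD: a flat file the driver can write a tar header + SSH key into.
	    Hyper-V\New-VHD -Path "$dir\fixed.vhd" -SizeBytes 10MB -Fixed
	    # (the "Writing magic tar header" / "Writing SSH key tar header" steps happen here)
	    # 2. Convert to a dynamic disk, deleting the fixed source.
	    Hyper-V\Convert-VHD -Path "$dir\fixed.vhd" -DestinationPath "$dir\disk.vhd" -VHDType Dynamic -DeleteSource
	    # 3. Grow the dynamic disk to the requested 20000MB.
	    Hyper-V\Resize-VHD -Path "$dir\disk.vhd" -SizeBytes 20000MB
	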
	I0501 02:47:48.971981    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-136200 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0501 02:47:52.766292    4712 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-136200 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0501 02:47:52.766504    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:52.766592    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-136200 -DynamicMemoryEnabled $false
	I0501 02:47:54.972628    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:47:54.972799    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:54.972799    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-136200 -Count 2
	I0501 02:47:57.167635    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:47:57.168510    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:57.168510    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-136200 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\boot2docker.iso'
	I0501 02:47:59.728585    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:47:59.729288    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:59.729288    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-136200 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\disk.vhd'
	I0501 02:48:02.387014    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:48:02.387925    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:02.387925    4712 main.go:141] libmachine: Starting VM...
	I0501 02:48:02.387925    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-136200
	I0501 02:48:05.442902    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:48:05.442902    4712 main.go:141] libmachine: [stderr =====>] : 
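	
	Altogether the VM is assembled and booted with a handful of Hyper-V cmdlets; the sketch below restates the logged sequence one-for-one:
	
	    $name = 'ha-136200'
	    $dir  = 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200'
	    Hyper-V\New-VM $name -Path $dir -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	    Hyper-V\Set-VMMemory -VMName $name -DynamicMemoryEnabled $false   # pin at 2200MB
	    Hyper-V\Set-VMProcessor $name -Count 2
	    Hyper-V\Set-VMDvdDrive -VMName $name -Path "$dir\boot2docker.iso" # boot ISO
	    Hyper-V\Add-VMHardDiskDrive -VMName $name -Path "$dir\disk.vhd"   # data disk
	    Hyper-V\Start-VM $name
	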
	I0501 02:48:05.442902    4712 main.go:141] libmachine: Waiting for host to start...
	I0501 02:48:05.442902    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:07.690543    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:07.691267    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:07.691267    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:10.234874    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:48:10.234874    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:11.244005    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:13.447426    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:13.447426    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:13.447532    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:16.003794    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:48:16.003794    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:17.014251    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:19.230596    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:19.230596    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:19.231015    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:21.786798    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:48:21.786798    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:22.791035    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:24.970362    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:24.970583    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:24.970826    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:27.538082    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:48:27.539108    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:28.540044    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:30.691694    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:30.691694    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:30.692065    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:33.315166    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:48:33.315166    4712 main.go:141] libmachine: [stderr =====>] : 
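	
	The repeated state/ipaddresses pairs above are a poll loop: DHCP on the Default Switch takes roughly 25 seconds in this run before the first adapter reports an address. A minimal equivalent, assuming the VM name from this log:
	
	    # Poll until the VM is Running and its first adapter has an address.
	    $name = 'ha-136200'
	    do {
	        Start-Sleep -Seconds 1
	        $state = (Hyper-V\Get-VM $name).State
	        $ip    = ((Hyper-V\Get-VM $name).NetworkAdapters[0]).IPAddresses[0]
	    } until ($state -eq 'Running' -and $ip)
	    "$name is $state at $ip"
	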
	I0501 02:48:33.315400    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:35.453800    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:35.453800    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:35.454723    4712 machine.go:94] provisionDockerMachine start ...
	I0501 02:48:35.454940    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:37.590850    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:37.591294    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:37.591378    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:40.152942    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:48:40.153017    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:40.158939    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:48:40.170076    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.217.218 22 <nil> <nil>}
	I0501 02:48:40.170076    4712 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 02:48:40.311850    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0501 02:48:40.311938    4712 buildroot.go:166] provisioning hostname "ha-136200"
	I0501 02:48:40.312011    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:42.387259    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:42.387259    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:42.388241    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:44.941487    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:48:44.942306    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:44.948681    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:48:44.949642    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.217.218 22 <nil> <nil>}
	I0501 02:48:44.949718    4712 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-136200 && echo "ha-136200" | sudo tee /etc/hostname
	I0501 02:48:45.123416    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-136200
	
	I0501 02:48:45.123490    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:47.247911    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:47.247911    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:47.248892    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:49.912733    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:48:49.912733    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:49.920164    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:48:49.920164    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.217.218 22 <nil> <nil>}
	I0501 02:48:49.920749    4712 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-136200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-136200/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-136200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 02:48:50.089597    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 02:48:50.089597    4712 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0501 02:48:50.089597    4712 buildroot.go:174] setting up certificates
	I0501 02:48:50.090153    4712 provision.go:84] configureAuth start
	I0501 02:48:50.090240    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:52.251893    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:52.251893    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:52.251893    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:54.810990    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:48:54.810990    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:54.811881    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:56.907196    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:56.907196    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:56.907196    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:59.487351    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:48:59.487402    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:59.487402    4712 provision.go:143] copyHostCerts
	I0501 02:48:59.487402    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0501 02:48:59.487402    4712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0501 02:48:59.487402    4712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0501 02:48:59.488365    4712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0501 02:48:59.489448    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0501 02:48:59.489632    4712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0501 02:48:59.489632    4712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0501 02:48:59.489632    4712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0501 02:48:59.490981    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0501 02:48:59.491187    4712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0501 02:48:59.491187    4712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0501 02:48:59.491187    4712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0501 02:48:59.492726    4712 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-136200 san=[127.0.0.1 172.28.217.218 ha-136200 localhost minikube]
	I0501 02:48:59.577887    4712 provision.go:177] copyRemoteCerts
	I0501 02:48:59.596375    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 02:48:59.597286    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:01.699383    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:01.699383    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:01.699540    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:04.258891    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:04.258891    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:04.259427    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:49:04.371852    4712 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7744315s)
	I0501 02:49:04.371852    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0501 02:49:04.371852    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 02:49:04.422302    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0501 02:49:04.422602    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0501 02:49:04.478176    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0501 02:49:04.478176    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0501 02:49:04.532091    4712 provision.go:87] duration metric: took 14.4416362s to configureAuth
	I0501 02:49:04.532091    4712 buildroot.go:189] setting minikube options for container-runtime
	I0501 02:49:04.532690    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:49:04.532690    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:06.623956    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:06.623956    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:06.624197    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:09.238280    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:09.238979    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:09.245381    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:49:09.246060    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.217.218 22 <nil> <nil>}
	I0501 02:49:09.246060    4712 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0501 02:49:09.397759    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0501 02:49:09.397835    4712 buildroot.go:70] root file system type: tmpfs
	I0501 02:49:09.398290    4712 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0501 02:49:09.398464    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:11.514026    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:11.514026    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:11.514582    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:14.050483    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:14.050483    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:14.057033    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:49:14.057033    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.217.218 22 <nil> <nil>}
	I0501 02:49:14.057589    4712 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0501 02:49:14.242724    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0501 02:49:14.242724    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:16.392645    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:16.392645    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:16.392749    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:18.993701    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:18.994302    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:19.000048    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:49:19.000537    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.217.218 22 <nil> <nil>}
	I0501 02:49:19.000616    4712 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0501 02:49:21.256124    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0501 02:49:21.256675    4712 machine.go:97] duration metric: took 45.8016127s to provisionDockerMachine
	I0501 02:49:21.256675    4712 client.go:171] duration metric: took 1m56.4098314s to LocalClient.Create
	I0501 02:49:21.256737    4712 start.go:167] duration metric: took 1m56.4098939s to libmachine.API.Create "ha-136200"
	I0501 02:49:21.256791    4712 start.go:293] postStartSetup for "ha-136200" (driver="hyperv")
	I0501 02:49:21.256828    4712 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 02:49:21.271031    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 02:49:21.271031    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:23.374454    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:23.374634    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:23.374716    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:25.918831    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:25.918831    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:25.919441    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:49:26.030251    4712 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.759185s)
	I0501 02:49:26.044496    4712 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 02:49:26.053026    4712 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 02:49:26.053160    4712 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0501 02:49:26.053600    4712 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0501 02:49:26.054397    4712 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> 142882.pem in /etc/ssl/certs
	I0501 02:49:26.054397    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /etc/ssl/certs/142882.pem
	I0501 02:49:26.070942    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 02:49:26.091568    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /etc/ssl/certs/142882.pem (1708 bytes)
	I0501 02:49:26.143252    4712 start.go:296] duration metric: took 4.8863885s for postStartSetup
	I0501 02:49:26.147080    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:28.257985    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:28.257985    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:28.257985    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:30.792456    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:30.792456    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:30.792456    4712 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json ...
	I0501 02:49:30.796310    4712 start.go:128] duration metric: took 2m5.952044s to createHost
	I0501 02:49:30.796483    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:32.879711    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:32.879711    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:32.880619    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:35.462032    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:35.462032    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:35.468747    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:49:35.469470    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.217.218 22 <nil> <nil>}
	I0501 02:49:35.469470    4712 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0501 02:49:35.611947    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714531775.614259884
	
	I0501 02:49:35.611947    4712 fix.go:216] guest clock: 1714531775.614259884
	I0501 02:49:35.611947    4712 fix.go:229] Guest: 2024-05-01 02:49:35.614259884 +0000 UTC Remote: 2024-05-01 02:49:30.7963907 +0000 UTC m=+131.677772001 (delta=4.817869184s)
	I0501 02:49:35.611947    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:37.726021    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:37.726021    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:37.726021    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:40.253738    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:40.254896    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:40.261655    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:49:40.262498    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.217.218 22 <nil> <nil>}
	I0501 02:49:40.262498    4712 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714531775
	I0501 02:49:40.415406    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed May  1 02:49:35 UTC 2024
	
	I0501 02:49:40.415406    4712 fix.go:236] clock set: Wed May  1 02:49:35 UTC 2024
	 (err=<nil>)
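	
	The 4.8s delta above is what triggers the clock fix: the guest clock is compared with the host's and overwritten via date -s. A hypothetical host-side re-implementation, assuming an OpenSSH client and the machine key path logged in this run (minikube itself goes through its own SSH runner):
	
	    # Hypothetical sketch of the guest-clock sync seen above.
	    $key   = 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa'
	    $guest = [int64](ssh -i $key docker@172.28.217.218 'date +%s')
	    $hostEpoch = [DateTimeOffset]::UtcNow.ToUnixTimeSeconds()
	    if ([math]::Abs($guest - $hostEpoch) -gt 1) {
	        # Rewrite the guest clock with the host's current epoch seconds.
	        ssh -i $key docker@172.28.217.218 "sudo date -s @$hostEpoch"
	    }
	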
	I0501 02:49:40.415406    4712 start.go:83] releasing machines lock for "ha-136200", held for 2m15.5712031s
	I0501 02:49:40.416105    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:42.459145    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:42.459226    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:42.459226    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:45.033478    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:45.034063    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:45.038366    4712 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 02:49:45.038515    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:45.050350    4712 ssh_runner.go:195] Run: cat /version.json
	I0501 02:49:45.050350    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:47.229701    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:47.229701    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:47.230427    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:47.254252    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:47.254469    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:47.254558    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:49.922691    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:49.922867    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:49.923261    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:49:49.950446    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:49.950446    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:49.951021    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:49:50.022867    4712 ssh_runner.go:235] Completed: cat /version.json: (4.9724804s)
	I0501 02:49:50.037446    4712 ssh_runner.go:195] Run: systemctl --version
	I0501 02:49:50.123463    4712 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0850592s)
	I0501 02:49:50.137756    4712 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 02:49:50.147834    4712 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 02:49:50.164262    4712 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 02:49:50.197825    4712 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0501 02:49:50.197877    4712 start.go:494] detecting cgroup driver to use...
	I0501 02:49:50.197877    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 02:49:50.246918    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0501 02:49:50.281929    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0501 02:49:50.303725    4712 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0501 02:49:50.317480    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0501 02:49:50.354607    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 02:49:50.392927    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0501 02:49:50.426684    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 02:49:50.464924    4712 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 02:49:50.501540    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0501 02:49:50.541276    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0501 02:49:50.576278    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0501 02:49:50.614209    4712 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 02:49:50.653144    4712 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 02:49:50.688395    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:49:50.921067    4712 ssh_runner.go:195] Run: sudo systemctl restart containerd
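Annotation: the run of sed commands above rewrites /etc/containerd/config.toml in place: the pause image is pinned to registry.k8s.io/pause:3.9, SystemdCgroup is forced to false (cgroupfs), legacy runtime names are mapped to io.containerd.runc.v2, and conf_dir is pointed at /etc/cni/net.d, before containerd is restarted. A minimal Go sketch of the same line-oriented rewrite; illustrative only, since minikube itself shells out to sed as logged:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // Mirrors two of the sed rules from the log as Go regexp replacements.
    func main() {
    	conf := `[plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true`

    	rules := []struct{ re, repl string }{
    		{`(?m)^( *)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.9"`},
    		{`(?m)^( *)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`},
    	}
    	for _, r := range rules {
    		conf = regexp.MustCompile(r.re).ReplaceAllString(conf, r.repl)
    	}
    	fmt.Println(conf)
    }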
	I0501 02:49:50.960389    4712 start.go:494] detecting cgroup driver to use...
	I0501 02:49:50.974435    4712 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0501 02:49:51.020319    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 02:49:51.063731    4712 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 02:49:51.113242    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 02:49:51.154151    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 02:49:51.196182    4712 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0501 02:49:51.267621    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 02:49:51.297018    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 02:49:51.359019    4712 ssh_runner.go:195] Run: which cri-dockerd
	I0501 02:49:51.382845    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0501 02:49:51.408532    4712 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0501 02:49:51.459482    4712 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0501 02:49:51.703156    4712 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0501 02:49:51.928842    4712 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0501 02:49:51.928842    4712 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0501 02:49:51.985157    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:49:52.205484    4712 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0501 02:49:54.768628    4712 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5631253s)
	I0501 02:49:54.782717    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0501 02:49:54.821909    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0501 02:49:54.861989    4712 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0501 02:49:55.097455    4712 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0501 02:49:55.325878    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:49:55.547674    4712 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0501 02:49:55.604800    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0501 02:49:55.648909    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:49:55.873886    4712 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0501 02:49:55.987252    4712 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0501 02:49:56.000254    4712 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0501 02:49:56.009412    4712 start.go:562] Will wait 60s for crictl version
	I0501 02:49:56.021925    4712 ssh_runner.go:195] Run: which crictl
	I0501 02:49:56.041055    4712 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 02:49:56.111426    4712 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0501 02:49:56.124879    4712 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0501 02:49:56.172644    4712 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0501 02:49:56.210144    4712 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0501 02:49:56.210144    4712 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0501 02:49:56.214663    4712 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0501 02:49:56.214663    4712 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0501 02:49:56.214663    4712 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0501 02:49:56.214663    4712 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:88:d7:f1 Flags:up|broadcast|multicast|running}
	I0501 02:49:56.218539    4712 ip.go:210] interface addr: fe80::916c:67e8:6e10:a19b/64
	I0501 02:49:56.218539    4712 ip.go:210] interface addr: 172.28.208.1/20
	I0501 02:49:56.231590    4712 ssh_runner.go:195] Run: grep 172.28.208.1	host.minikube.internal$ /etc/hosts
	I0501 02:49:56.237056    4712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
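Annotation: the bash one-liner above is an idempotent /etc/hosts update: filter out any existing host.minikube.internal record, append the current gateway address (172.28.208.1, the Default Switch interface found just before), and copy the result back. The same pattern in Go; `ensureHostRecord` is a hypothetical helper, not minikube's code:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // ensureHostRecord drops any existing line ending in "\t<name>" and
    // appends "<ip>\t<name>", reproducing the grep -v / echo pipeline.
    func ensureHostRecord(hosts, ip, name string) string {
    	var out []string
    	for _, line := range strings.Split(hosts, "\n") {
    		if !strings.HasSuffix(line, "\t"+name) {
    			out = append(out, line)
    		}
    	}
    	return strings.Join(out, "\n") + fmt.Sprintf("%s\t%s\n", ip, name)
    }

    func main() {
    	fmt.Print(ensureHostRecord("127.0.0.1\tlocalhost\n", "172.28.208.1", "host.minikube.internal"))
    }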
	I0501 02:49:56.273064    4712 kubeadm.go:877] updating cluster {Name:ha-136200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP:172.28.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.217.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0501 02:49:56.273064    4712 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0501 02:49:56.283976    4712 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0501 02:49:56.305563    4712 docker.go:685] Got preloaded images: 
	I0501 02:49:56.305585    4712 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0501 02:49:56.319781    4712 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0501 02:49:56.352980    4712 ssh_runner.go:195] Run: which lz4
	I0501 02:49:56.361434    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0501 02:49:56.376111    4712 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0501 02:49:56.383203    4712 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0501 02:49:56.383203    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0501 02:49:58.545920    4712 docker.go:649] duration metric: took 2.1838816s to copy over tarball
	I0501 02:49:58.559153    4712 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0501 02:50:07.024882    4712 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.4656661s)
	I0501 02:50:07.024882    4712 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0501 02:50:07.091273    4712 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0501 02:50:07.117701    4712 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0501 02:50:07.169927    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:50:07.413870    4712 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0501 02:50:10.777827    4712 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.363932s)
	I0501 02:50:10.787955    4712 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0501 02:50:10.813130    4712 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
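Annotation: rather than pulling the eight images above one by one, minikube scp'd a ~360 MB lz4-compressed preload tarball into the guest and unpacked it over /var/lib/docker, then rewrote repositories.json and restarted docker so the daemon picked the layers up. A sketch of just the extraction step, using the exact command the log shows ssh_runner executing (requires tar and lz4 on the guest):

    package main

    import (
    	"log"
    	"os/exec"
    )

    // Runs the same tar invocation as the log's preload extraction step.
    func main() {
    	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
    	if out, err := cmd.CombinedOutput(); err != nil {
    		log.Fatalf("extract failed: %v\n%s", err, out)
    	}
    }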
	I0501 02:50:10.813237    4712 cache_images.go:84] Images are preloaded, skipping loading
	I0501 02:50:10.813237    4712 kubeadm.go:928] updating node { 172.28.217.218 8443 v1.30.0 docker true true} ...
	I0501 02:50:10.813471    4712 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-136200 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.217.218
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP:172.28.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
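Annotation: the kubelet drop-in above pins --hostname-override and --node-ip so the node registers as ha-136200 at 172.28.217.218 regardless of later DHCP churn, and points --kubeconfig/--bootstrap-kubeconfig at the files kubeadm will write. A sketch of rendering that ExecStart line; the template text is an assumption, since minikube assembles this string internally in kubeadm.go:

    package main

    import (
    	"os"
    	"text/template"
    )

    // Renders a trimmed version of the logged ExecStart line.
    var tmpl = template.Must(template.New("kubelet").Parse(
    	"ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --hostname-override={{.Node}} --node-ip={{.IP}}\n"))

    func main() {
    	tmpl.Execute(os.Stdout, map[string]string{
    		"Version": "v1.30.0", "Node": "ha-136200", "IP": "172.28.217.218",
    	})
    }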
	I0501 02:50:10.824528    4712 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0501 02:50:10.865306    4712 cni.go:84] Creating CNI manager for ""
	I0501 02:50:10.865306    4712 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0501 02:50:10.865306    4712 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0501 02:50:10.865306    4712 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.28.217.218 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-136200 NodeName:ha-136200 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.28.217.218"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.28.217.218 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0501 02:50:10.866013    4712 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.28.217.218
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-136200"
	  kubeletExtraArgs:
	    node-ip: 172.28.217.218
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.28.217.218"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
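Annotation: a quick sanity read of the generated config above: the pod subnet (10.244.0.0/16) and service subnet (10.96.0.0/12) must not overlap, and the advertise address 172.28.217.218 has to sit outside both. An illustrative check with Go's net/netip; this is not a kubeadm preflight, just the arithmetic:

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    func main() {
    	pods := netip.MustParsePrefix("10.244.0.0/16")
    	svcs := netip.MustParsePrefix("10.96.0.0/12") // 10.96.0.0 - 10.111.255.255
    	api := netip.MustParseAddr("172.28.217.218")

    	fmt.Println("pod/service overlap:", pods.Overlaps(svcs))     // false
    	fmt.Println("apiserver in pod CIDR:", pods.Contains(api))    // false
    	fmt.Println("apiserver in service CIDR:", svcs.Contains(api)) // false
    }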
	
	I0501 02:50:10.866164    4712 kube-vip.go:111] generating kube-vip config ...
	I0501 02:50:10.879856    4712 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0501 02:50:10.916330    4712 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0501 02:50:10.916590    4712 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.28.223.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
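Annotation: the static pod above runs kube-vip as the HA front door: it leader-elects among control planes (lease duration 5s, renew deadline 3s, retry period 1s), ARPs for the VIP 172.28.223.254, and load-balances port 8443 across apiservers. For ARP-mode VIPs, the address has to live on the node's own L2 segment; both the node IP and the VIP fall inside the Default Switch's 172.28.208.0/20 seen earlier. An illustrative containment check:

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    func main() {
    	// Masked() turns the interface address 172.28.208.1/20 into the
    	// network prefix 172.28.208.0/20 (172.28.208.0 - 172.28.223.255).
    	segment := netip.MustParsePrefix("172.28.208.1/20").Masked()
    	for _, a := range []string{"172.28.217.218", "172.28.223.254"} {
    		fmt.Println(a, "on segment:", segment.Contains(netip.MustParseAddr(a)))
    	}
    }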
	I0501 02:50:10.930144    4712 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 02:50:10.946847    4712 binaries.go:44] Found k8s binaries, skipping transfer
	I0501 02:50:10.960617    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0501 02:50:10.980126    4712 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0501 02:50:11.015010    4712 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 02:50:11.046356    4712 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0501 02:50:11.090122    4712 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0501 02:50:11.151082    4712 ssh_runner.go:195] Run: grep 172.28.223.254	control-plane.minikube.internal$ /etc/hosts
	I0501 02:50:11.158193    4712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.223.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 02:50:11.198290    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:50:11.421704    4712 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 02:50:11.457294    4712 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200 for IP: 172.28.217.218
	I0501 02:50:11.457383    4712 certs.go:194] generating shared ca certs ...
	I0501 02:50:11.457383    4712 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:50:11.458373    4712 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0501 02:50:11.458865    4712 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0501 02:50:11.459136    4712 certs.go:256] generating profile certs ...
	I0501 02:50:11.459821    4712 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\client.key
	I0501 02:50:11.459950    4712 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\client.crt with IP's: []
	I0501 02:50:11.600094    4712 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\client.crt ...
	I0501 02:50:11.600094    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\client.crt: {Name:mkd5e4d205a603f84158daca3df4537a47f4507f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:50:11.601407    4712 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\client.key ...
	I0501 02:50:11.601407    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\client.key: {Name:mk0f41aeab078751e43122e83e5a087fadc50acf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:50:11.602800    4712 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.b080b0c6
	I0501 02:50:11.602800    4712 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.b080b0c6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.217.218 172.28.223.254]
	I0501 02:50:11.735634    4712 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.b080b0c6 ...
	I0501 02:50:11.735634    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.b080b0c6: {Name:mk25daf93f931731761fc26133f1d14b1615ea6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:50:11.736662    4712 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.b080b0c6 ...
	I0501 02:50:11.736662    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.b080b0c6: {Name:mk2e8ec633a20ca6bf6f004cdd1aa2dc02923e7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:50:11.738036    4712 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.b080b0c6 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt
	I0501 02:50:11.750002    4712 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.b080b0c6 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key
	I0501 02:50:11.751999    4712 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key
	I0501 02:50:11.751999    4712 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.crt with IP's: []
	I0501 02:50:11.858892    4712 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.crt ...
	I0501 02:50:11.858892    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.crt: {Name:mk545c7bac57fe0475c68dabf35cf7726f7ba6e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:50:11.860058    4712 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key ...
	I0501 02:50:11.860058    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key: {Name:mk197c02f3ddea53477a395060c41fac8b486d54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
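Annotation: the certs.go lines above reuse the cached minikubeCA/proxyClientCA keys and mint three profile certs (client, apiserver with the VIP and node IP as SANs, aggregator proxy-client). A compact sketch of the generate-and-sign shape, using ECDSA for brevity; minikube's real implementation differs (it lives in its util packages and errors are handled, here they are elided for space):

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"fmt"
    	"math/big"
    	"time"
    )

    func main() {
    	// Self-signed CA, standing in for the cached minikubeCA key pair.
    	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// CA-signed client cert, the "minikube-user" profile cert analogue.
    	cliKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	cliTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
    	}
    	cliDER, _ := x509.CreateCertificate(rand.Reader, cliTmpl, caCert, &cliKey.PublicKey, caKey)
    	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: cliDER})))
    }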
	I0501 02:50:11.861502    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0501 02:50:11.862042    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0501 02:50:11.862321    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0501 02:50:11.862467    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0501 02:50:11.862467    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0501 02:50:11.862467    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0501 02:50:11.862467    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0501 02:50:11.872340    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0501 02:50:11.872340    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem (1338 bytes)
	W0501 02:50:11.873220    4712 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288_empty.pem, impossibly tiny 0 bytes
	I0501 02:50:11.873220    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0501 02:50:11.873220    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0501 02:50:11.873220    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0501 02:50:11.873220    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0501 02:50:11.874220    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem (1708 bytes)
	I0501 02:50:11.874220    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem -> /usr/share/ca-certificates/14288.pem
	I0501 02:50:11.874220    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /usr/share/ca-certificates/142882.pem
	I0501 02:50:11.875212    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:50:11.877013    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 02:50:11.928037    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 02:50:11.975033    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 02:50:12.024768    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0501 02:50:12.069813    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0501 02:50:12.117563    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0501 02:50:12.166940    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 02:50:12.214744    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 02:50:12.264780    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem --> /usr/share/ca-certificates/14288.pem (1338 bytes)
	I0501 02:50:12.314494    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /usr/share/ca-certificates/142882.pem (1708 bytes)
	I0501 02:50:12.357210    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 02:50:12.407402    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0501 02:50:12.460345    4712 ssh_runner.go:195] Run: openssl version
	I0501 02:50:12.486641    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14288.pem && ln -fs /usr/share/ca-certificates/14288.pem /etc/ssl/certs/14288.pem"
	I0501 02:50:12.524534    4712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14288.pem
	I0501 02:50:12.531940    4712 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:27 /usr/share/ca-certificates/14288.pem
	I0501 02:50:12.545887    4712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14288.pem
	I0501 02:50:12.569538    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14288.pem /etc/ssl/certs/51391683.0"
	I0501 02:50:12.603111    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142882.pem && ln -fs /usr/share/ca-certificates/142882.pem /etc/ssl/certs/142882.pem"
	I0501 02:50:12.640545    4712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142882.pem
	I0501 02:50:12.648489    4712 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:27 /usr/share/ca-certificates/142882.pem
	I0501 02:50:12.664745    4712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142882.pem
	I0501 02:50:12.689236    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142882.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 02:50:12.722220    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 02:50:12.763152    4712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:50:12.771274    4712 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:12 /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:50:12.785811    4712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:50:12.809601    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
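Annotation: the ls/openssl/ln sequence above installs each CA into the guest's trust store the way OpenSSL expects: a symlink named <subject-hash>.0 in /etc/ssl/certs (b5213941.0 for minikubeCA here) pointing at the PEM file. A sketch that shells out the same way the log does; `installCA` is a hypothetical helper and the paths need root to actually write:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // installCA asks openssl for the subject hash, then links
    // /etc/ssl/certs/<hash>.0 at the cert, i.e. ln -fs semantics.
    func installCA(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
    	os.Remove(link) // replace an existing link, as -f would
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }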
	I0501 02:50:12.843815    4712 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 02:50:12.851182    4712 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0501 02:50:12.851596    4712 kubeadm.go:391] StartCluster: {Name:ha-136200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP:172.28.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.217.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:50:12.861439    4712 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0501 02:50:12.897822    4712 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0501 02:50:12.930863    4712 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 02:50:12.967142    4712 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 02:50:12.989079    4712 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 02:50:12.989174    4712 kubeadm.go:156] found existing configuration files:
	
	I0501 02:50:13.002144    4712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 02:50:13.022983    4712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 02:50:13.037263    4712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 02:50:13.070061    4712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 02:50:13.088170    4712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 02:50:13.104788    4712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 02:50:13.142331    4712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 02:50:13.161295    4712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 02:50:13.176372    4712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 02:50:13.217242    4712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 02:50:13.236623    4712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 02:50:13.250242    4712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0501 02:50:13.273719    4712 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0501 02:50:13.796086    4712 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0501 02:50:29.771938    4712 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0501 02:50:29.771938    4712 kubeadm.go:309] [preflight] Running pre-flight checks
	I0501 02:50:29.771938    4712 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0501 02:50:29.772562    4712 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0501 02:50:29.772731    4712 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0501 02:50:29.772731    4712 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0501 02:50:29.775841    4712 out.go:204]   - Generating certificates and keys ...
	I0501 02:50:29.775841    4712 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0501 02:50:29.776550    4712 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0501 02:50:29.776704    4712 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0501 02:50:29.776918    4712 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0501 02:50:29.777081    4712 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0501 02:50:29.777278    4712 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0501 02:50:29.777278    4712 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0501 02:50:29.777278    4712 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-136200 localhost] and IPs [172.28.217.218 127.0.0.1 ::1]
	I0501 02:50:29.777278    4712 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0501 02:50:29.777841    4712 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-136200 localhost] and IPs [172.28.217.218 127.0.0.1 ::1]
	I0501 02:50:29.778067    4712 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0501 02:50:29.778150    4712 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0501 02:50:29.778250    4712 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0501 02:50:29.778341    4712 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0501 02:50:29.778421    4712 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0501 02:50:29.778724    4712 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0501 02:50:29.778804    4712 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0501 02:50:29.778987    4712 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0501 02:50:29.779083    4712 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0501 02:50:29.779174    4712 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0501 02:50:29.779418    4712 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0501 02:50:29.781433    4712 out.go:204]   - Booting up control plane ...
	I0501 02:50:29.781433    4712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0501 02:50:29.781986    4712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0501 02:50:29.782154    4712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0501 02:50:29.782509    4712 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0501 02:50:29.782778    4712 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0501 02:50:29.782833    4712 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0501 02:50:29.783188    4712 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0501 02:50:29.783366    4712 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0501 02:50:29.783611    4712 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.012148578s
	I0501 02:50:29.783792    4712 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0501 02:50:29.783792    4712 kubeadm.go:309] [api-check] The API server is healthy after 9.161500426s
	I0501 02:50:29.783792    4712 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0501 02:50:29.784343    4712 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0501 02:50:29.784449    4712 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0501 02:50:29.784907    4712 kubeadm.go:309] [mark-control-plane] Marking the node ha-136200 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0501 02:50:29.785014    4712 kubeadm.go:309] [bootstrap-token] Using token: bebbcj.jj3pub0bsd9tcu95
	I0501 02:50:29.789897    4712 out.go:204]   - Configuring RBAC rules ...
	I0501 02:50:29.789897    4712 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0501 02:50:29.790579    4712 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0501 02:50:29.790579    4712 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0501 02:50:29.791324    4712 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0501 02:50:29.791589    4712 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0501 02:50:29.791711    4712 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0501 02:50:29.791958    4712 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0501 02:50:29.791958    4712 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0501 02:50:29.791958    4712 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0501 02:50:29.791958    4712 kubeadm.go:309] 
	I0501 02:50:29.791958    4712 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0501 02:50:29.791958    4712 kubeadm.go:309] 
	I0501 02:50:29.792580    4712 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0501 02:50:29.792580    4712 kubeadm.go:309] 
	I0501 02:50:29.792580    4712 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0501 02:50:29.792580    4712 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0501 02:50:29.792580    4712 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0501 02:50:29.792580    4712 kubeadm.go:309] 
	I0501 02:50:29.792580    4712 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0501 02:50:29.793244    4712 kubeadm.go:309] 
	I0501 02:50:29.793244    4712 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0501 02:50:29.793244    4712 kubeadm.go:309] 
	I0501 02:50:29.793244    4712 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0501 02:50:29.793244    4712 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0501 02:50:29.793244    4712 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0501 02:50:29.793868    4712 kubeadm.go:309] 
	I0501 02:50:29.794174    4712 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0501 02:50:29.794395    4712 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0501 02:50:29.794428    4712 kubeadm.go:309] 
	I0501 02:50:29.794531    4712 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token bebbcj.jj3pub0bsd9tcu95 \
	I0501 02:50:29.794720    4712 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:4798dcaffc6298c78af5ad06745de18c1231853c65c9db4cd09ab1be96f5e875 \
	I0501 02:50:29.794720    4712 kubeadm.go:309] 	--control-plane 
	I0501 02:50:29.794720    4712 kubeadm.go:309] 
	I0501 02:50:29.794720    4712 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0501 02:50:29.794720    4712 kubeadm.go:309] 
	I0501 02:50:29.794720    4712 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token bebbcj.jj3pub0bsd9tcu95 \
	I0501 02:50:29.795522    4712 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:4798dcaffc6298c78af5ad06745de18c1231853c65c9db4cd09ab1be96f5e875 
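Annotation: the --discovery-token-ca-cert-hash value in the join commands above is a SHA-256 over the DER-encoded SubjectPublicKeyInfo of the cluster CA certificate; joining nodes pin the CA they receive over the bootstrap token against it. A sketch that recomputes it from the CA PEM (path taken from the log; run it against any CA cert you have):

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		fmt.Fprintln(os.Stderr, "no PEM block found")
    		return
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	// kubeadm hashes the DER SubjectPublicKeyInfo, not the whole cert.
    	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }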
	I0501 02:50:29.795582    4712 cni.go:84] Creating CNI manager for ""
	I0501 02:50:29.795642    4712 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0501 02:50:29.798321    4712 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0501 02:50:29.815739    4712 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0501 02:50:29.823882    4712 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0501 02:50:29.823882    4712 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0501 02:50:29.880076    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0501 02:50:30.703674    4712 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0501 02:50:30.720641    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-136200 minikube.k8s.io/updated_at=2024_05_01T02_50_30_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e minikube.k8s.io/name=ha-136200 minikube.k8s.io/primary=true
	I0501 02:50:30.720641    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:30.736553    4712 ops.go:34] apiserver oom_adj: -16
	I0501 02:50:30.914646    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:31.422356    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:31.924569    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:32.422489    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:32.916374    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:33.419951    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:33.922300    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:34.426730    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:34.915815    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:35.415601    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:35.917473    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:36.419572    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:36.923752    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:37.424859    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:37.926096    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:38.415957    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:38.915894    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:39.417286    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:39.917110    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:40.418538    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:40.919363    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:41.420336    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:41.914423    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:42.068730    4712 kubeadm.go:1107] duration metric: took 11.364941s to wait for elevateKubeSystemPrivileges
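Annotation: the burst of `kubectl get sa default` runs at roughly 500 ms intervals above is minikube polling until the default ServiceAccount exists before binding cluster-admin to it; the loop concluded after about 11.4 s. The general poll-until-ready shape in Go; the checker function is a stand-in for shelling out to kubectl:

    package main

    import (
    	"context"
    	"errors"
    	"fmt"
    	"time"
    )

    // waitFor retries ready() on a fixed interval until it succeeds or the
    // context expires, the same pattern the repeated log lines trace out.
    func waitFor(ctx context.Context, interval time.Duration, ready func() bool) error {
    	tick := time.NewTicker(interval)
    	defer tick.Stop()
    	for {
    		if ready() {
    			return nil
    		}
    		select {
    		case <-ctx.Done():
    			return errors.New("timed out waiting for condition")
    		case <-tick.C:
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
    	defer cancel()
    	start := time.Now()
    	_ = waitFor(ctx, 500*time.Millisecond, func() bool { return time.Since(start) > time.Second })
    	fmt.Println("ready after", time.Since(start).Round(100*time.Millisecond))
    }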
	W0501 02:50:42.068870    4712 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0501 02:50:42.068934    4712 kubeadm.go:393] duration metric: took 29.2171223s to StartCluster
	I0501 02:50:42.069035    4712 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:50:42.069065    4712 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0501 02:50:42.070934    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:50:42.072021    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0501 02:50:42.072021    4712 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.28.217.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0501 02:50:42.072021    4712 start.go:240] waiting for startup goroutines ...
	I0501 02:50:42.072021    4712 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0501 02:50:42.072021    4712 addons.go:69] Setting storage-provisioner=true in profile "ha-136200"
	I0501 02:50:42.072578    4712 addons.go:234] Setting addon storage-provisioner=true in "ha-136200"
	I0501 02:50:42.072715    4712 host.go:66] Checking if "ha-136200" exists ...
	I0501 02:50:42.072765    4712 addons.go:69] Setting default-storageclass=true in profile "ha-136200"
	I0501 02:50:42.072820    4712 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-136200"
	I0501 02:50:42.073003    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:50:42.073773    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:50:42.074332    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:50:42.237653    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.28.208.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0501 02:50:42.682536    4712 start.go:946] {"host.minikube.internal": 172.28.208.1} host record injected into CoreDNS's ConfigMap
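The sed pipeline above splices a hosts stanza into the CoreDNS Corefile just ahead of the forward directive (and a log directive after errors), which is how pods resolve host.minikube.internal. Reconstructed from the command itself, the injected stanza is:

        hosts {
           172.28.208.1 host.minikube.internal
           fallthrough
        }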
	I0501 02:50:44.322881    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:50:44.322881    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:44.325924    4712 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 02:50:44.323327    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:50:44.325924    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:44.328653    4712 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 02:50:44.328653    4712 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0501 02:50:44.328653    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:50:44.329300    4712 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0501 02:50:44.330211    4712 kapi.go:59] client config for ha-136200: &rest.Config{Host:"https://172.28.223.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-136200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-136200\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1b95ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0501 02:50:44.331266    4712 cert_rotation.go:137] Starting client certificate rotation controller
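The rest.Config dump above is the kapi client being built from the kubeconfig written a moment earlier. A minimal client-go sketch of the same load-then-construct step, assuming stock k8s.io/client-go; this is illustrative, not minikube's exact kapi.go code:

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path taken from the log lines above.
        cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube6\minikube-integration\kubeconfig`)
        if err != nil {
            panic(err)
        }
        // cfg now carries the host, client cert/key and CA paths seen in the
        // dump; NewForConfig turns it into a typed clientset.
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Println("API server:", cfg.Host, "clientset ready:", clientset != nil)
    }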
	I0501 02:50:44.331692    4712 addons.go:234] Setting addon default-storageclass=true in "ha-136200"
	I0501 02:50:44.331692    4712 host.go:66] Checking if "ha-136200" exists ...
	I0501 02:50:44.332839    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:50:46.572964    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:50:46.572964    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:46.573962    4712 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0501 02:50:46.573962    4712 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0501 02:50:46.573962    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:50:46.693061    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:50:46.693131    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:46.693256    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:50:48.834494    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:50:48.834494    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:48.834701    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:50:49.380882    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:50:49.380882    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:49.381777    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:50:49.540602    4712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 02:50:51.474264    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:50:51.474264    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:51.475208    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:50:51.629340    4712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0501 02:50:51.811276    4712 round_trippers.go:463] GET https://172.28.223.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0501 02:50:51.811902    4712 round_trippers.go:469] Request Headers:
	I0501 02:50:51.811902    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:50:51.811902    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:50:51.826458    4712 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0501 02:50:51.827458    4712 round_trippers.go:463] PUT https://172.28.223.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0501 02:50:51.827458    4712 round_trippers.go:469] Request Headers:
	I0501 02:50:51.827458    4712 round_trippers.go:473]     Content-Type: application/json
	I0501 02:50:51.827458    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:50:51.827458    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:50:51.831221    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:50:51.834740    4712 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0501 02:50:51.838052    4712 addons.go:505] duration metric: took 9.7659586s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0501 02:50:51.838052    4712 start.go:245] waiting for cluster config update ...
	I0501 02:50:51.838052    4712 start.go:254] writing updated cluster config ...
	I0501 02:50:51.841165    4712 out.go:177] 
	I0501 02:50:51.854479    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:50:51.854479    4712 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json ...
	I0501 02:50:51.861940    4712 out.go:177] * Starting "ha-136200-m02" control-plane node in "ha-136200" cluster
	I0501 02:50:51.865640    4712 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0501 02:50:51.865640    4712 cache.go:56] Caching tarball of preloaded images
	I0501 02:50:51.865640    4712 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0501 02:50:51.866174    4712 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0501 02:50:51.866462    4712 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json ...
	I0501 02:50:51.868358    4712 start.go:360] acquireMachinesLock for ha-136200-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 02:50:51.868358    4712 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-136200-m02"
	I0501 02:50:51.869005    4712 start.go:93] Provisioning new machine with config: &{Name:ha-136200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP:172.28.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.217.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0501 02:50:51.869070    4712 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0501 02:50:51.871919    4712 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0501 02:50:51.872184    4712 start.go:159] libmachine.API.Create for "ha-136200" (driver="hyperv")
	I0501 02:50:51.872184    4712 client.go:168] LocalClient.Create starting
	I0501 02:50:51.872730    4712 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0501 02:50:51.872991    4712 main.go:141] libmachine: Decoding PEM data...
	I0501 02:50:51.872991    4712 main.go:141] libmachine: Parsing certificate...
	I0501 02:50:51.872991    4712 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0501 02:50:51.872991    4712 main.go:141] libmachine: Decoding PEM data...
	I0501 02:50:51.872991    4712 main.go:141] libmachine: Parsing certificate...
	I0501 02:50:51.872991    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0501 02:50:53.846039    4712 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0501 02:50:53.846039    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:53.846893    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0501 02:50:55.665592    4712 main.go:141] libmachine: [stdout =====>] : False
	
	I0501 02:50:55.665592    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:55.665592    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0501 02:50:57.208535    4712 main.go:141] libmachine: [stdout =====>] : True
	
	I0501 02:50:57.208535    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:57.208630    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0501 02:51:00.945176    4712 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0501 02:51:00.945176    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:00.949038    4712 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso...
	I0501 02:51:01.496342    4712 main.go:141] libmachine: Creating SSH key...
	I0501 02:51:02.272582    4712 main.go:141] libmachine: Creating VM...
	I0501 02:51:02.272582    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0501 02:51:05.181880    4712 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0501 02:51:05.181880    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:05.182621    4712 main.go:141] libmachine: Using switch "Default Switch"
	I0501 02:51:05.182621    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0501 02:51:07.021151    4712 main.go:141] libmachine: [stdout =====>] : True
	
	I0501 02:51:07.022208    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:07.022208    4712 main.go:141] libmachine: Creating VHD
	I0501 02:51:07.022261    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0501 02:51:10.800515    4712 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : F5C7D5B1-6A19-4B92-8073-0E024A878A77
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0501 02:51:10.800843    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:10.800925    4712 main.go:141] libmachine: Writing magic tar header
	I0501 02:51:10.800925    4712 main.go:141] libmachine: Writing SSH key tar header
	I0501 02:51:10.813657    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0501 02:51:14.013099    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:14.013099    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:14.013713    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\disk.vhd' -SizeBytes 20000MB
	I0501 02:51:16.613734    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:16.613973    4712 main.go:141] libmachine: [stderr =====>] : 
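The 10MB fixed VHD and the "magic tar header" lines above follow the usual docker-machine/boot2docker convention: a raw tar stream carrying the freshly generated SSH key is written, behind a magic marker, straight into the fixed-format VHD; the file is then converted to a dynamic disk and resized to the requested 20000MB, and on first boot the guest is expected to find the marker and extract the key before formatting the rest of the disk.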
	I0501 02:51:16.614122    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-136200-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0501 02:51:20.349642    4712 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-136200-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0501 02:51:20.349642    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:20.349642    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-136200-m02 -DynamicMemoryEnabled $false
	I0501 02:51:22.595804    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:22.595804    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:22.596839    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-136200-m02 -Count 2
	I0501 02:51:24.783891    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:24.783891    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:24.783891    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-136200-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\boot2docker.iso'
	I0501 02:51:27.309419    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:27.309419    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:27.310044    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-136200-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\disk.vhd'
	I0501 02:51:29.998833    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:29.998833    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:29.998833    4712 main.go:141] libmachine: Starting VM...
	I0501 02:51:29.998833    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-136200-m02
	I0501 02:51:33.080959    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:33.080959    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:33.080959    4712 main.go:141] libmachine: Waiting for host to start...
	I0501 02:51:33.080959    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:51:35.347158    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:51:35.348049    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:35.348049    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:51:37.880551    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:37.881580    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:38.889792    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:51:41.091102    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:51:41.091102    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:41.091533    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:51:43.621201    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:43.621813    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:44.622350    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:51:46.859140    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:51:46.859140    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:46.859140    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:51:49.413174    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:49.413174    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:50.423751    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:51:52.633336    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:51:52.633336    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:52.634051    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:51:55.225142    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:55.225142    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:56.229253    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:51:58.424704    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:51:58.424704    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:58.425395    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:01.088984    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:01.088984    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:01.089224    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:03.247035    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:03.247253    4712 main.go:141] libmachine: [stderr =====>] : 
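The alternating Get-VM state / ipaddresses[0] queries above are a poll loop: Hyper-V reports the VM as Running well before the guest NIC has leased an address, so libmachine keeps asking until stdout is non-empty. A self-contained sketch of the same pattern; the helper name and one-second retry interval are assumptions for illustration:

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // ps runs one PowerShell expression non-interactively, as the
    // [executing ==>] lines above do, and returns its stdout.
    func ps(expr string) (string, error) {
        cmd := exec.Command(`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
            "-NoProfile", "-NonInteractive", expr)
        var out bytes.Buffer
        cmd.Stdout = &out
        err := cmd.Run()
        return out.String(), err
    }

    func main() {
        // Poll the first IPv4 address of the VM's first NIC until one appears.
        for {
            out, err := ps(`(( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]`)
            if ip := strings.TrimSpace(out); err == nil && ip != "" {
                fmt.Println("host is up at", ip)
                return
            }
            time.Sleep(time.Second)
        }
    }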
	I0501 02:52:03.247291    4712 machine.go:94] provisionDockerMachine start ...
	I0501 02:52:03.247449    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:05.493082    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:05.493179    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:05.493179    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:08.078374    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:08.078374    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:08.085777    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:52:08.101463    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.142 22 <nil> <nil>}
	I0501 02:52:08.101463    4712 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 02:52:08.244557    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0501 02:52:08.244557    4712 buildroot.go:166] provisioning hostname "ha-136200-m02"
	I0501 02:52:08.244557    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:10.395193    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:10.395193    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:10.396068    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:12.968300    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:12.968300    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:12.975111    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:52:12.975111    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.142 22 <nil> <nil>}
	I0501 02:52:12.975111    4712 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-136200-m02 && echo "ha-136200-m02" | sudo tee /etc/hostname
	I0501 02:52:13.142328    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-136200-m02
	
	I0501 02:52:13.142479    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:15.318537    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:15.318537    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:15.318537    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:17.993085    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:17.993267    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:18.000242    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:52:18.000687    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.142 22 <nil> <nil>}
	I0501 02:52:18.000687    4712 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-136200-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-136200-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-136200-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 02:52:18.164116    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: 
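The script above keeps /etc/hosts in step with the rename: if no entry for ha-136200-m02 exists yet, it rewrites an existing 127.0.1.1 line in place or appends one, so hostname lookups (sudo's "unable to resolve host" warning being the classic symptom) keep working after the change.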
	I0501 02:52:18.164116    4712 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0501 02:52:18.164235    4712 buildroot.go:174] setting up certificates
	I0501 02:52:18.164235    4712 provision.go:84] configureAuth start
	I0501 02:52:18.164235    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:20.323803    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:20.324816    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:20.324954    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:22.884982    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:22.884982    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:22.884982    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:25.037258    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:25.038231    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:25.038262    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:27.637529    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:27.638462    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:27.638462    4712 provision.go:143] copyHostCerts
	I0501 02:52:27.638661    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0501 02:52:27.638979    4712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0501 02:52:27.639093    4712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0501 02:52:27.639613    4712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0501 02:52:27.640827    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0501 02:52:27.641053    4712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0501 02:52:27.641053    4712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0501 02:52:27.641053    4712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0501 02:52:27.642372    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0501 02:52:27.642643    4712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0501 02:52:27.642762    4712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0501 02:52:27.643264    4712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0501 02:52:27.644242    4712 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-136200-m02 san=[127.0.0.1 172.28.213.142 ha-136200-m02 localhost minikube]
	I0501 02:52:27.843189    4712 provision.go:177] copyRemoteCerts
	I0501 02:52:27.855361    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 02:52:27.855361    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:29.952775    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:29.952775    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:29.953607    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:32.549323    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:32.549356    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:32.549913    4712 sshutil.go:53] new ssh client: &{IP:172.28.213.142 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\id_rsa Username:docker}
	I0501 02:52:32.667202    4712 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8118058s)
	I0501 02:52:32.667353    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0501 02:52:32.667882    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0501 02:52:32.721631    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0501 02:52:32.721631    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 02:52:32.771533    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0501 02:52:32.772177    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0501 02:52:32.825532    4712 provision.go:87] duration metric: took 14.6610374s to configureAuth
	I0501 02:52:32.825532    4712 buildroot.go:189] setting minikube options for container-runtime
	I0501 02:52:32.826094    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:52:32.826229    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:34.944371    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:34.945326    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:34.945326    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:37.500533    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:37.500590    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:37.506891    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:52:37.507395    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.142 22 <nil> <nil>}
	I0501 02:52:37.507476    4712 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0501 02:52:37.655757    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0501 02:52:37.655757    4712 buildroot.go:70] root file system type: tmpfs
	I0501 02:52:37.655757    4712 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0501 02:52:37.656297    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:39.802845    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:39.802845    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:39.803012    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:42.365445    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:42.366335    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:42.372086    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:52:42.372086    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.142 22 <nil> <nil>}
	I0501 02:52:42.372086    4712 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.28.217.218"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0501 02:52:42.560633    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.28.217.218
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0501 02:52:42.560698    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:44.723552    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:44.723552    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:44.724351    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:47.350624    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:47.350694    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:47.356560    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:52:47.356887    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.142 22 <nil> <nil>}
	I0501 02:52:47.356887    4712 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0501 02:52:49.658910    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0501 02:52:49.658910    4712 machine.go:97] duration metric: took 46.4112065s to provisionDockerMachine
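The diff -u ... || { mv ...; systemctl ...; } command above makes the unit install idempotent: docker is only re-enabled and restarted when the freshly rendered docker.service differs from what is already on disk. On this brand-new VM no unit exists yet, so diff fails with the can't-stat error and the replacement branch runs, hence the "Created symlink" line.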
	I0501 02:52:49.659442    4712 client.go:171] duration metric: took 1m57.7858628s to LocalClient.Create
	I0501 02:52:49.659442    4712 start.go:167] duration metric: took 1m57.786395s to libmachine.API.Create "ha-136200"
	I0501 02:52:49.659503    4712 start.go:293] postStartSetup for "ha-136200-m02" (driver="hyperv")
	I0501 02:52:49.659537    4712 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 02:52:49.675636    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 02:52:49.675636    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:51.837386    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:51.837492    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:51.837492    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:54.474409    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:54.475041    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:54.475353    4712 sshutil.go:53] new ssh client: &{IP:172.28.213.142 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\id_rsa Username:docker}
	I0501 02:52:54.588525    4712 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9128536s)
	I0501 02:52:54.605879    4712 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 02:52:54.614578    4712 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 02:52:54.614578    4712 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0501 02:52:54.615019    4712 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0501 02:52:54.615983    4712 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> 142882.pem in /etc/ssl/certs
	I0501 02:52:54.616061    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /etc/ssl/certs/142882.pem
	I0501 02:52:54.630716    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 02:52:54.652380    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /etc/ssl/certs/142882.pem (1708 bytes)
	I0501 02:52:54.707179    4712 start.go:296] duration metric: took 5.0475618s for postStartSetup
	I0501 02:52:54.709671    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:56.857631    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:56.857631    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:56.858662    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:59.468337    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:59.468783    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:59.468965    4712 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json ...
	I0501 02:52:59.470910    4712 start.go:128] duration metric: took 2m7.6009059s to createHost
	I0501 02:52:59.471488    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:53:01.642267    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:53:01.642267    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:01.642528    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:53:04.217977    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:53:04.217977    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:04.224906    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:53:04.225471    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.142 22 <nil> <nil>}
	I0501 02:53:04.225635    4712 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0501 02:53:04.373720    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714531984.377348123
	
	I0501 02:53:04.373720    4712 fix.go:216] guest clock: 1714531984.377348123
	I0501 02:53:04.373720    4712 fix.go:229] Guest: 2024-05-01 02:53:04.377348123 +0000 UTC Remote: 2024-05-01 02:52:59.4709109 +0000 UTC m=+340.350757801 (delta=4.906437223s)
	I0501 02:53:04.373851    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:53:06.539924    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:53:06.539924    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:06.540324    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:53:09.204905    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:53:09.204905    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:09.211685    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:53:09.212163    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.142 22 <nil> <nil>}
	I0501 02:53:09.212163    4712 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714531984
	I0501 02:53:09.386381    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed May  1 02:53:04 UTC 2024
	
	I0501 02:53:09.386381    4712 fix.go:236] clock set: Wed May  1 02:53:04 UTC 2024
	 (err=<nil>)
	I0501 02:53:09.386381    4712 start.go:83] releasing machines lock for "ha-136200-m02", held for 2m17.5170158s
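fix.go above reads the guest clock with date +%s.%N, computes the drift against the host (about 4.9s here), and resets the guest with date -s. A minimal sketch of that skew check; the two-second threshold is an assumption, not minikube's actual cutoff:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Parsed from the guest's `date +%s.%N` output above.
        guest := time.Unix(1714531984, 377348123)
        host := time.Now() // the "Remote" timestamp in the log

        delta := guest.Sub(host)
        fmt.Printf("guest-host clock delta: %v\n", delta)
        if delta > 2*time.Second || delta < -2*time.Second {
            // On real drift, run over SSH on the guest:
            //   sudo date -s @<host-epoch-seconds>
            fmt.Println("drift too large, would reset guest clock")
        }
    }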
	I0501 02:53:09.386381    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:53:11.545475    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:53:11.545475    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:11.545938    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:53:14.171918    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:53:14.171918    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:14.175393    4712 out.go:177] * Found network options:
	I0501 02:53:14.178428    4712 out.go:177]   - NO_PROXY=172.28.217.218
	W0501 02:53:14.181305    4712 proxy.go:119] fail to check proxy env: Error ip not in block
	I0501 02:53:14.183961    4712 out.go:177]   - NO_PROXY=172.28.217.218
	W0501 02:53:14.186016    4712 proxy.go:119] fail to check proxy env: Error ip not in block
	W0501 02:53:14.186987    4712 proxy.go:119] fail to check proxy env: Error ip not in block
	I0501 02:53:14.190185    4712 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 02:53:14.190185    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:53:14.201210    4712 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0501 02:53:14.201210    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:53:16.402596    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:53:16.402596    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:16.402596    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:53:16.404382    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:53:16.404922    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:16.404922    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:53:19.202467    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:53:19.202936    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:19.203019    4712 sshutil.go:53] new ssh client: &{IP:172.28.213.142 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\id_rsa Username:docker}
	I0501 02:53:19.238045    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:53:19.238494    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:19.238494    4712 sshutil.go:53] new ssh client: &{IP:172.28.213.142 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\id_rsa Username:docker}
	I0501 02:53:19.303673    4712 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.1023631s)
	W0501 02:53:19.303730    4712 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 02:53:19.322303    4712 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 02:53:19.425813    4712 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.234512s)
	I0501 02:53:19.425813    4712 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0501 02:53:19.425869    4712 start.go:494] detecting cgroup driver to use...
	I0501 02:53:19.426179    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 02:53:19.480110    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0501 02:53:19.516304    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0501 02:53:19.540429    4712 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0501 02:53:19.554725    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0501 02:53:19.592793    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 02:53:19.638122    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0501 02:53:19.676636    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 02:53:19.716798    4712 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 02:53:19.755079    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0501 02:53:19.792962    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0501 02:53:19.828507    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0501 02:53:19.864630    4712 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 02:53:19.900003    4712 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 02:53:19.933687    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:53:20.164043    4712 ssh_runner.go:195] Run: sudo systemctl restart containerd
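
The sed sequence above is how minikube rewrites /etc/containerd/config.toml in place: pin sandbox_image to pause:3.9, force restrict_oom_score_adj off, select the cgroupfs driver by setting SystemdCgroup = false, migrate the legacy runtime names to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d, followed by a daemon-reload and a containerd restart. A minimal Go sketch of the same regex-driven patching, operating on an in-memory copy of the file (the real run executes sed remotely over SSH); the rule list below is abbreviated, not exhaustive:

package main

import (
	"fmt"
	"regexp"
)

// patchContainerdConfig applies the same substitutions as the sed calls
// above, but to an in-memory copy of /etc/containerd/config.toml.
func patchContainerdConfig(conf string) string {
	rules := []struct{ re, repl string }{
		{`(?m)^( *)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.9"`},
		{`(?m)^( *)restrict_oom_score_adj = .*$`, `${1}restrict_oom_score_adj = false`},
		{`(?m)^( *)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`},
		{`"io\.containerd\.runtime\.v1\.linux"`, `"io.containerd.runc.v2"`},
		{`(?m)^( *)conf_dir = .*$`, `${1}conf_dir = "/etc/cni/net.d"`},
	}
	for _, r := range rules {
		conf = regexp.MustCompile(r.re).ReplaceAllString(conf, r.repl)
	}
	return conf
}

func main() {
	in := "    sandbox_image = \"registry.k8s.io/pause:3.8\"\n    SystemdCgroup = true\n"
	fmt.Print(patchContainerdConfig(in))
}
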
	I0501 02:53:20.200981    4712 start.go:494] detecting cgroup driver to use...
	I0501 02:53:20.214486    4712 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0501 02:53:20.252522    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 02:53:20.291404    4712 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 02:53:20.342446    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 02:53:20.384719    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 02:53:20.433485    4712 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0501 02:53:20.493558    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 02:53:20.521863    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 02:53:20.572266    4712 ssh_runner.go:195] Run: which cri-dockerd
	I0501 02:53:20.592650    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0501 02:53:20.612894    4712 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0501 02:53:20.662972    4712 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0501 02:53:20.893661    4712 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0501 02:53:21.103995    4712 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0501 02:53:21.104092    4712 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0501 02:53:21.153897    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:53:21.367769    4712 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0501 02:53:23.926290    4712 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5584356s)
	I0501 02:53:23.942886    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0501 02:53:23.985733    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0501 02:53:24.029327    4712 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0501 02:53:24.262777    4712 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0501 02:53:24.474412    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:53:24.701708    4712 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0501 02:53:24.747995    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0501 02:53:24.789968    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:53:25.013627    4712 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0501 02:53:25.132301    4712 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0501 02:53:25.147412    4712 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0501 02:53:25.161719    4712 start.go:562] Will wait 60s for crictl version
	I0501 02:53:25.177972    4712 ssh_runner.go:195] Run: which crictl
	I0501 02:53:25.198441    4712 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 02:53:25.257309    4712 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0501 02:53:25.270183    4712 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0501 02:53:25.317675    4712 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0501 02:53:25.366446    4712 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0501 02:53:25.369267    4712 out.go:177]   - env NO_PROXY=172.28.217.218
	I0501 02:53:25.371205    4712 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0501 02:53:25.375182    4712 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0501 02:53:25.375182    4712 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0501 02:53:25.375182    4712 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0501 02:53:25.375182    4712 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:88:d7:f1 Flags:up|broadcast|multicast|running}
	I0501 02:53:25.380319    4712 ip.go:210] interface addr: fe80::916c:67e8:6e10:a19b/64
	I0501 02:53:25.380407    4712 ip.go:210] interface addr: 172.28.208.1/20
	I0501 02:53:25.393209    4712 ssh_runner.go:195] Run: grep 172.28.208.1	host.minikube.internal$ /etc/hosts
	I0501 02:53:25.400057    4712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
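
The bash one-liner above is an idempotent hosts-file update: drop any existing host.minikube.internal line, append the fresh mapping, write the result to a temp file, and sudo cp it over /etc/hosts. A sketch of the same filter-then-append pattern in Go, assuming direct local file access instead of the SSH session used here; upsertHostsEntry is a name of my own choosing:

package main

import (
	"os"
	"strings"
)

// upsertHostsEntry removes any stale line ending in "\t<name>" and appends
// a fresh "ip\tname" mapping, mirroring the grep -v / echo pair above.
func upsertHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	_ = upsertHostsEntry("/tmp/hosts.test", "172.28.208.1", "host.minikube.internal")
}
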
	I0501 02:53:25.423648    4712 mustload.go:65] Loading cluster: ha-136200
	I0501 02:53:25.424611    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:53:25.425544    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:53:27.528822    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:53:27.528822    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:27.528822    4712 host.go:66] Checking if "ha-136200" exists ...
	I0501 02:53:27.530295    4712 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200 for IP: 172.28.213.142
	I0501 02:53:27.530371    4712 certs.go:194] generating shared ca certs ...
	I0501 02:53:27.530371    4712 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:53:27.531276    4712 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0501 02:53:27.531739    4712 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0501 02:53:27.531846    4712 certs.go:256] generating profile certs ...
	I0501 02:53:27.532594    4712 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\client.key
	I0501 02:53:27.532748    4712 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.e4130e12
	I0501 02:53:27.532985    4712 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.e4130e12 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.217.218 172.28.213.142 172.28.223.254]
	I0501 02:53:27.709722    4712 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.e4130e12 ...
	I0501 02:53:27.709722    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.e4130e12: {Name:mk2a82749362965014fb3e2d8d662f7a4e7e9cdd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:53:27.711740    4712 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.e4130e12 ...
	I0501 02:53:27.711740    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.e4130e12: {Name:mkb73c4ed44f1dd1b8f82d46a1302578ac6f367b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:53:27.712120    4712 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.e4130e12 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt
	I0501 02:53:27.726267    4712 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.e4130e12 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key
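
A new apiserver serving certificate is generated here because its SAN list must cover the in-cluster service IP (10.96.0.1), localhost, both control-plane node IPs, and the HA virtual IP 172.28.223.254. A standard-library sketch of issuing a CA-signed certificate with exactly those IP SANs; the throwaway CA, key size, and validity window are illustrative assumptions, not minikube's actual parameters (errors elided for brevity):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA; minikube instead reuses the cached minikubeCA key pair.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving cert whose IP SANs cover the service VIP, localhost, HA VIP and node IPs.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("172.28.217.218"), net.ParseIP("172.28.213.142"), net.ParseIP("172.28.223.254"),
		},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
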
	I0501 02:53:27.727349    4712 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key
	I0501 02:53:27.727349    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0501 02:53:27.727349    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0501 02:53:27.728383    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0501 02:53:27.728582    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0501 02:53:27.728825    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0501 02:53:27.729015    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0501 02:53:27.729253    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0501 02:53:27.729653    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0501 02:53:27.729899    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem (1338 bytes)
	W0501 02:53:27.730538    4712 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288_empty.pem, impossibly tiny 0 bytes
	I0501 02:53:27.730538    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0501 02:53:27.730927    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0501 02:53:27.731437    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0501 02:53:27.731866    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0501 02:53:27.732310    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem (1708 bytes)
	I0501 02:53:27.732905    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:53:27.733131    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem -> /usr/share/ca-certificates/14288.pem
	I0501 02:53:27.733384    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /usr/share/ca-certificates/142882.pem
	I0501 02:53:27.733671    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:53:29.906327    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:53:29.906327    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:29.906678    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:53:32.469869    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:53:32.469869    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:32.470407    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:53:32.580880    4712 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0501 02:53:32.588963    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0501 02:53:32.624993    4712 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0501 02:53:32.635801    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0501 02:53:32.670832    4712 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0501 02:53:32.678812    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0501 02:53:32.713791    4712 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0501 02:53:32.721308    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0501 02:53:32.760244    4712 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0501 02:53:32.767565    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0501 02:53:32.804387    4712 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0501 02:53:32.811905    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0501 02:53:32.832394    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 02:53:32.885891    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 02:53:32.936137    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 02:53:32.994824    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0501 02:53:33.054042    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0501 02:53:33.105998    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0501 02:53:33.156026    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 02:53:33.205426    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 02:53:33.264385    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 02:53:33.316776    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem --> /usr/share/ca-certificates/14288.pem (1338 bytes)
	I0501 02:53:33.368293    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /usr/share/ca-certificates/142882.pem (1708 bytes)
	I0501 02:53:33.420965    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0501 02:53:33.458001    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0501 02:53:33.499072    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0501 02:53:33.534603    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0501 02:53:33.570373    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0501 02:53:33.602430    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0501 02:53:33.635495    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0501 02:53:33.684802    4712 ssh_runner.go:195] Run: openssl version
	I0501 02:53:33.709070    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 02:53:33.743711    4712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:53:33.750970    4712 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:12 /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:53:33.765746    4712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:53:33.787709    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 02:53:33.828429    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14288.pem && ln -fs /usr/share/ca-certificates/14288.pem /etc/ssl/certs/14288.pem"
	I0501 02:53:33.866546    4712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14288.pem
	I0501 02:53:33.874255    4712 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:27 /usr/share/ca-certificates/14288.pem
	I0501 02:53:33.888580    4712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14288.pem
	I0501 02:53:33.910501    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14288.pem /etc/ssl/certs/51391683.0"
	I0501 02:53:33.948720    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142882.pem && ln -fs /usr/share/ca-certificates/142882.pem /etc/ssl/certs/142882.pem"
	I0501 02:53:33.993042    4712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142882.pem
	I0501 02:53:34.001989    4712 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:27 /usr/share/ca-certificates/142882.pem
	I0501 02:53:34.015762    4712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142882.pem
	I0501 02:53:34.040058    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142882.pem /etc/ssl/certs/3ec20f2e.0"
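
The ls / openssl x509 -hash / ln -fs triplets above maintain OpenSSL's hashed certificate directory: every trusted PEM under /etc/ssl/certs must be reachable through a symlink named <subject-hash>.0, where the hash comes from openssl x509 -hash -noout. A sketch of automating one such link, assuming the openssl binary on PATH and local filesystem access; linkBySubjectHash is a hypothetical helper name:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash creates the "<hash>.0" symlink OpenSSL uses to look up
// a trusted cert, mirroring the openssl / ln -fs pair in the log above.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // -fs semantics: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
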
	I0501 02:53:34.077501    4712 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 02:53:34.086036    4712 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0501 02:53:34.086573    4712 kubeadm.go:928] updating node {m02 172.28.213.142 8443 v1.30.0 docker true true} ...
	I0501 02:53:34.086726    4712 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-136200-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.213.142
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP:172.28.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 02:53:34.086726    4712 kube-vip.go:111] generating kube-vip config ...
	I0501 02:53:34.101653    4712 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0501 02:53:34.130866    4712 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0501 02:53:34.131029    4712 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.28.223.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
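
kube-vip.go renders the manifest above from a template, substituting the virtual IP (172.28.223.254), the API server port, and the NIC name, and enabling control-plane load-balancing via lb_enable/lb_port. A reduced text/template sketch of that render step; the template body below is a trimmed stand-in for illustration, not minikube's full manifest:

package main

import (
	"os"
	"text/template"
)

// A trimmed stand-in for minikube's kube-vip static-pod template.
const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args: ["manager"]
    env:
    - name: port
      value: "{{ .Port }}"
    - name: vip_interface
      value: {{ .Interface }}
    - name: address
      value: {{ .VIP }}
    image: ghcr.io/kube-vip/kube-vip:v0.7.1
    name: kube-vip
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kubevip").Parse(manifest))
	_ = t.Execute(os.Stdout, struct {
		VIP, Interface string
		Port           int
	}{VIP: "172.28.223.254", Interface: "eth0", Port: 8443})
}
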
	I0501 02:53:34.145238    4712 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 02:53:34.165400    4712 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0501 02:53:34.180369    4712 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0501 02:53:34.204849    4712 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet
	I0501 02:53:34.204849    4712 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm
	I0501 02:53:34.204849    4712 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl
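
Each download URL carries a checksum=file:...sha256 query, meaning the fetched binary is verified against its published SHA-256 before being staged into the cache. A stripped-down sketch of that download-then-verify pattern with the standard library, assuming the .sha256 file holds just the bare hex digest (as the dl.k8s.io release files do):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl"
	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}
	got := sha256.Sum256(bin)
	if hex.EncodeToString(got[:]) != strings.TrimSpace(string(sum)) {
		fmt.Fprintln(os.Stderr, "checksum mismatch, refusing to install")
		os.Exit(1)
	}
	_ = os.WriteFile("kubectl", bin, 0755)
}
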
	I0501 02:53:35.468257    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0501 02:53:35.481254    4712 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0501 02:53:35.488247    4712 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0501 02:53:35.489247    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0501 02:53:35.546630    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0501 02:53:35.559624    4712 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0501 02:53:35.626048    4712 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0501 02:53:35.627145    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0501 02:53:36.028150    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:53:36.077312    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0501 02:53:36.090870    4712 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0501 02:53:36.109939    4712 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0501 02:53:36.111871    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
	I0501 02:53:36.821139    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0501 02:53:36.843821    4712 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0501 02:53:36.878070    4712 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 02:53:36.917707    4712 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0501 02:53:36.971960    4712 ssh_runner.go:195] Run: grep 172.28.223.254	control-plane.minikube.internal$ /etc/hosts
	I0501 02:53:36.979482    4712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.223.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 02:53:37.020702    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:53:37.250249    4712 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 02:53:37.282989    4712 host.go:66] Checking if "ha-136200" exists ...
	I0501 02:53:37.299000    4712 start.go:316] joinCluster: &{Name:ha-136200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP:172.28.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.217.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.213.142 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:53:37.299000    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0501 02:53:37.299000    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:53:39.432833    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:53:39.432833    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:39.433070    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:53:42.011853    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:53:42.011853    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:42.012855    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:53:42.240815    4712 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0": (4.9416996s)
	I0501 02:53:42.240889    4712 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.28.213.142 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0501 02:53:42.240889    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ig07su.dw1rkx9dngecbwrb --discovery-token-ca-cert-hash sha256:4798dcaffc6298c78af5ad06745de18c1231853c65c9db4cd09ab1be96f5e875 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-136200-m02 --control-plane --apiserver-advertise-address=172.28.213.142 --apiserver-bind-port=8443"
	I0501 02:54:27.807891    4712 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ig07su.dw1rkx9dngecbwrb --discovery-token-ca-cert-hash sha256:4798dcaffc6298c78af5ad06745de18c1231853c65c9db4cd09ab1be96f5e875 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-136200-m02 --control-plane --apiserver-advertise-address=172.28.213.142 --apiserver-bind-port=8443": (45.5666728s)
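
Joining m02 is a two-step handoff: mint a fresh bootstrap token and join command on the existing control plane (kubeadm token create --print-join-command --ttl=0), then replay that command on the new node with the control-plane promotion flags seen above. A local sketch of the handoff, assuming kubeadm on PATH and passwordless sudo; minikube runs both halves over SSH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Step 1: mint a join command on the primary control plane.
	out, err := exec.Command("sudo", "kubeadm", "token", "create",
		"--print-join-command", "--ttl=0").Output()
	if err != nil {
		panic(err)
	}
	join := strings.Fields(strings.TrimSpace(string(out))) // ["kubeadm", "join", ...]

	// Step 2: replay it on the joining node, promoted to a control plane.
	join = append(join,
		"--control-plane",
		"--apiserver-advertise-address=172.28.213.142",
		"--apiserver-bind-port=8443")
	b, err := exec.Command("sudo", join...).CombinedOutput()
	fmt.Print(string(b))
	if err != nil {
		panic(err)
	}
}
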
	I0501 02:54:27.808014    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0501 02:54:28.660694    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-136200-m02 minikube.k8s.io/updated_at=2024_05_01T02_54_28_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e minikube.k8s.io/name=ha-136200 minikube.k8s.io/primary=false
	I0501 02:54:28.861404    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-136200-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0501 02:54:29.035785    4712 start.go:318] duration metric: took 51.7364106s to joinCluster
	I0501 02:54:29.035979    4712 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.28.213.142 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0501 02:54:29.038999    4712 out.go:177] * Verifying Kubernetes components...
	I0501 02:54:29.036818    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:54:29.055991    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:54:29.482004    4712 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 02:54:29.532870    4712 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0501 02:54:29.534181    4712 kapi.go:59] client config for ha-136200: &rest.Config{Host:"https://172.28.223.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-136200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-136200\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1b95ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0501 02:54:29.534386    4712 kubeadm.go:477] Overriding stale ClientConfig host https://172.28.223.254:8443 with https://172.28.217.218:8443
	I0501 02:54:29.535958    4712 node_ready.go:35] waiting up to 6m0s for node "ha-136200-m02" to be "Ready" ...
	I0501 02:54:29.536236    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:29.536236    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:29.536236    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:29.536353    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:29.609745    4712 round_trippers.go:574] Response Status: 200 OK in 73 milliseconds
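
Every GET / Request Headers / Response Status triplet in this section comes from a logging wrapper around the HTTP transport: it prints the verb, URL, and headers before delegating, and the status plus elapsed milliseconds after. A minimal sketch of such a logging http.RoundTripper; logTripper is a name of my own choosing, not minikube's type:

package main

import (
	"log"
	"net/http"
	"time"
)

// logTripper wraps another RoundTripper and logs each exchange in the
// same shape as the round_trippers output above.
type logTripper struct{ next http.RoundTripper }

func (t logTripper) RoundTrip(req *http.Request) (*http.Response, error) {
	log.Printf("%s %s", req.Method, req.URL)
	log.Printf("Request Headers:")
	for k, v := range req.Header {
		log.Printf("    %s: %s", k, v)
	}
	start := time.Now()
	resp, err := t.next.RoundTrip(req)
	if err == nil {
		log.Printf("Response Status: %s in %d milliseconds",
			resp.Status, time.Since(start).Milliseconds())
	}
	return resp, err
}

func main() {
	c := &http.Client{Transport: logTripper{http.DefaultTransport}}
	if resp, err := c.Get("https://example.com/"); err == nil {
		resp.Body.Close()
	}
}
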
	I0501 02:54:30.045557    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:30.045557    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:30.045557    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:30.045557    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:30.051535    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:30.542020    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:30.542083    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:30.542148    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:30.542148    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:30.549047    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:31.050630    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:31.050694    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:31.050694    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:31.050694    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:31.063209    4712 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0501 02:54:31.542025    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:31.542098    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:31.542098    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:31.542098    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:31.548667    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:31.549663    4712 node_ready.go:53] node "ha-136200-m02" has status "Ready":"False"
	I0501 02:54:32.050097    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:32.050097    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:32.050174    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:32.050174    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:32.054568    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:32.542017    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:32.542017    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:32.542017    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:32.542017    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:32.546488    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:33.050866    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:33.050866    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:33.050866    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:33.050866    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:33.056451    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:33.538033    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:33.538033    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:33.538033    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:33.538033    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:33.713541    4712 round_trippers.go:574] Response Status: 200 OK in 175 milliseconds
	I0501 02:54:33.714615    4712 node_ready.go:53] node "ha-136200-m02" has status "Ready":"False"
	I0501 02:54:34.041226    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:34.041226    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:34.041226    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:34.041226    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:34.047903    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:34.547454    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:34.547454    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:34.547757    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:34.547757    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:34.552099    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:35.046877    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:35.046877    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.046877    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.046877    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.052278    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:35.548463    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:35.548463    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.548740    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.548740    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.558660    4712 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0501 02:54:35.560213    4712 node_ready.go:49] node "ha-136200-m02" has status "Ready":"True"
	I0501 02:54:35.560213    4712 node_ready.go:38] duration metric: took 6.0241453s for node "ha-136200-m02" to be "Ready" ...
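
node_ready.go polls GET /api/v1/nodes/ha-136200-m02 roughly every half second until the NodeReady condition reports True, which took about 6 s here, before moving on to the system-pod checks below. The equivalent loop written against client-go, assuming a kubeconfig at the default location; the 6-minute deadline mirrors the log, the rest is arbitrary:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the NodeReady condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		n, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-136200-m02", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for node")
}
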
	I0501 02:54:35.560332    4712 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 02:54:35.560422    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:54:35.560422    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.560422    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.560422    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.572050    4712 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0501 02:54:35.581777    4712 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2j8mj" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:35.581924    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2j8mj
	I0501 02:54:35.581924    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.581924    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.581924    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.585770    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:54:35.587608    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:54:35.587685    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.587685    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.587685    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.591867    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:35.591867    4712 pod_ready.go:92] pod "coredns-7db6d8ff4d-2j8mj" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:35.591867    4712 pod_ready.go:81] duration metric: took 10.0903ms for pod "coredns-7db6d8ff4d-2j8mj" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:35.591867    4712 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rm4gm" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:35.591867    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rm4gm
	I0501 02:54:35.591867    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.591867    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.591867    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.596249    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:35.597880    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:54:35.597964    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.597964    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.597964    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.602327    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:35.603007    4712 pod_ready.go:92] pod "coredns-7db6d8ff4d-rm4gm" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:35.603007    4712 pod_ready.go:81] duration metric: took 11.1397ms for pod "coredns-7db6d8ff4d-rm4gm" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:35.603007    4712 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:35.604166    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200
	I0501 02:54:35.604211    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.604211    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.604211    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.610508    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:35.611831    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:54:35.611831    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.611831    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.611831    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.627921    4712 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0501 02:54:35.629498    4712 pod_ready.go:92] pod "etcd-ha-136200" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:35.629498    4712 pod_ready.go:81] duration metric: took 26.4906ms for pod "etcd-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:35.629498    4712 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:35.629498    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:35.629498    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.629498    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.629498    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.638393    4712 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0501 02:54:35.638911    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:35.638911    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.638911    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.639550    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.643473    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:54:36.140037    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:36.140167    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:36.140167    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:36.140167    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:36.148123    4712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:54:36.149580    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:36.149580    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:36.149659    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:36.149659    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:36.155530    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:36.644340    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:36.644340    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:36.644340    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:36.644340    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:36.651321    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:36.652588    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:36.653128    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:36.653128    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:36.653128    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:36.660377    4712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:54:37.144534    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:37.144656    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:37.144656    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:37.144656    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:37.150598    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:37.152092    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:37.152665    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:37.152665    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:37.152665    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:37.160441    4712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:54:37.644104    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:37.644239    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:37.644239    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:37.644239    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:37.649836    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:37.650604    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:37.650671    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:37.650671    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:37.650671    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:37.654947    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:37.656164    4712 pod_ready.go:102] pod "etcd-ha-136200-m02" in "kube-system" namespace has status "Ready":"False"
	I0501 02:54:38.142505    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:38.142505    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:38.142505    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:38.142505    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:38.149100    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:38.151258    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:38.151347    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:38.151347    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:38.151347    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:38.155677    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:38.643186    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:38.643241    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:38.643241    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:38.643241    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:38.650578    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:38.651873    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:38.651902    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:38.651902    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:38.651902    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:38.655946    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:39.142681    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:39.142681    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:39.142681    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:39.142681    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:39.148315    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:39.149953    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:39.150203    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:39.150203    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:39.150203    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:39.154771    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:39.643364    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:39.643364    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:39.643364    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:39.643364    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:39.649703    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:39.650947    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:39.650947    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:39.651009    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:39.651009    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:39.654949    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:54:39.656190    4712 pod_ready.go:102] pod "etcd-ha-136200-m02" in "kube-system" namespace has status "Ready":"False"
	I0501 02:54:40.142428    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:40.142428    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:40.142676    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:40.142676    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:40.148562    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:40.149792    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:40.149792    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:40.149792    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:40.149792    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:40.154808    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:40.644095    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:40.644095    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:40.644095    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:40.644095    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:40.650441    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:40.651544    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:40.651598    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:40.651598    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:40.651598    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:40.662172    4712 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0501 02:54:41.143094    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:41.143187    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:41.143187    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:41.143187    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:41.148870    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:41.150018    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:41.150018    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:41.150018    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:41.150018    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:41.156810    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:41.640508    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:41.640624    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:41.640624    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:41.640624    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:41.646018    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:41.646730    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:41.647318    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:41.647318    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:41.647318    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:41.652880    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:42.139900    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:42.139985    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:42.139985    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:42.139985    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:42.145577    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:42.146383    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:42.146383    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:42.146448    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:42.146448    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:42.151141    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:54:42.151862    4712 pod_ready.go:102] pod "etcd-ha-136200-m02" in "kube-system" namespace has status "Ready":"False"
	I0501 02:54:42.639271    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:42.639271    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:42.639271    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:42.639271    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:42.642318    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:54:42.646671    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:42.646671    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:42.646671    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:42.646671    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:42.651360    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:43.137151    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:43.137496    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.137496    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.137496    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.141750    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:43.142959    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:43.142959    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.142959    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.142959    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.147560    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:43.641950    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:43.641985    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.641985    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.641985    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.647599    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:43.649299    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:43.649350    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.649350    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.649350    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.657034    4712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:54:43.658043    4712 pod_ready.go:92] pod "etcd-ha-136200-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:43.658043    4712 pod_ready.go:81] duration metric: took 8.0284866s for pod "etcd-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
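
The readiness loop above issues a GET for the pod and then its node roughly every 500ms until the pod's Ready condition flips to True (pod_ready.go:102 logs each "False" pass, pod_ready.go:92 the final "True"). A minimal client-go sketch of the same check; the function name and signature are hypothetical, not minikube's actual helper:

    package sketch

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls a pod until its Ready condition is True or the
    // timeout expires, mirroring the ~500ms GET cadence visible above.
    func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("pod %s/%s did not become Ready within %v", ns, name, timeout)
    }
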
	I0501 02:54:43.658043    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:43.658043    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-136200
	I0501 02:54:43.658043    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.658043    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.658043    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.664394    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:43.664394    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:54:43.664394    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.664394    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.664394    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.668848    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:43.669848    4712 pod_ready.go:92] pod "kube-apiserver-ha-136200" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:43.669848    4712 pod_ready.go:81] duration metric: took 11.805ms for pod "kube-apiserver-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:43.669848    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:43.669848    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-136200-m02
	I0501 02:54:43.669848    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.669848    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.670830    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.674754    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:54:43.676699    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:43.676699    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.676699    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.676699    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.681632    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:43.683231    4712 pod_ready.go:92] pod "kube-apiserver-ha-136200-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:43.683231    4712 pod_ready.go:81] duration metric: took 13.3825ms for pod "kube-apiserver-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:43.683231    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:43.683412    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-136200
	I0501 02:54:43.683412    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.683412    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.683412    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.688589    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:43.690255    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:54:43.690255    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.690325    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.690325    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.695853    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:43.696818    4712 pod_ready.go:92] pod "kube-controller-manager-ha-136200" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:43.696860    4712 pod_ready.go:81] duration metric: took 13.6296ms for pod "kube-controller-manager-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:43.696912    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:43.696993    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-136200-m02
	I0501 02:54:43.697029    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.697029    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.697029    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.701912    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:43.703032    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:43.703736    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.703736    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.703736    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.706383    4712 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:54:43.707734    4712 pod_ready.go:92] pod "kube-controller-manager-ha-136200-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:43.707824    4712 pod_ready.go:81] duration metric: took 10.9115ms for pod "kube-controller-manager-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:43.707824    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8f67k" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:43.845210    4712 request.go:629] Waited for 137.1807ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8f67k
	I0501 02:54:43.845681    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8f67k
	I0501 02:54:43.845681    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.845681    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.845681    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.851000    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:44.047046    4712 request.go:629] Waited for 194.7517ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:54:44.047200    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:54:44.047200    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:44.047200    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:44.047200    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:44.052548    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:44.053735    4712 pod_ready.go:92] pod "kube-proxy-8f67k" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:44.053735    4712 pod_ready.go:81] duration metric: took 345.9086ms for pod "kube-proxy-8f67k" in "kube-system" namespace to be "Ready" ...
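
The "Waited for ... due to client-side throttling, not priority and fairness" lines are emitted by client-go's local token-bucket rate limiter, not by the API server: a default rest.Config allows QPS=5 with Burst=10, so the burst of readiness GETs queues locally for 100-200ms at a time. A hedged sketch of where those knobs live (values illustrative only):

    package sketch

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    // newClient raises the client-side rate limits that produce the
    // "Waited for ... due to client-side throttling" log lines above.
    func newClient(cfg *rest.Config) (*kubernetes.Clientset, error) {
        cfg.QPS = 50    // default is 5 requests/second
        cfg.Burst = 100 // default burst is 10
        return kubernetes.NewForConfig(cfg)
    }
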
	I0501 02:54:44.053735    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zj5jv" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:44.250128    4712 request.go:629] Waited for 196.1147ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zj5jv
	I0501 02:54:44.250128    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zj5jv
	I0501 02:54:44.250128    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:44.250128    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:44.250128    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:44.254761    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:44.456435    4712 request.go:629] Waited for 200.6839ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:44.456435    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:44.456435    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:44.456435    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:44.456435    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:44.461480    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:44.462518    4712 pod_ready.go:92] pod "kube-proxy-zj5jv" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:44.462578    4712 pod_ready.go:81] duration metric: took 408.7057ms for pod "kube-proxy-zj5jv" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:44.462578    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:44.648779    4712 request.go:629] Waited for 185.8104ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200
	I0501 02:54:44.648953    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200
	I0501 02:54:44.648953    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:44.648953    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:44.649128    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:44.654457    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:44.855621    4712 request.go:629] Waited for 199.4812ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:54:44.855706    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:54:44.855706    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:44.855706    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:44.855706    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:44.861147    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:44.861147    4712 pod_ready.go:92] pod "kube-scheduler-ha-136200" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:44.861699    4712 pod_ready.go:81] duration metric: took 399.1179ms for pod "kube-scheduler-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:44.861778    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:45.042766    4712 request.go:629] Waited for 180.9309ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200-m02
	I0501 02:54:45.042766    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200-m02
	I0501 02:54:45.042766    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:45.042766    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:45.042766    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:45.047379    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:45.244553    4712 request.go:629] Waited for 197.0101ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:45.244553    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:45.244553    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:45.244553    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:45.244553    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:45.250870    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:45.252485    4712 pod_ready.go:92] pod "kube-scheduler-ha-136200-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:45.252485    4712 pod_ready.go:81] duration metric: took 390.7033ms for pod "kube-scheduler-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:45.252547    4712 pod_ready.go:38] duration metric: took 9.6921442s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 02:54:45.252619    4712 api_server.go:52] waiting for apiserver process to appear ...
	I0501 02:54:45.266607    4712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:54:45.298538    4712 api_server.go:72] duration metric: took 16.2624407s to wait for apiserver process to appear ...
	I0501 02:54:45.298538    4712 api_server.go:88] waiting for apiserver healthz status ...
	I0501 02:54:45.298642    4712 api_server.go:253] Checking apiserver healthz at https://172.28.217.218:8443/healthz ...
	I0501 02:54:45.308804    4712 api_server.go:279] https://172.28.217.218:8443/healthz returned 200:
	ok
	I0501 02:54:45.308804    4712 round_trippers.go:463] GET https://172.28.217.218:8443/version
	I0501 02:54:45.308804    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:45.308804    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:45.308804    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:45.310764    4712 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0501 02:54:45.311165    4712 api_server.go:141] control plane version: v1.30.0
	I0501 02:54:45.311238    4712 api_server.go:131] duration metric: took 12.7003ms to wait for apiserver health ...
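
The healthz probe above is a plain HTTPS GET that passes when the endpoint returns 200 with body "ok" (both visible in the log). A minimal sketch, assuming an *http.Client already configured with the cluster's client certificates:

    package sketch

    import (
        "io"
        "net/http"
        "strings"
    )

    // apiserverHealthy reports whether GET <base>/healthz returns 200 "ok",
    // matching the check logged above.
    func apiserverHealthy(client *http.Client, base string) (bool, error) {
        resp, err := client.Get(base + "/healthz")
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        if err != nil {
            return false, err
        }
        return resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok", nil
    }
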
	I0501 02:54:45.311238    4712 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 02:54:45.446869    4712 request.go:629] Waited for 135.3903ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:54:45.446869    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:54:45.446869    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:45.446869    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:45.446869    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:45.455463    4712 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0501 02:54:45.466055    4712 system_pods.go:59] 17 kube-system pods found
	I0501 02:54:45.466055    4712 system_pods.go:61] "coredns-7db6d8ff4d-2j8mj" [f945c979-ae51-4c8e-acf9-105adc3c83bc] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "coredns-7db6d8ff4d-rm4gm" [87b284b3-e8e1-452a-8c72-41a8bec62505] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "etcd-ha-136200" [509a726d-e9a1-4922-8e7e-f3d91ddef75f] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "etcd-ha-136200-m02" [8122eb28-1fdf-4ddf-ab30-c29e8bcb83c0] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kindnet-kb2x4" [6e660648-3dce-469f-a2c2-c99f445ceb20] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kindnet-sj2rc" [c0e605a0-1182-4977-a8ba-fabe9617bd3c] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-apiserver-ha-136200" [53ea7d41-7132-4c89-9dbd-bedb2267b55f] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-apiserver-ha-136200-m02" [fc4036e1-5cc9-4f27-8299-97ee4a29e8b4] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-controller-manager-ha-136200" [4c988ab2-e056-4a0e-88c9-b62839c84d9f] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-controller-manager-ha-136200-m02" [7a617a7e-7413-4f42-bfe2-763b7ace71ca] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-proxy-8f67k" [9dedea03-3066-4852-98e2-10190699b2c5] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-proxy-zj5jv" [1802b341-6ac6-46b0-99a3-db02ae5d8e46] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-scheduler-ha-136200" [6be37365-544a-4367-9852-6eaa5b60e6ad] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-scheduler-ha-136200-m02" [b2ae6bb2-989b-4598-99e3-f8494b006f3e] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-vip-ha-136200" [f6f631ac-0ba9-413a-8810-8a80e4be81b8] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-vip-ha-136200-m02" [598e76fa-0703-40eb-a62c-f3947f06d0e0] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "storage-provisioner" [ca2bdb41-e7f0-4aa4-b343-0e3a85a4c04e] Running
	I0501 02:54:45.466055    4712 system_pods.go:74] duration metric: took 154.8157ms to wait for pod list to return data ...
	I0501 02:54:45.466055    4712 default_sa.go:34] waiting for default service account to be created ...
	I0501 02:54:45.650374    4712 request.go:629] Waited for 183.8749ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/default/serviceaccounts
	I0501 02:54:45.650461    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/default/serviceaccounts
	I0501 02:54:45.650461    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:45.650566    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:45.650566    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:45.661544    4712 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0501 02:54:45.662734    4712 default_sa.go:45] found service account: "default"
	I0501 02:54:45.662869    4712 default_sa.go:55] duration metric: took 196.812ms for default service account to be created ...
	I0501 02:54:45.662869    4712 system_pods.go:116] waiting for k8s-apps to be running ...
	I0501 02:54:45.853192    4712 request.go:629] Waited for 189.9269ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:54:45.853192    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:54:45.853192    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:45.853419    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:45.853419    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:45.865601    4712 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0501 02:54:45.872777    4712 system_pods.go:86] 17 kube-system pods found
	I0501 02:54:45.872777    4712 system_pods.go:89] "coredns-7db6d8ff4d-2j8mj" [f945c979-ae51-4c8e-acf9-105adc3c83bc] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "coredns-7db6d8ff4d-rm4gm" [87b284b3-e8e1-452a-8c72-41a8bec62505] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "etcd-ha-136200" [509a726d-e9a1-4922-8e7e-f3d91ddef75f] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "etcd-ha-136200-m02" [8122eb28-1fdf-4ddf-ab30-c29e8bcb83c0] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kindnet-kb2x4" [6e660648-3dce-469f-a2c2-c99f445ceb20] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kindnet-sj2rc" [c0e605a0-1182-4977-a8ba-fabe9617bd3c] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kube-apiserver-ha-136200" [53ea7d41-7132-4c89-9dbd-bedb2267b55f] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kube-apiserver-ha-136200-m02" [fc4036e1-5cc9-4f27-8299-97ee4a29e8b4] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kube-controller-manager-ha-136200" [4c988ab2-e056-4a0e-88c9-b62839c84d9f] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kube-controller-manager-ha-136200-m02" [7a617a7e-7413-4f42-bfe2-763b7ace71ca] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kube-proxy-8f67k" [9dedea03-3066-4852-98e2-10190699b2c5] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kube-proxy-zj5jv" [1802b341-6ac6-46b0-99a3-db02ae5d8e46] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kube-scheduler-ha-136200" [6be37365-544a-4367-9852-6eaa5b60e6ad] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kube-scheduler-ha-136200-m02" [b2ae6bb2-989b-4598-99e3-f8494b006f3e] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kube-vip-ha-136200" [f6f631ac-0ba9-413a-8810-8a80e4be81b8] Running
	I0501 02:54:45.873359    4712 system_pods.go:89] "kube-vip-ha-136200-m02" [598e76fa-0703-40eb-a62c-f3947f06d0e0] Running
	I0501 02:54:45.873359    4712 system_pods.go:89] "storage-provisioner" [ca2bdb41-e7f0-4aa4-b343-0e3a85a4c04e] Running
	I0501 02:54:45.873383    4712 system_pods.go:126] duration metric: took 210.5126ms to wait for k8s-apps to be running ...
	I0501 02:54:45.873383    4712 system_svc.go:44] waiting for kubelet service to be running ....
	I0501 02:54:45.886040    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:54:45.914966    4712 system_svc.go:56] duration metric: took 41.5829ms WaitForService to wait for kubelet
	I0501 02:54:45.915054    4712 kubeadm.go:576] duration metric: took 16.8789526s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
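
The kubelet check relies purely on systemctl's exit status: `systemctl is-active --quiet <unit>` exits 0 only when the unit is active, so no output parsing is needed. A minimal sketch, assuming a runner that executes a command over the established SSH session and surfaces a nonzero exit as a non-nil error:

    package sketch

    // kubeletRunning mirrors the ssh_runner check above: is-active --quiet
    // produces no output and exits non-zero unless the unit is active.
    func kubeletRunning(run func(cmd string) error) bool {
        return run("sudo systemctl is-active --quiet service kubelet") == nil
    }
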
	I0501 02:54:45.915054    4712 node_conditions.go:102] verifying NodePressure condition ...
	I0501 02:54:46.043164    4712 request.go:629] Waited for 127.8974ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes
	I0501 02:54:46.043164    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes
	I0501 02:54:46.043164    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:46.043164    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:46.043310    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:46.050320    4712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:54:46.051501    4712 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 02:54:46.051501    4712 node_conditions.go:123] node cpu capacity is 2
	I0501 02:54:46.051501    4712 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 02:54:46.051501    4712 node_conditions.go:123] node cpu capacity is 2
	I0501 02:54:46.051501    4712 node_conditions.go:105] duration metric: took 136.4457ms to run NodePressure ...
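
The NodePressure pass lists all nodes and reads capacity plus pressure conditions from each status, which is where the 17734596Ki / 2-CPU figures above come from. A minimal client-go sketch (function name hypothetical):

    package sketch

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // checkNodePressure prints each node's capacity and fails if any node
    // reports memory or disk pressure.
    func checkNodePressure(cs kubernetes.Interface) error {
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            fmt.Printf("node storage ephemeral capacity is %s\n", n.Status.Capacity.StorageEphemeral().String())
            fmt.Printf("node cpu capacity is %s\n", n.Status.Capacity.Cpu().String())
            for _, c := range n.Status.Conditions {
                if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure) && c.Status == corev1.ConditionTrue {
                    return fmt.Errorf("node %s reports %s", n.Name, c.Type)
                }
            }
        }
        return nil
    }
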
	I0501 02:54:46.051501    4712 start.go:240] waiting for startup goroutines ...
	I0501 02:54:46.051501    4712 start.go:254] writing updated cluster config ...
	I0501 02:54:46.055981    4712 out.go:177] 
	I0501 02:54:46.073210    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:54:46.073681    4712 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json ...
	I0501 02:54:46.079155    4712 out.go:177] * Starting "ha-136200-m03" control-plane node in "ha-136200" cluster
	I0501 02:54:46.082550    4712 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0501 02:54:46.082550    4712 cache.go:56] Caching tarball of preloaded images
	I0501 02:54:46.083028    4712 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0501 02:54:46.083223    4712 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0501 02:54:46.083384    4712 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json ...
	I0501 02:54:46.091748    4712 start.go:360] acquireMachinesLock for ha-136200-m03: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 02:54:46.091748    4712 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-136200-m03"
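
The acquireMachinesLock spec printed above (Name/Clock/Delay/Timeout/Cancel) matches the shape of the juju/mutex API, which serializes machine creation across minikube processes on the same host. Treating that as the implementation, a hedged sketch (the lock name is shortened for illustration; the real one is a hash):

    package sketch

    import (
        "time"

        "github.com/juju/clock"
        "github.com/juju/mutex/v2"
    )

    // acquireMachinesLock blocks until the named cross-process mutex is held,
    // retrying every 500ms and giving up after 13 minutes, as in the log above.
    func acquireMachinesLock(name string) (mutex.Releaser, error) {
        return mutex.Acquire(mutex.Spec{
            Name:    name, // hypothetical short name
            Clock:   clock.WallClock,
            Delay:   500 * time.Millisecond,
            Timeout: 13 * time.Minute,
        })
    }

The returned Releaser is held for the whole node-creation sequence that follows and released once "m03" is provisioned.
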
	I0501 02:54:46.091748    4712 start.go:93] Provisioning new machine with config: &{Name:ha-136200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP:172.28.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.217.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.213.142 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0501 02:54:46.091748    4712 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0501 02:54:46.099863    4712 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0501 02:54:46.100178    4712 start.go:159] libmachine.API.Create for "ha-136200" (driver="hyperv")
	I0501 02:54:46.100178    4712 client.go:168] LocalClient.Create starting
	I0501 02:54:46.100178    4712 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0501 02:54:46.100824    4712 main.go:141] libmachine: Decoding PEM data...
	I0501 02:54:46.100824    4712 main.go:141] libmachine: Parsing certificate...
	I0501 02:54:46.101128    4712 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0501 02:54:46.101380    4712 main.go:141] libmachine: Decoding PEM data...
	I0501 02:54:46.101380    4712 main.go:141] libmachine: Parsing certificate...
	I0501 02:54:46.101380    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0501 02:54:48.122930    4712 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0501 02:54:48.122930    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:54:48.122930    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0501 02:54:49.970242    4712 main.go:141] libmachine: [stdout =====>] : False
	
	I0501 02:54:49.971128    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:54:49.971128    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0501 02:54:51.553112    4712 main.go:141] libmachine: [stdout =====>] : True
	
	I0501 02:54:51.553112    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:54:51.553966    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0501 02:54:55.355693    4712 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0501 02:54:55.355693    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:54:55.358013    4712 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso...
	I0501 02:54:55.879042    4712 main.go:141] libmachine: Creating SSH key...
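
"Creating SSH key" generates a fresh RSA key pair for the new machine; the public half is later packed into the tar header written onto the VHD. A minimal sketch of that step (paths and key size are assumptions, not minikube's exact values):

    package sketch

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "encoding/pem"
        "os"
        "path/filepath"

        "golang.org/x/crypto/ssh"
    )

    // createSSHKey writes id_rsa / id_rsa.pub into the machine directory.
    func createSSHKey(dir string) error {
        key, err := rsa.GenerateKey(rand.Reader, 2048) // 2048 bits is an assumption
        if err != nil {
            return err
        }
        privPEM := pem.EncodeToMemory(&pem.Block{
            Type:  "RSA PRIVATE KEY",
            Bytes: x509.MarshalPKCS1PrivateKey(key),
        })
        if err := os.WriteFile(filepath.Join(dir, "id_rsa"), privPEM, 0o600); err != nil {
            return err
        }
        pub, err := ssh.NewPublicKey(&key.PublicKey)
        if err != nil {
            return err
        }
        return os.WriteFile(filepath.Join(dir, "id_rsa.pub"), ssh.MarshalAuthorizedKey(pub), 0o644)
    }
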
	I0501 02:54:55.991258    4712 main.go:141] libmachine: Creating VM...
	I0501 02:54:55.991258    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0501 02:54:58.933270    4712 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0501 02:54:58.933270    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:54:58.933270    4712 main.go:141] libmachine: Using switch "Default Switch"
	I0501 02:54:58.933728    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0501 02:55:00.789675    4712 main.go:141] libmachine: [stdout =====>] : True
	
	I0501 02:55:00.789938    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:00.789938    4712 main.go:141] libmachine: Creating VHD
	I0501 02:55:00.789938    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0501 02:55:04.583967    4712 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : AAB86B48-3D75-4842-8FF8-3BDEC4AB86C2
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0501 02:55:04.584134    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:04.584192    4712 main.go:141] libmachine: Writing magic tar header
	I0501 02:55:04.584192    4712 main.go:141] libmachine: Writing SSH key tar header
	I0501 02:55:04.594277    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0501 02:55:07.812902    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:07.812902    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:07.812902    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\disk.vhd' -SizeBytes 20000MB
	I0501 02:55:10.391210    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:10.391245    4712 main.go:141] libmachine: [stderr =====>] : 
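
The disk sequence above is a trick for seeding data into a Hyper-V disk: create a tiny fixed-size VHD (whose payload sits at a known raw offset), write the boot2docker magic string and an SSH-key tarball into it, then convert it to a dynamic VHD and resize it to the requested 20000MB. A sketch of driving those cmdlets from Go; the helper names are hypothetical, but the PowerShell pipelines are the ones logged above:

    package sketch

    import "os/exec"

    // ps runs a PowerShell pipeline the way the log lines above show.
    func ps(script string) (string, error) {
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).CombinedOutput()
        return string(out), err
    }

    // createDisk mirrors the New-VHD -> Convert-VHD -> Resize-VHD sequence.
    func createDisk(fixed, disk string) error {
        if _, err := ps(`Hyper-V\New-VHD -Path '` + fixed + `' -SizeBytes 10MB -Fixed`); err != nil {
            return err
        }
        // The "magic tar header" and SSH key tarball are written into the raw
        // fixed VHD at this point, before conversion.
        if _, err := ps(`Hyper-V\Convert-VHD -Path '` + fixed + `' -DestinationPath '` + disk + `' -VHDType Dynamic -DeleteSource`); err != nil {
            return err
        }
        _, err := ps(`Hyper-V\Resize-VHD -Path '` + disk + `' -SizeBytes 20000MB`)
        return err
    }
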
	I0501 02:55:10.391352    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-136200-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0501 02:55:14.151278    4712 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-136200-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0501 02:55:14.151278    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:14.151882    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-136200-m03 -DynamicMemoryEnabled $false
	I0501 02:55:16.476957    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:16.476957    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:16.478022    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-136200-m03 -Count 2
	I0501 02:55:18.717259    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:18.717259    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:18.717850    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-136200-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\boot2docker.iso'
	I0501 02:55:21.310252    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:21.310252    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:21.310252    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-136200-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\disk.vhd'
	I0501 02:55:24.025209    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:24.025209    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:24.025533    4712 main.go:141] libmachine: Starting VM...
	I0501 02:55:24.025533    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-136200-m03
	I0501 02:55:27.131510    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:27.131510    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:27.131722    4712 main.go:141] libmachine: Waiting for host to start...
	I0501 02:55:27.131722    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:55:29.452098    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:55:29.453035    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:29.453089    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:55:32.025441    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:32.026234    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:33.036612    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:55:35.273538    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:55:35.273538    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:35.273538    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:55:37.849230    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:37.849355    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:38.854379    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:55:41.083466    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:55:41.083466    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:41.083466    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:55:43.607622    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:43.607622    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:44.621333    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:55:46.858272    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:55:46.858272    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:46.858272    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:55:49.475402    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:49.476316    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:50.480573    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:55:52.723494    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:55:52.723494    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:52.724713    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:55:55.378897    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:55:55.378897    4712 main.go:141] libmachine: [stderr =====>] : 
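
"Waiting for host to start" alternates between checking the VM state and querying its first network adapter; each empty stdout above is one unanswered probe, and the loop ends once DHCP hands the guest 172.28.216.62. A self-contained sketch of that polling loop:

    package sketch

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitForIP polls the VM's first NIC until Hyper-V reports an IP address,
    // using the same PowerShell expression logged above.
    func waitForIP(vm string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive",
                `(( Hyper-V\Get-VM `+vm+` ).networkadapters[0]).ipaddresses[0]`).Output()
            if ip := strings.TrimSpace(string(out)); err == nil && ip != "" {
                return ip, nil
            }
            time.Sleep(time.Second)
        }
        return "", fmt.Errorf("timed out waiting for %s to obtain an IP", vm)
    }
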
	I0501 02:55:55.379189    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:55:57.536029    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:55:57.536029    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:57.536246    4712 machine.go:94] provisionDockerMachine start ...
	I0501 02:55:57.536246    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:55:59.681292    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:55:59.681842    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:59.682021    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:02.296390    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:02.296390    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:02.302435    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:56:02.303031    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.216.62 22 <nil> <nil>}
	I0501 02:56:02.303031    4712 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 02:56:02.440858    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
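
The "native" SSH client above is an in-process golang.org/x/crypto/ssh connection to 172.28.216.62:22 using the key generated earlier; `hostname` is run first to confirm the session works (it still returns the ISO default "minikube" until the hostname is provisioned just below). A minimal sketch; the user name is the boot2docker default and an assumption here:

    package sketch

    import (
        "os"

        "golang.org/x/crypto/ssh"
    )

    // runSSH dials the VM and runs one command over a fresh session,
    // e.g. runSSH("172.28.216.62:22", dir+"/id_rsa", "hostname").
    func runSSH(addr, keyPath, cmd string) (string, error) {
        keyBytes, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            return "", err
        }
        cfg := &ssh.ClientConfig{
            User:            "docker", // boot2docker default user; an assumption
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return "", err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(cmd)
        return string(out), err
    }
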
	
	I0501 02:56:02.440919    4712 buildroot.go:166] provisioning hostname "ha-136200-m03"
	I0501 02:56:02.440919    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:04.540210    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:04.540210    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:04.541126    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:07.111624    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:07.111624    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:07.118513    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:56:07.119097    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.216.62 22 <nil> <nil>}
	I0501 02:56:07.119097    4712 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-136200-m03 && echo "ha-136200-m03" | sudo tee /etc/hostname
	I0501 02:56:07.274395    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-136200-m03
	
	I0501 02:56:07.274395    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:09.427222    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:09.427413    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:09.427413    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:12.066151    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:12.066558    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:12.072701    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:56:12.073263    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.216.62 22 <nil> <nil>}
	I0501 02:56:12.073263    4712 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-136200-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-136200-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-136200-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 02:56:12.226572    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: 
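The shell fragment that just ran makes the machine resolve its own new hostname: skip if some /etc/hosts line already ends in the hostname, rewrite an existing 127.0.1.1 entry if present, append one otherwise. A pure-Go rendering of that same logic, with a hypothetical function name:

    package main

    import (
    	"fmt"
    	"regexp"
    	"strings"
    )

    // ensureSelfResolve mirrors the grep/sed/tee script above: without a
    // self-mapping, tools like sudo can stall on reverse lookups after the
    // hostname changes.
    func ensureSelfResolve(hosts, name string) string {
    	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
    		return hosts // hostname already resolvable
    	}
    	entry := "127.0.1.1 " + name
    	loop := regexp.MustCompile(`(?m)^127\.0\.1\.1 .*$`)
    	if loop.MatchString(hosts) {
    		return loop.ReplaceAllString(hosts, entry) // rewrite old mapping
    	}
    	return strings.TrimRight(hosts, "\n") + "\n" + entry + "\n" // append
    }

    func main() {
    	fmt.Print(ensureSelfResolve("127.0.0.1 localhost\n", "ha-136200-m03"))
    }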
	I0501 02:56:12.226572    4712 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0501 02:56:12.226572    4712 buildroot.go:174] setting up certificates
	I0501 02:56:12.226572    4712 provision.go:84] configureAuth start
	I0501 02:56:12.226572    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:14.383697    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:14.383832    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:14.383916    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:17.017056    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:17.017236    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:17.017236    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:19.246383    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:19.247269    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:19.247269    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:21.887343    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:21.887343    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:21.887343    4712 provision.go:143] copyHostCerts
	I0501 02:56:21.887688    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0501 02:56:21.887688    4712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0501 02:56:21.887688    4712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0501 02:56:21.888470    4712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0501 02:56:21.889606    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0501 02:56:21.890069    4712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0501 02:56:21.890132    4712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0501 02:56:21.890553    4712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0501 02:56:21.891611    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0501 02:56:21.891800    4712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0501 02:56:21.891800    4712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0501 02:56:21.892337    4712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0501 02:56:21.893162    4712 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-136200-m03 san=[127.0.0.1 172.28.216.62 ha-136200-m03 localhost minikube]
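The SAN list logged above is exactly the set of names and addresses TLS clients will later dial: loopback, the adapter IP, the node hostname, localhost, and minikube. A self-contained sketch of generating a server certificate with that SAN set via crypto/x509; it is self-signed for brevity, whereas the real flow signs with ca.pem / ca-key.pem as the log shows:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-136200-m03"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs as logged: IPs plus DNS names the clients will use.
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.28.216.62")},
    		DNSNames:    []string{"ha-136200-m03", "localhost", "minikube"},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }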
	I0501 02:56:21.973101    4712 provision.go:177] copyRemoteCerts
	I0501 02:56:21.993116    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 02:56:21.993116    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:24.169668    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:24.169668    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:24.170031    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:26.830749    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:26.831099    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:26.831162    4712 sshutil.go:53] new ssh client: &{IP:172.28.216.62 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\id_rsa Username:docker}
	I0501 02:56:26.935784    4712 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9426327s)
	I0501 02:56:26.935784    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0501 02:56:26.936266    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 02:56:26.985792    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0501 02:56:26.986191    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0501 02:56:27.035460    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0501 02:56:27.036450    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0501 02:56:27.092775    4712 provision.go:87] duration metric: took 14.8660953s to configureAuth
	I0501 02:56:27.092775    4712 buildroot.go:189] setting minikube options for container-runtime
	I0501 02:56:27.093873    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:56:27.094011    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:29.214442    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:29.214910    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:29.214910    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:31.848020    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:31.848124    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:31.859047    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:56:31.859047    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.216.62 22 <nil> <nil>}
	I0501 02:56:31.859047    4712 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0501 02:56:31.983811    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0501 02:56:31.983936    4712 buildroot.go:70] root file system type: tmpfs
	I0501 02:56:31.984160    4712 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0501 02:56:31.984160    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:34.146679    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:34.146679    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:34.146837    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:36.793925    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:36.794747    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:36.801153    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:56:36.801782    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.216.62 22 <nil> <nil>}
	I0501 02:56:36.801782    4712 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.28.217.218"
	Environment="NO_PROXY=172.28.217.218,172.28.213.142"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0501 02:56:36.960579    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.28.217.218
	Environment=NO_PROXY=172.28.217.218,172.28.213.142
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0501 02:56:36.960579    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:39.141157    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:39.141157    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:39.141800    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:41.765565    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:41.766216    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:41.774239    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:56:41.774411    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.216.62 22 <nil> <nil>}
	I0501 02:56:41.774411    4712 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0501 02:56:43.994230    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0501 02:56:43.994313    4712 machine.go:97] duration metric: took 46.4577313s to provisionDockerMachine
	I0501 02:56:43.994313    4712 client.go:171] duration metric: took 1m57.8932783s to LocalClient.Create
	I0501 02:56:43.994313    4712 start.go:167] duration metric: took 1m57.8932783s to libmachine.API.Create "ha-136200"
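The `diff -u ... || { mv ...; daemon-reload; enable; restart; }` one-liner a few lines up makes the unit write idempotent: docker is only enabled and restarted when the rendered unit actually differs (here the diff fails because docker.service does not exist yet, hence the symlink creation). A stdlib sketch of that write-only-on-change pattern, with a hypothetical helper name:

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    // writeIfChanged renders to a .new path only when content moved, then
    // swaps it into place; the caller restarts the service only on change.
    func writeIfChanged(path string, rendered []byte) (changed bool, err error) {
    	old, err := os.ReadFile(path)
    	if err == nil && bytes.Equal(old, rendered) {
    		return false, nil // identical: skip daemon-reload and restart
    	}
    	if err := os.WriteFile(path+".new", rendered, 0o644); err != nil {
    		return false, err
    	}
    	return true, os.Rename(path+".new", path)
    }

    func main() {
    	changed, err := writeIfChanged("docker.service", []byte("[Unit]\n"))
    	// if changed: systemctl daemon-reload && systemctl restart docker
    	fmt.Println(changed, err)
    }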
	I0501 02:56:43.994428    4712 start.go:293] postStartSetup for "ha-136200-m03" (driver="hyperv")
	I0501 02:56:43.994473    4712 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 02:56:44.010383    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 02:56:44.010383    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:46.225048    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:46.225772    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:46.225844    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:48.918999    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:48.918999    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:48.919679    4712 sshutil.go:53] new ssh client: &{IP:172.28.216.62 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\id_rsa Username:docker}
	I0501 02:56:49.032380    4712 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0219067s)
	I0501 02:56:49.045700    4712 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 02:56:49.054180    4712 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 02:56:49.054180    4712 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0501 02:56:49.054700    4712 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0501 02:56:49.055002    4712 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> 142882.pem in /etc/ssl/certs
	I0501 02:56:49.055574    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /etc/ssl/certs/142882.pem
	I0501 02:56:49.071048    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 02:56:49.092423    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /etc/ssl/certs/142882.pem (1708 bytes)
	I0501 02:56:49.143151    4712 start.go:296] duration metric: took 5.1486851s for postStartSetup
	I0501 02:56:49.146034    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:51.349851    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:51.350067    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:51.350153    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:54.016657    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:54.017149    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:54.017380    4712 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json ...
	I0501 02:56:54.019460    4712 start.go:128] duration metric: took 2m7.9267809s to createHost
	I0501 02:56:54.019460    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:56.211168    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:56.211168    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:56.211168    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:58.811673    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:58.811673    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:58.818618    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:56:58.819348    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.216.62 22 <nil> <nil>}
	I0501 02:56:58.819348    4712 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0501 02:56:58.949732    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714532218.937413126
	
	I0501 02:56:58.949732    4712 fix.go:216] guest clock: 1714532218.937413126
	I0501 02:56:58.949732    4712 fix.go:229] Guest: 2024-05-01 02:56:58.937413126 +0000 UTC Remote: 2024-05-01 02:56:54.0194605 +0000 UTC m=+574.897601601 (delta=4.917952626s)
	I0501 02:56:58.949941    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:57:01.095786    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:57:01.095786    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:01.096436    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:57:03.649884    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:57:03.649884    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:03.657161    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:57:03.657803    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.216.62 22 <nil> <nil>}
	I0501 02:57:03.657803    4712 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714532218
	I0501 02:57:03.807080    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed May  1 02:56:58 UTC 2024
	
	I0501 02:57:03.807174    4712 fix.go:236] clock set: Wed May  1 02:56:58 UTC 2024
	 (err=<nil>)
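Context for the clock fix above: createHost took over two minutes, so the guest clock (set at boot) trails the host-side reference by about 4.9s, and fix.go resets it with `sudo date -s @<epoch>`. A small sketch of the drift check; the names are mine, and the real code derives the target epoch from the host clock at reset time:

    package main

    import (
    	"fmt"
    	"time"
    )

    // clockDelta compares the guest's `date +%s.%N` reading with the host
    // reference and, past a one-second threshold, emits the reset command.
    func clockDelta(guestEpoch float64, host time.Time) (time.Duration, string) {
    	guest := time.Unix(0, int64(guestEpoch*float64(time.Second)))
    	delta := guest.Sub(host)
    	cmd := ""
    	if delta > time.Second || delta < -time.Second {
    		cmd = fmt.Sprintf("sudo date -s @%d", host.Unix())
    	}
    	return delta, cmd
    }

    func main() {
    	// Values from the log: guest 1714532218.937..., host ~4.9s behind.
    	d, cmd := clockDelta(1714532218.937413126, time.Unix(1714532214, 19460500))
    	fmt.Println(d, cmd)
    }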
	I0501 02:57:03.807174    4712 start.go:83] releasing machines lock for "ha-136200-m03", held for 2m17.7144231s
	I0501 02:57:03.807438    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:57:05.979339    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:57:05.979339    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:05.979339    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:57:08.602379    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:57:08.602379    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:08.605250    4712 out.go:177] * Found network options:
	I0501 02:57:08.607292    4712 out.go:177]   - NO_PROXY=172.28.217.218,172.28.213.142
	W0501 02:57:08.610080    4712 proxy.go:119] fail to check proxy env: Error ip not in block
	W0501 02:57:08.610080    4712 proxy.go:119] fail to check proxy env: Error ip not in block
	I0501 02:57:08.612307    4712 out.go:177]   - NO_PROXY=172.28.217.218,172.28.213.142
	W0501 02:57:08.614962    4712 proxy.go:119] fail to check proxy env: Error ip not in block
	W0501 02:57:08.614962    4712 proxy.go:119] fail to check proxy env: Error ip not in block
	W0501 02:57:08.616207    4712 proxy.go:119] fail to check proxy env: Error ip not in block
	W0501 02:57:08.616207    4712 proxy.go:119] fail to check proxy env: Error ip not in block
	I0501 02:57:08.619160    4712 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 02:57:08.619160    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:57:08.631565    4712 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0501 02:57:08.631565    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:57:10.838360    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:57:10.838735    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:10.838874    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:57:10.838874    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:57:10.838934    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:10.838934    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:57:13.624235    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:57:13.624235    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:13.624235    4712 sshutil.go:53] new ssh client: &{IP:172.28.216.62 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\id_rsa Username:docker}
	I0501 02:57:13.648439    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:57:13.648490    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:13.648768    4712 sshutil.go:53] new ssh client: &{IP:172.28.216.62 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\id_rsa Username:docker}
	I0501 02:57:13.732596    4712 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.1009937s)
	W0501 02:57:13.732596    4712 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 02:57:13.748662    4712 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 02:57:13.811529    4712 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0501 02:57:13.811529    4712 start.go:494] detecting cgroup driver to use...
	I0501 02:57:13.811529    4712 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1923313s)
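Note the interleaving above: the curl probe (02:57:08.619) and the CNI stat (02:57:08.631) were launched back-to-back and both completed about five seconds later, out of order, so the runner drives the two SSH sessions concurrently, each paying its own Get-VM probes. A minimal stdlib sketch of that fan-out, assuming a POSIX sh and using local commands to stand in for the SSH sessions:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"sync"
    )

    func main() {
    	cmds := []string{
    		"curl -sS -m 2 https://registry.k8s.io/",
    		"stat /etc/cni/net.d",
    	}
    	var wg sync.WaitGroup
    	for _, c := range cmds {
    		wg.Add(1)
    		go func(c string) { // one goroutine per command, results as they land
    			defer wg.Done()
    			out, err := exec.Command("sh", "-c", c).CombinedOutput()
    			fmt.Printf("Completed: %s (err=%v, %d bytes)\n", c, err, len(out))
    		}(c)
    	}
    	wg.Wait()
    }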
	I0501 02:57:13.812665    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 02:57:13.867675    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0501 02:57:13.906069    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0501 02:57:13.929632    4712 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0501 02:57:13.947027    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0501 02:57:13.986248    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 02:57:14.024920    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0501 02:57:14.061978    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 02:57:14.099821    4712 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 02:57:14.138543    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0501 02:57:14.181270    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0501 02:57:14.217808    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0501 02:57:14.261794    4712 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 02:57:14.297051    4712 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 02:57:14.332222    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:57:14.558529    4712 ssh_runner.go:195] Run: sudo systemctl restart containerd
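The sed series above rewrites /etc/containerd/config.toml so containerd matches the cluster's cgroupfs driver: the pause image is pinned to registry.k8s.io/pause:3.9, SystemdCgroup is forced to false, and legacy runtime names are mapped to io.containerd.runc.v2. An in-memory Go equivalent of those rewrites, for illustration only:

    package main

    import (
    	"fmt"
    	"regexp"
    	"strings"
    )

    func main() {
    	cfg := `sandbox_image = "registry.k8s.io/pause:3.8"
      SystemdCgroup = true
    runtime_type = "io.containerd.runtime.v1.linux"
    `
    	// Same substitutions as the sed pipeline, preserving indentation.
    	cfg = regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`).
    		ReplaceAllString(cfg, "${1}SystemdCgroup = false")
    	cfg = regexp.MustCompile(`(?m)^( *)sandbox_image = .*$`).
    		ReplaceAllString(cfg, `${1}sandbox_image = "registry.k8s.io/pause:3.9"`)
    	cfg = strings.ReplaceAll(cfg,
    		`"io.containerd.runtime.v1.linux"`, `"io.containerd.runc.v2"`)
    	fmt.Print(cfg)
    }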
	I0501 02:57:14.595594    4712 start.go:494] detecting cgroup driver to use...
	I0501 02:57:14.610122    4712 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0501 02:57:14.650440    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 02:57:14.689246    4712 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 02:57:14.740013    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 02:57:14.780524    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 02:57:14.822987    4712 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0501 02:57:14.889904    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 02:57:14.919061    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 02:57:14.983590    4712 ssh_runner.go:195] Run: which cri-dockerd
	I0501 02:57:15.008856    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0501 02:57:15.032703    4712 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0501 02:57:15.086991    4712 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0501 02:57:15.324922    4712 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0501 02:57:15.542551    4712 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0501 02:57:15.542551    4712 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0501 02:57:15.594658    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:57:15.826063    4712 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0501 02:57:18.399291    4712 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5732092s)
	I0501 02:57:18.412657    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0501 02:57:18.452282    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0501 02:57:18.491033    4712 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0501 02:57:18.702768    4712 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0501 02:57:18.928695    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:57:19.145438    4712 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0501 02:57:19.199070    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0501 02:57:19.242280    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:57:19.475811    4712 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0501 02:57:19.598548    4712 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0501 02:57:19.612590    4712 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0501 02:57:19.624279    4712 start.go:562] Will wait 60s for crictl version
	I0501 02:57:19.637235    4712 ssh_runner.go:195] Run: which crictl
	I0501 02:57:19.657683    4712 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 02:57:19.721351    4712 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0501 02:57:19.734095    4712 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0501 02:57:19.784976    4712 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0501 02:57:19.822576    4712 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0501 02:57:19.826041    4712 out.go:177]   - env NO_PROXY=172.28.217.218
	I0501 02:57:19.827741    4712 out.go:177]   - env NO_PROXY=172.28.217.218,172.28.213.142
	I0501 02:57:19.831635    4712 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0501 02:57:19.835639    4712 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0501 02:57:19.835639    4712 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0501 02:57:19.835639    4712 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0501 02:57:19.835639    4712 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:88:d7:f1 Flags:up|broadcast|multicast|running}
	I0501 02:57:19.838638    4712 ip.go:210] interface addr: fe80::916c:67e8:6e10:a19b/64
	I0501 02:57:19.838638    4712 ip.go:210] interface addr: 172.28.208.1/20
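The ip.go lines above scan the host's interfaces for the one whose name starts with the Hyper-V switch prefix, logging each non-match, then take its addresses (here 172.28.208.1/20 becomes host.minikube.internal). A sketch of that scan with net.Interfaces; the loop shape is mine, not minikube's code:

    package main

    import (
    	"fmt"
    	"net"
    	"strings"
    )

    func main() {
    	const prefix = "vEthernet (Default Switch)"
    	ifaces, err := net.Interfaces()
    	if err != nil {
    		panic(err)
    	}
    	for _, ifc := range ifaces {
    		if !strings.HasPrefix(ifc.Name, prefix) {
    			fmt.Printf("%q does not match prefix %q\n", ifc.Name, prefix)
    			continue
    		}
    		addrs, _ := ifc.Addrs() // both the link-local v6 and the v4 CIDR
    		for _, a := range addrs {
    			fmt.Println("interface addr:", a)
    		}
    	}
    }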
	I0501 02:57:19.851676    4712 ssh_runner.go:195] Run: grep 172.28.208.1	host.minikube.internal$ /etc/hosts
	I0501 02:57:19.858242    4712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 02:57:19.883254    4712 mustload.go:65] Loading cluster: ha-136200
	I0501 02:57:19.883656    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:57:19.884140    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:57:22.018331    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:57:22.018592    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:22.018658    4712 host.go:66] Checking if "ha-136200" exists ...
	I0501 02:57:22.019393    4712 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200 for IP: 172.28.216.62
	I0501 02:57:22.019393    4712 certs.go:194] generating shared ca certs ...
	I0501 02:57:22.019393    4712 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:57:22.020318    4712 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0501 02:57:22.020786    4712 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0501 02:57:22.021028    4712 certs.go:256] generating profile certs ...
	I0501 02:57:22.021028    4712 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\client.key
	I0501 02:57:22.021606    4712 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.cbcfb2e9
	I0501 02:57:22.021767    4712 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.cbcfb2e9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.217.218 172.28.213.142 172.28.216.62 172.28.223.254]
	I0501 02:57:22.149544    4712 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.cbcfb2e9 ...
	I0501 02:57:22.149544    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.cbcfb2e9: {Name:mk4837fbdb29e34df2c44991c600cda784a93d5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:57:22.150373    4712 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.cbcfb2e9 ...
	I0501 02:57:22.150373    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.cbcfb2e9: {Name:mkcff5432d26e17c25cf2a9709eb4553a31e99c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:57:22.152472    4712 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.cbcfb2e9 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt
	I0501 02:57:22.165924    4712 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.cbcfb2e9 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key
	I0501 02:57:22.166444    4712 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key
	I0501 02:57:22.166444    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0501 02:57:22.167623    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0501 02:57:22.167772    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0501 02:57:22.167772    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0501 02:57:22.168122    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0501 02:57:22.168280    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0501 02:57:22.168464    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0501 02:57:22.168464    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0501 02:57:22.169490    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem (1338 bytes)
	W0501 02:57:22.169490    4712 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288_empty.pem, impossibly tiny 0 bytes
	I0501 02:57:22.170595    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0501 02:57:22.170869    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0501 02:57:22.171164    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0501 02:57:22.171434    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0501 02:57:22.171670    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem (1708 bytes)
	I0501 02:57:22.172286    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /usr/share/ca-certificates/142882.pem
	I0501 02:57:22.172286    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:57:22.172286    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem -> /usr/share/ca-certificates/14288.pem
	I0501 02:57:22.172911    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:57:24.374168    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:57:24.374168    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:24.374904    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:57:26.980450    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:57:26.980450    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:26.980450    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:57:27.093857    4712 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0501 02:57:27.102183    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0501 02:57:27.141690    4712 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0501 02:57:27.150194    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0501 02:57:27.193806    4712 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0501 02:57:27.202957    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0501 02:57:27.254044    4712 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0501 02:57:27.262605    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0501 02:57:27.303214    4712 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0501 02:57:27.310453    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0501 02:57:27.348966    4712 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0501 02:57:27.356382    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0501 02:57:27.383468    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 02:57:27.437872    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 02:57:27.494095    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 02:57:27.544977    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0501 02:57:27.599083    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0501 02:57:27.652123    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0501 02:57:27.710792    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 02:57:27.766379    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 02:57:27.817284    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /usr/share/ca-certificates/142882.pem (1708 bytes)
	I0501 02:57:27.867949    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 02:57:27.930560    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem --> /usr/share/ca-certificates/14288.pem (1338 bytes)
	I0501 02:57:27.987875    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0501 02:57:28.025174    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0501 02:57:28.061492    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0501 02:57:28.099323    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0501 02:57:28.133169    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0501 02:57:28.168585    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0501 02:57:28.223450    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0501 02:57:28.292690    4712 ssh_runner.go:195] Run: openssl version
	I0501 02:57:28.315882    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142882.pem && ln -fs /usr/share/ca-certificates/142882.pem /etc/ssl/certs/142882.pem"
	I0501 02:57:28.353000    4712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142882.pem
	I0501 02:57:28.365096    4712 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:27 /usr/share/ca-certificates/142882.pem
	I0501 02:57:28.379858    4712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142882.pem
	I0501 02:57:28.406814    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142882.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 02:57:28.445706    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 02:57:28.482484    4712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:57:28.491120    4712 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:12 /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:57:28.507367    4712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:57:28.535421    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 02:57:28.574647    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14288.pem && ln -fs /usr/share/ca-certificates/14288.pem /etc/ssl/certs/14288.pem"
	I0501 02:57:28.616757    4712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14288.pem
	I0501 02:57:28.624484    4712 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:27 /usr/share/ca-certificates/14288.pem
	I0501 02:57:28.642485    4712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14288.pem
	I0501 02:57:28.665148    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14288.pem /etc/ssl/certs/51391683.0"
	I0501 02:57:28.706630    4712 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 02:57:28.714508    4712 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0501 02:57:28.714998    4712 kubeadm.go:928] updating node {m03 172.28.216.62 8443 v1.30.0 docker true true} ...
	I0501 02:57:28.715189    4712 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-136200-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.216.62
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP:172.28.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 02:57:28.715218    4712 kube-vip.go:111] generating kube-vip config ...
	I0501 02:57:28.727524    4712 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0501 02:57:28.767475    4712 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0501 02:57:28.767631    4712 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.28.223.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
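The static-pod manifest above is rendered per control-plane node; the only node-varying inputs are the VIP (172.28.223.254), the interface, and the lease tuning, with lb_enable switched on automatically as logged at kube-vip.go:163. A trimmed stand-in for that rendering using text/template; the full template lives in minikube's kube-vip.go and the field names here are taken from the output above:

    package main

    import (
    	"os"
    	"text/template"
    )

    const manifest = `    env:
        - name: vip_interface
          value: {{ .Interface }}
        - name: address
          value: {{ .VIP }}
        - name: lb_enable
          value: "true"
        - name: lb_port
          value: "8443"
    `

    func main() {
    	t := template.Must(template.New("kube-vip").Parse(manifest))
    	_ = t.Execute(os.Stdout, struct{ Interface, VIP string }{
    		"eth0", "172.28.223.254",
    	})
    }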
	I0501 02:57:28.783398    4712 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 02:57:28.801741    4712 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0501 02:57:28.815792    4712 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0501 02:57:28.837983    4712 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0501 02:57:28.838101    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0501 02:57:28.837983    4712 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256
	I0501 02:57:28.838226    4712 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256
	I0501 02:57:28.838396    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0501 02:57:28.855124    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:57:28.856182    4712 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0501 02:57:28.858128    4712 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0501 02:57:28.881905    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0501 02:57:28.881905    4712 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0501 02:57:28.882027    4712 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0501 02:57:28.882165    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0501 02:57:28.882277    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0501 02:57:28.898781    4712 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0501 02:57:28.959439    4712 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0501 02:57:28.959688    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
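The three transfers above follow minikube's dl.k8s.io pattern: each binary URL carries a checksum=file: suffix pointing at its .sha256 sidecar, which is verified before the binary is staged over scp. A standalone sketch of the same fetch-and-verify step (the sha256sum invocation is an assumption about how the sidecar is applied):

    VER=v1.30.0
    curl -LO "https://dl.k8s.io/release/${VER}/bin/linux/amd64/kubectl"
    curl -LO "https://dl.k8s.io/release/${VER}/bin/linux/amd64/kubectl.sha256"
    echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check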
	I0501 02:57:30.251192    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0501 02:57:30.272192    4712 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0501 02:57:30.311119    4712 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 02:57:30.353248    4712 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0501 02:57:30.407414    4712 ssh_runner.go:195] Run: grep 172.28.223.254	control-plane.minikube.internal$ /etc/hosts
	I0501 02:57:30.415360    4712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.223.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
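The one-liner above is an idempotent hosts-file edit: filter out any stale control-plane.minikube.internal entry, append the current VIP, and copy the temp file back into place. The same pattern, unescaped for readability (IP copied from the log):

    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      printf '172.28.223.254\tcontrol-plane.minikube.internal\n'
    } > /tmp/hosts.$$ && sudo cp /tmp/hosts.$$ /etc/hosts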
	I0501 02:57:30.454450    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:57:30.696464    4712 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 02:57:30.737201    4712 host.go:66] Checking if "ha-136200" exists ...
	I0501 02:57:30.801844    4712 start.go:316] joinCluster: &{Name:ha-136200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP:172.28.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.217.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.213.142 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.28.216.62 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:57:30.802126    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0501 02:57:30.802234    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:57:32.961923    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:57:32.961923    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:32.962279    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:57:35.600191    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:57:35.600191    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:35.601356    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:57:35.838006    4712 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0": (5.0358438s)
	I0501 02:57:35.838006    4712 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.28.216.62 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0501 02:57:35.838006    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 3455nt.3c342oggoxvi06jc --discovery-token-ca-cert-hash sha256:4798dcaffc6298c78af5ad06745de18c1231853c65c9db4cd09ab1be96f5e875 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-136200-m03 --control-plane --apiserver-advertise-address=172.28.216.62 --apiserver-bind-port=8443"
	I0501 02:58:21.819619    4712 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 3455nt.3c342oggoxvi06jc --discovery-token-ca-cert-hash sha256:4798dcaffc6298c78af5ad06745de18c1231853c65c9db4cd09ab1be96f5e875 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-136200-m03 --control-plane --apiserver-advertise-address=172.28.216.62 --apiserver-bind-port=8443": (45.981274s)
	I0501 02:58:21.819619    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0501 02:58:22.593318    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-136200-m03 minikube.k8s.io/updated_at=2024_05_01T02_58_22_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e minikube.k8s.io/name=ha-136200 minikube.k8s.io/primary=false
	I0501 02:58:22.788566    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-136200-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0501 02:58:22.987611    4712 start.go:318] duration metric: took 52.1853822s to joinCluster
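The 52s joinCluster span boils down to three steps: mint a non-expiring join token on the primary, run kubeadm join on m03 as an additional control plane, then enable kubelet. A condensed sketch, with the run-specific token and CA hash elided as placeholders:

    kubeadm token create --print-join-command --ttl=0            # on the primary
    kubeadm join control-plane.minikube.internal:8443 \
      --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
      --control-plane --apiserver-advertise-address=172.28.216.62 \
      --apiserver-bind-port=8443                                 # on m03
    systemctl daemon-reload && systemctl enable --now kubelet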
	I0501 02:58:22.987895    4712 start.go:234] Will wait 6m0s for node &{Name:m03 IP:172.28.216.62 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0501 02:58:23.012496    4712 out.go:177] * Verifying Kubernetes components...
	I0501 02:58:22.988142    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:58:23.031751    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:58:23.569395    4712 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 02:58:23.619961    4712 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0501 02:58:23.620228    4712 kapi.go:59] client config for ha-136200: &rest.Config{Host:"https://172.28.223.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-136200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-136200\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1b95ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0501 02:58:23.620770    4712 kubeadm.go:477] Overriding stale ClientConfig host https://172.28.223.254:8443 with https://172.28.217.218:8443
	I0501 02:58:23.621670    4712 node_ready.go:35] waiting up to 6m0s for node "ha-136200-m03" to be "Ready" ...
	I0501 02:58:23.621910    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:23.621910    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:23.621982    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:23.621982    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:23.637444    4712 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0501 02:58:24.133658    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:24.133658    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:24.133658    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:24.133658    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:24.139465    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:24.622867    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:24.622867    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:24.622867    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:24.622867    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:24.629524    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:58:25.129429    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:25.129429    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:25.129510    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:25.129510    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:25.135754    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:58:25.633954    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:25.633954    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:25.633954    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:25.633954    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:25.638650    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:58:25.639656    4712 node_ready.go:53] node "ha-136200-m03" has status "Ready":"False"
	I0501 02:58:26.123894    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:26.123894    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:26.123894    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:26.123894    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:26.129103    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:26.629161    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:26.629161    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:26.629161    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:26.629161    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:26.648167    4712 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0501 02:58:27.136028    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:27.136028    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:27.136028    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:27.136028    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:27.326021    4712 round_trippers.go:574] Response Status: 200 OK in 189 milliseconds
	I0501 02:58:27.623480    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:27.623600    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:27.623600    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:27.623600    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:27.629035    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:28.136433    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:28.136433    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:28.136626    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:28.136626    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:28.203923    4712 round_trippers.go:574] Response Status: 200 OK in 67 milliseconds
	I0501 02:58:28.205553    4712 node_ready.go:53] node "ha-136200-m03" has status "Ready":"False"
	I0501 02:58:28.636021    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:28.636185    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:28.636185    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:28.636185    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:28.646735    4712 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0501 02:58:29.122451    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:29.122515    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:29.122515    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:29.122515    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:29.140826    4712 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0501 02:58:29.629756    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:29.629756    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:29.629756    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:29.629756    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:29.637588    4712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:58:30.132174    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:30.132269    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:30.132269    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:30.132269    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:30.136921    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:58:30.632939    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:30.633022    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:30.633022    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:30.633022    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:30.638815    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:30.640044    4712 node_ready.go:53] node "ha-136200-m03" has status "Ready":"False"
	I0501 02:58:31.133378    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:31.133378    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:31.133378    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:31.133378    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:31.138754    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:31.633444    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:31.633511    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:31.633511    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:31.633511    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:31.639686    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:58:32.131317    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:32.131317    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:32.131317    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:32.131317    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:32.136414    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:32.629649    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:32.629649    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:32.629649    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:32.629649    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:32.634980    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:33.129878    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:33.129878    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:33.129878    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:33.129878    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:33.155125    4712 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I0501 02:58:33.156557    4712 node_ready.go:53] node "ha-136200-m03" has status "Ready":"False"
	I0501 02:58:33.629865    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:33.630060    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:33.630060    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:33.630060    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:33.636368    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:58:34.128412    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:34.128412    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:34.128412    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:34.128412    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:34.133022    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:58:34.629333    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:34.629333    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:34.629333    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:34.629333    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:34.635358    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:58:35.129272    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:35.129376    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.129376    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.129376    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.136662    4712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:58:35.137446    4712 node_ready.go:49] node "ha-136200-m03" has status "Ready":"True"
	I0501 02:58:35.137492    4712 node_ready.go:38] duration metric: took 11.5157372s for node "ha-136200-m03" to be "Ready" ...
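The polling loop above issues a GET against /api/v1/nodes/ha-136200-m03 roughly every 500ms until the Ready condition turns True. The same wait, sketched with kubectl (node name and the 6m budget come from the log):

    kubectl wait node/ha-136200-m03 --for=condition=Ready --timeout=6m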
	I0501 02:58:35.137492    4712 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 02:58:35.137635    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:58:35.137635    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.137635    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.137635    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.149133    4712 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0501 02:58:35.158917    4712 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2j8mj" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.159445    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2j8mj
	I0501 02:58:35.159565    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.159565    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.159651    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.170650    4712 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0501 02:58:35.172026    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:35.172026    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.172026    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.172026    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.180770    4712 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0501 02:58:35.180770    4712 pod_ready.go:92] pod "coredns-7db6d8ff4d-2j8mj" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:35.180770    4712 pod_ready.go:81] duration metric: took 21.3241ms for pod "coredns-7db6d8ff4d-2j8mj" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.180770    4712 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rm4gm" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.180770    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rm4gm
	I0501 02:58:35.180770    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.180770    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.180770    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.185805    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:35.187056    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:35.187056    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.187056    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.187056    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.191361    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:58:35.192405    4712 pod_ready.go:92] pod "coredns-7db6d8ff4d-rm4gm" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:35.192405    4712 pod_ready.go:81] duration metric: took 11.6358ms for pod "coredns-7db6d8ff4d-rm4gm" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.192405    4712 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.192405    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200
	I0501 02:58:35.192405    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.192405    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.192405    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.196117    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:58:35.197312    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:35.197312    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.197389    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.197389    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.201195    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:58:35.201924    4712 pod_ready.go:92] pod "etcd-ha-136200" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:35.201924    4712 pod_ready.go:81] duration metric: took 9.5188ms for pod "etcd-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.201924    4712 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.202054    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:58:35.202195    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.202195    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.202195    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.208450    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:58:35.209323    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:58:35.209323    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.209323    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.209323    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.212935    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:58:35.214190    4712 pod_ready.go:92] pod "etcd-ha-136200-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:35.214190    4712 pod_ready.go:81] duration metric: took 12.2652ms for pod "etcd-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.214190    4712 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-136200-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.330301    4712 request.go:629] Waited for 115.8713ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m03
	I0501 02:58:35.330574    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m03
	I0501 02:58:35.330574    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.330574    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.330574    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.338021    4712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:58:35.534070    4712 request.go:629] Waited for 194.5208ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:35.534353    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:35.534353    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.534353    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.534353    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.540932    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:58:35.541927    4712 pod_ready.go:92] pod "etcd-ha-136200-m03" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:35.541927    4712 pod_ready.go:81] duration metric: took 327.673ms for pod "etcd-ha-136200-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.541927    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.737879    4712 request.go:629] Waited for 195.951ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-136200
	I0501 02:58:35.738683    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-136200
	I0501 02:58:35.738683    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.738683    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.738683    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.743861    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:58:35.940254    4712 request.go:629] Waited for 195.0246ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:35.940254    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:35.940254    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.940254    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.940254    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.943091    4712 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:58:35.949355    4712 pod_ready.go:92] pod "kube-apiserver-ha-136200" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:35.949355    4712 pod_ready.go:81] duration metric: took 407.425ms for pod "kube-apiserver-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.949355    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:36.143537    4712 request.go:629] Waited for 193.9374ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-136200-m02
	I0501 02:58:36.143801    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-136200-m02
	I0501 02:58:36.143835    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:36.143835    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:36.143835    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:36.149992    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:36.331653    4712 request.go:629] Waited for 180.2785ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:58:36.331653    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:58:36.331653    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:36.331653    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:36.331653    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:36.337290    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:36.338458    4712 pod_ready.go:92] pod "kube-apiserver-ha-136200-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:36.338521    4712 pod_ready.go:81] duration metric: took 389.1629ms for pod "kube-apiserver-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:36.338521    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-136200-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:36.533514    4712 request.go:629] Waited for 194.8709ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-136200-m03
	I0501 02:58:36.533967    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-136200-m03
	I0501 02:58:36.534181    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:36.534181    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:36.534181    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:36.548236    4712 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0501 02:58:36.737561    4712 request.go:629] Waited for 188.1304ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:36.737864    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:36.737942    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:36.737942    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:36.738002    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:36.742410    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:58:36.743400    4712 pod_ready.go:92] pod "kube-apiserver-ha-136200-m03" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:36.743400    4712 pod_ready.go:81] duration metric: took 404.8131ms for pod "kube-apiserver-ha-136200-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:36.743400    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:36.942223    4712 request.go:629] Waited for 198.605ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-136200
	I0501 02:58:36.942445    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-136200
	I0501 02:58:36.942445    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:36.942445    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:36.942445    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:36.947749    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:37.131974    4712 request.go:629] Waited for 183.3149ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:37.132232    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:37.132323    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:37.132323    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:37.132323    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:37.137476    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:37.138446    4712 pod_ready.go:92] pod "kube-controller-manager-ha-136200" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:37.138446    4712 pod_ready.go:81] duration metric: took 395.044ms for pod "kube-controller-manager-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:37.138446    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:37.333778    4712 request.go:629] Waited for 195.2258ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-136200-m02
	I0501 02:58:37.334044    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-136200-m02
	I0501 02:58:37.334044    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:37.334044    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:37.334044    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:37.338704    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:58:37.538179    4712 request.go:629] Waited for 197.0874ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:58:37.538437    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:58:37.538500    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:37.538500    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:37.538500    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:37.544773    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:58:37.544773    4712 pod_ready.go:92] pod "kube-controller-manager-ha-136200-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:37.544773    4712 pod_ready.go:81] duration metric: took 406.3235ms for pod "kube-controller-manager-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:37.544773    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-136200-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:37.743876    4712 request.go:629] Waited for 199.1018ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-136200-m03
	I0501 02:58:37.744106    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-136200-m03
	I0501 02:58:37.744106    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:37.744106    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:37.744106    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:37.749628    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:37.931954    4712 request.go:629] Waited for 180.0772ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:37.932054    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:37.932132    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:37.932132    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:37.932132    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:37.937302    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:37.937875    4712 pod_ready.go:92] pod "kube-controller-manager-ha-136200-m03" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:37.937875    4712 pod_ready.go:81] duration metric: took 393.0991ms for pod "kube-controller-manager-ha-136200-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:37.937875    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8f67k" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:38.134928    4712 request.go:629] Waited for 196.7268ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8f67k
	I0501 02:58:38.134928    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8f67k
	I0501 02:58:38.135164    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:38.135164    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:38.135164    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:38.151320    4712 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0501 02:58:38.340422    4712 request.go:629] Waited for 186.7144ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:38.340523    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:38.340523    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:38.340523    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:38.340523    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:38.344835    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:58:38.346933    4712 pod_ready.go:92] pod "kube-proxy-8f67k" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:38.347124    4712 pod_ready.go:81] duration metric: took 409.2461ms for pod "kube-proxy-8f67k" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:38.347124    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9ml9x" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:38.529397    4712 request.go:629] Waited for 182.0139ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9ml9x
	I0501 02:58:38.529683    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9ml9x
	I0501 02:58:38.529776    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:38.529776    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:38.529776    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:38.535530    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:38.733704    4712 request.go:629] Waited for 197.3369ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:38.733854    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:38.733854    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:38.733854    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:38.733854    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:38.739456    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:38.741035    4712 pod_ready.go:92] pod "kube-proxy-9ml9x" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:38.741035    4712 pod_ready.go:81] duration metric: took 393.9082ms for pod "kube-proxy-9ml9x" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:38.741141    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zj5jv" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:38.936294    4712 request.go:629] Waited for 194.9804ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zj5jv
	I0501 02:58:38.936492    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zj5jv
	I0501 02:58:38.936492    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:38.936492    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:38.936492    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:38.941904    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:39.139076    4712 request.go:629] Waited for 195.5675ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:58:39.139516    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:58:39.139516    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:39.139516    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:39.139590    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:39.146156    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:58:39.146839    4712 pod_ready.go:92] pod "kube-proxy-zj5jv" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:39.147389    4712 pod_ready.go:81] duration metric: took 406.2452ms for pod "kube-proxy-zj5jv" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:39.147389    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:39.331771    4712 request.go:629] Waited for 183.3466ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200
	I0501 02:58:39.331839    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200
	I0501 02:58:39.331839    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:39.331839    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:39.331839    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:39.338962    4712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:58:39.529638    4712 request.go:629] Waited for 189.8551ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:39.529880    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:39.529880    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:39.529880    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:39.529880    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:39.535423    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:39.536281    4712 pod_ready.go:92] pod "kube-scheduler-ha-136200" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:39.536496    4712 pod_ready.go:81] duration metric: took 389.1041ms for pod "kube-scheduler-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:39.536496    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:39.733532    4712 request.go:629] Waited for 196.8225ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200-m02
	I0501 02:58:39.733532    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200-m02
	I0501 02:58:39.733755    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:39.733755    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:39.733755    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:39.738768    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:39.936556    4712 request.go:629] Waited for 196.8464ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:58:39.936957    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:58:39.936957    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:39.936957    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:39.937066    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:39.942275    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:39.942447    4712 pod_ready.go:92] pod "kube-scheduler-ha-136200-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:39.943009    4712 pod_ready.go:81] duration metric: took 406.5101ms for pod "kube-scheduler-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:39.943009    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-136200-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:40.137743    4712 request.go:629] Waited for 194.2926ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200-m03
	I0501 02:58:40.137974    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200-m03
	I0501 02:58:40.137974    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:40.138045    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:40.138045    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:40.143795    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:40.340161    4712 request.go:629] Waited for 194.6485ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:40.340307    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:40.340307    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:40.340368    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:40.340368    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:40.346127    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:40.347243    4712 pod_ready.go:92] pod "kube-scheduler-ha-136200-m03" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:40.347243    4712 pod_ready.go:81] duration metric: took 404.2307ms for pod "kube-scheduler-ha-136200-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:40.347243    4712 pod_ready.go:38] duration metric: took 5.2097122s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
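Each pod wait above pairs a GET of the pod with a GET of its node; the recurring ~190ms "Waited ... due to client-side throttling" lines are consistent with client-go's default 5 QPS rate limiter (QPS:0, Burst:0 in the client config above means defaults apply). A rough kubectl equivalent over the same label selectors:

    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
               component=kube-controller-manager k8s-app=kube-proxy \
               component=kube-scheduler; do
      kubectl -n kube-system wait pod -l "$sel" --for=condition=Ready --timeout=6m
    done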
	I0501 02:58:40.347243    4712 api_server.go:52] waiting for apiserver process to appear ...
	I0501 02:58:40.361809    4712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:58:40.399669    4712 api_server.go:72] duration metric: took 17.4115847s to wait for apiserver process to appear ...
	I0501 02:58:40.399766    4712 api_server.go:88] waiting for apiserver healthz status ...
	I0501 02:58:40.399822    4712 api_server.go:253] Checking apiserver healthz at https://172.28.217.218:8443/healthz ...
	I0501 02:58:40.410080    4712 api_server.go:279] https://172.28.217.218:8443/healthz returned 200:
	ok
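The healthz probe is a plain unauthenticated GET; reproduced with curl (host and port from the log, -k skips TLS verification for brevity):

    curl -k https://172.28.217.218:8443/healthz   # prints "ok" when healthy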
	I0501 02:58:40.410375    4712 round_trippers.go:463] GET https://172.28.217.218:8443/version
	I0501 02:58:40.410503    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:40.410503    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:40.410503    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:40.412638    4712 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:58:40.413725    4712 api_server.go:141] control plane version: v1.30.0
	I0501 02:58:40.413725    4712 api_server.go:131] duration metric: took 13.9591ms to wait for apiserver health ...
	I0501 02:58:40.413725    4712 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 02:58:40.543767    4712 request.go:629] Waited for 129.9651ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:58:40.543975    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:58:40.543975    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:40.543975    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:40.543975    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:40.554206    4712 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0501 02:58:40.565423    4712 system_pods.go:59] 24 kube-system pods found
	I0501 02:58:40.565423    4712 system_pods.go:61] "coredns-7db6d8ff4d-2j8mj" [f945c979-ae51-4c8e-acf9-105adc3c83bc] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "coredns-7db6d8ff4d-rm4gm" [87b284b3-e8e1-452a-8c72-41a8bec62505] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "etcd-ha-136200" [509a726d-e9a1-4922-8e7e-f3d91ddef75f] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "etcd-ha-136200-m02" [8122eb28-1fdf-4ddf-ab30-c29e8bcb83c0] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "etcd-ha-136200-m03" [5f77fdbc-d14d-4d42-9880-fc7e5b2c58b8] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kindnet-kb2x4" [6e660648-3dce-469f-a2c2-c99f445ceb20] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kindnet-rlfkk" [ae08f4b9-98a8-4faf-ab4a-f04e900375bf] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kindnet-sj2rc" [c0e605a0-1182-4977-a8ba-fabe9617bd3c] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kube-apiserver-ha-136200" [53ea7d41-7132-4c89-9dbd-bedb2267b55f] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kube-apiserver-ha-136200-m02" [fc4036e1-5cc9-4f27-8299-97ee4a29e8b4] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kube-apiserver-ha-136200-m03" [cf2822d7-29da-4727-b4c1-19b593abbce8] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kube-controller-manager-ha-136200" [4c988ab2-e056-4a0e-88c9-b62839c84d9f] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kube-controller-manager-ha-136200-m02" [7a617a7e-7413-4f42-bfe2-763b7ace71ca] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kube-controller-manager-ha-136200-m03" [f72989a2-322b-4b6d-884f-8888b7fb6e36] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kube-proxy-8f67k" [9dedea03-3066-4852-98e2-10190699b2c5] Running
	I0501 02:58:40.566039    4712 system_pods.go:61] "kube-proxy-9ml9x" [c36f2b4f-ad90-4070-adf1-1ac165f86fdd] Running
	I0501 02:58:40.566039    4712 system_pods.go:61] "kube-proxy-zj5jv" [1802b341-6ac6-46b0-99a3-db02ae5d8e46] Running
	I0501 02:58:40.566039    4712 system_pods.go:61] "kube-scheduler-ha-136200" [6be37365-544a-4367-9852-6eaa5b60e6ad] Running
	I0501 02:58:40.566039    4712 system_pods.go:61] "kube-scheduler-ha-136200-m02" [b2ae6bb2-989b-4598-99e3-f8494b006f3e] Running
	I0501 02:58:40.566039    4712 system_pods.go:61] "kube-scheduler-ha-136200-m03" [79e48699-dd30-47da-8e29-685b9fb437b8] Running
	I0501 02:58:40.566039    4712 system_pods.go:61] "kube-vip-ha-136200" [f6f631ac-0ba9-413a-8810-8a80e4be81b8] Running
	I0501 02:58:40.566039    4712 system_pods.go:61] "kube-vip-ha-136200-m02" [598e76fa-0703-40eb-a62c-f3947f06d0e0] Running
	I0501 02:58:40.566039    4712 system_pods.go:61] "kube-vip-ha-136200-m03" [a1bd8449-1900-4366-86a5-49e758a44ebd] Running
	I0501 02:58:40.566039    4712 system_pods.go:61] "storage-provisioner" [ca2bdb41-e7f0-4aa4-b343-0e3a85a4c04e] Running
	I0501 02:58:40.566039    4712 system_pods.go:74] duration metric: took 152.3128ms to wait for pod list to return data ...
	I0501 02:58:40.566039    4712 default_sa.go:34] waiting for default service account to be created ...
	I0501 02:58:40.731110    4712 request.go:629] Waited for 164.8435ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/default/serviceaccounts
	I0501 02:58:40.731110    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/default/serviceaccounts
	I0501 02:58:40.731110    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:40.731110    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:40.731110    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:40.736937    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:40.737529    4712 default_sa.go:45] found service account: "default"
	I0501 02:58:40.737568    4712 default_sa.go:55] duration metric: took 171.5277ms for default service account to be created ...
	I0501 02:58:40.737568    4712 system_pods.go:116] waiting for k8s-apps to be running ...
	I0501 02:58:40.936328    4712 request.go:629] Waited for 198.4062ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:58:40.936390    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:58:40.936390    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:40.936390    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:40.936390    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:40.946796    4712 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0501 02:58:40.961809    4712 system_pods.go:86] 24 kube-system pods found
	I0501 02:58:40.961809    4712 system_pods.go:89] "coredns-7db6d8ff4d-2j8mj" [f945c979-ae51-4c8e-acf9-105adc3c83bc] Running
	I0501 02:58:40.961809    4712 system_pods.go:89] "coredns-7db6d8ff4d-rm4gm" [87b284b3-e8e1-452a-8c72-41a8bec62505] Running
	I0501 02:58:40.961809    4712 system_pods.go:89] "etcd-ha-136200" [509a726d-e9a1-4922-8e7e-f3d91ddef75f] Running
	I0501 02:58:40.961809    4712 system_pods.go:89] "etcd-ha-136200-m02" [8122eb28-1fdf-4ddf-ab30-c29e8bcb83c0] Running
	I0501 02:58:40.961809    4712 system_pods.go:89] "etcd-ha-136200-m03" [5f77fdbc-d14d-4d42-9880-fc7e5b2c58b8] Running
	I0501 02:58:40.961809    4712 system_pods.go:89] "kindnet-kb2x4" [6e660648-3dce-469f-a2c2-c99f445ceb20] Running
	I0501 02:58:40.961809    4712 system_pods.go:89] "kindnet-rlfkk" [ae08f4b9-98a8-4faf-ab4a-f04e900375bf] Running
	I0501 02:58:40.961809    4712 system_pods.go:89] "kindnet-sj2rc" [c0e605a0-1182-4977-a8ba-fabe9617bd3c] Running
	I0501 02:58:40.961809    4712 system_pods.go:89] "kube-apiserver-ha-136200" [53ea7d41-7132-4c89-9dbd-bedb2267b55f] Running
	I0501 02:58:40.961809    4712 system_pods.go:89] "kube-apiserver-ha-136200-m02" [fc4036e1-5cc9-4f27-8299-97ee4a29e8b4] Running
	I0501 02:58:40.962364    4712 system_pods.go:89] "kube-apiserver-ha-136200-m03" [cf2822d7-29da-4727-b4c1-19b593abbce8] Running
	I0501 02:58:40.962364    4712 system_pods.go:89] "kube-controller-manager-ha-136200" [4c988ab2-e056-4a0e-88c9-b62839c84d9f] Running
	I0501 02:58:40.962364    4712 system_pods.go:89] "kube-controller-manager-ha-136200-m02" [7a617a7e-7413-4f42-bfe2-763b7ace71ca] Running
	I0501 02:58:40.962364    4712 system_pods.go:89] "kube-controller-manager-ha-136200-m03" [f72989a2-322b-4b6d-884f-8888b7fb6e36] Running
	I0501 02:58:40.962364    4712 system_pods.go:89] "kube-proxy-8f67k" [9dedea03-3066-4852-98e2-10190699b2c5] Running
	I0501 02:58:40.962364    4712 system_pods.go:89] "kube-proxy-9ml9x" [c36f2b4f-ad90-4070-adf1-1ac165f86fdd] Running
	I0501 02:58:40.962434    4712 system_pods.go:89] "kube-proxy-zj5jv" [1802b341-6ac6-46b0-99a3-db02ae5d8e46] Running
	I0501 02:58:40.962434    4712 system_pods.go:89] "kube-scheduler-ha-136200" [6be37365-544a-4367-9852-6eaa5b60e6ad] Running
	I0501 02:58:40.962434    4712 system_pods.go:89] "kube-scheduler-ha-136200-m02" [b2ae6bb2-989b-4598-99e3-f8494b006f3e] Running
	I0501 02:58:40.962434    4712 system_pods.go:89] "kube-scheduler-ha-136200-m03" [79e48699-dd30-47da-8e29-685b9fb437b8] Running
	I0501 02:58:40.962434    4712 system_pods.go:89] "kube-vip-ha-136200" [f6f631ac-0ba9-413a-8810-8a80e4be81b8] Running
	I0501 02:58:40.962434    4712 system_pods.go:89] "kube-vip-ha-136200-m02" [598e76fa-0703-40eb-a62c-f3947f06d0e0] Running
	I0501 02:58:40.962434    4712 system_pods.go:89] "kube-vip-ha-136200-m03" [a1bd8449-1900-4366-86a5-49e758a44ebd] Running
	I0501 02:58:40.962497    4712 system_pods.go:89] "storage-provisioner" [ca2bdb41-e7f0-4aa4-b343-0e3a85a4c04e] Running
	I0501 02:58:40.962521    4712 system_pods.go:126] duration metric: took 224.9515ms to wait for k8s-apps to be running ...
	I0501 02:58:40.962521    4712 system_svc.go:44] waiting for kubelet service to be running ....
	I0501 02:58:40.975583    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:58:41.007354    4712 system_svc.go:56] duration metric: took 44.7618ms WaitForService to wait for kubelet
	I0501 02:58:41.007354    4712 kubeadm.go:576] duration metric: took 18.0193266s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
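The kubelet check above is nothing more than the logged command run over SSH; systemctl is-active exits 0 when the unit is active and non-zero otherwise, and --quiet suppresses the textual state. A sketch of the same check with os/exec (run locally here rather than through minikube's ssh_runner):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the logged command; a nil error means exit status 0, i.e. active.
	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil)
}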
	I0501 02:58:41.007354    4712 node_conditions.go:102] verifying NodePressure condition ...
	I0501 02:58:41.140806    4712 request.go:629] Waited for 133.382ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes
	I0501 02:58:41.140922    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes
	I0501 02:58:41.140964    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:41.140964    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:41.141046    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:41.151428    4712 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0501 02:58:41.153995    4712 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 02:58:41.154053    4712 node_conditions.go:123] node cpu capacity is 2
	I0501 02:58:41.154053    4712 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 02:58:41.154113    4712 node_conditions.go:123] node cpu capacity is 2
	I0501 02:58:41.154113    4712 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 02:58:41.154113    4712 node_conditions.go:123] node cpu capacity is 2
	I0501 02:58:41.154113    4712 node_conditions.go:105] duration metric: took 146.7575ms to run NodePressure ...
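The NodePressure verification above lists every node and reads capacity from node.status; the three identical storage/cpu pairs are the three control-plane nodes. A client-go sketch that prints the same fields (hypothetical, same setup as the pod-Ready sketch earlier):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		// Matches the node_conditions.go lines above: 17734596Ki and 2 CPUs per node.
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}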
	I0501 02:58:41.154113    4712 start.go:240] waiting for startup goroutines ...
	I0501 02:58:41.154113    4712 start.go:254] writing updated cluster config ...
	I0501 02:58:41.168562    4712 ssh_runner.go:195] Run: rm -f paused
	I0501 02:58:41.321592    4712 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0501 02:58:41.326673    4712 out.go:177] * Done! kubectl is now configured to use "ha-136200" cluster and "default" namespace by default
	
	
	==> Docker <==
	May 01 02:50:57 ha-136200 dockerd[1335]: time="2024-05-01T02:50:57.482589007Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 02:50:57 ha-136200 dockerd[1335]: time="2024-05-01T02:50:57.482784408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 02:50:57 ha-136200 dockerd[1335]: time="2024-05-01T02:50:57.483246110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 02:50:57 ha-136200 dockerd[1335]: time="2024-05-01T02:50:57.676182761Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 02:50:57 ha-136200 dockerd[1335]: time="2024-05-01T02:50:57.679018677Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 02:50:57 ha-136200 dockerd[1335]: time="2024-05-01T02:50:57.679207678Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 02:50:57 ha-136200 dockerd[1335]: time="2024-05-01T02:50:57.679887882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 02:59:19 ha-136200 dockerd[1335]: time="2024-05-01T02:59:19.812342061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 02:59:19 ha-136200 dockerd[1335]: time="2024-05-01T02:59:19.812581962Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 02:59:19 ha-136200 dockerd[1335]: time="2024-05-01T02:59:19.812601063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 02:59:19 ha-136200 dockerd[1335]: time="2024-05-01T02:59:19.813284867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 02:59:20 ha-136200 cri-dockerd[1232]: time="2024-05-01T02:59:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c61d49828a30cad795117fa540b839a76d74dc6aaa64f0cc1a3a17e5ca07eff2/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	May 01 02:59:21 ha-136200 cri-dockerd[1232]: time="2024-05-01T02:59:21Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	May 01 02:59:21 ha-136200 dockerd[1335]: time="2024-05-01T02:59:21.649291489Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 02:59:21 ha-136200 dockerd[1335]: time="2024-05-01T02:59:21.649563690Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 02:59:21 ha-136200 dockerd[1335]: time="2024-05-01T02:59:21.649688091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 02:59:21 ha-136200 dockerd[1335]: time="2024-05-01T02:59:21.649852992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 03:00:25 ha-136200 dockerd[1329]: 2024/05/01 03:00:25 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:00:25 ha-136200 dockerd[1329]: 2024/05/01 03:00:25 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:00:25 ha-136200 dockerd[1329]: 2024/05/01 03:00:25 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:00:25 ha-136200 dockerd[1329]: 2024/05/01 03:00:25 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:00:25 ha-136200 dockerd[1329]: 2024/05/01 03:00:25 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:00:25 ha-136200 dockerd[1329]: 2024/05/01 03:00:25 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:00:25 ha-136200 dockerd[1329]: 2024/05/01 03:00:25 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:00:25 ha-136200 dockerd[1329]: 2024/05/01 03:00:25 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
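The repeated dockerd warnings above are Go's net/http complaining that a handler wrote the response header twice; only the first WriteHeader call takes effect. A minimal reproduction:

package main

import "net/http"

func handler(w http.ResponseWriter, r *http.Request) {
	w.WriteHeader(http.StatusOK)
	// The second call is ignored and logged as
	// "http: superfluous response.WriteHeader call from ...".
	w.WriteHeader(http.StatusInternalServerError)
}

func main() {
	http.HandleFunc("/", handler)
	http.ListenAndServe(":8080", nil)
}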
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bb23816e7b6b8       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   5 minutes ago       Running             busybox                   0                   c61d49828a30c       busybox-fc5497c4f-6mlkh
	229343dc7dba5       cbb01a7bd410d                                                                                         13 minutes ago      Running             coredns                   0                   54bbf0662d422       coredns-7db6d8ff4d-rm4gm
	247f815bf0531       6e38f40d628db                                                                                         13 minutes ago      Running             storage-provisioner       0                   aaa3d1f50041e       storage-provisioner
	822aaf8c270e3       cbb01a7bd410d                                                                                         13 minutes ago      Running             coredns                   0                   cadf8314e6ab7       coredns-7db6d8ff4d-2j8mj
	c09511b7df643       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              13 minutes ago      Running             kindnet-cni               0                   bdd01e6cca1ed       kindnet-sj2rc
	562cd55ab9702       a0bf559e280cf                                                                                         14 minutes ago      Running             kube-proxy                0                   579e0dba427c2       kube-proxy-8f67k
	1c063bfe224cd       ghcr.io/kube-vip/kube-vip@sha256:82698885b3b5f926cd940b7000549f3d43850cb6565a708162900c1475a83016     14 minutes ago      Running             kube-vip                  0                   7f28f99b3c8a8       kube-vip-ha-136200
	b6454ceb34cad       259c8277fcbbc                                                                                         14 minutes ago      Running             kube-scheduler            0                   e6cf1f3e651b3       kube-scheduler-ha-136200
	8ff4bf0570939       c42f13656d0b2                                                                                         14 minutes ago      Running             kube-apiserver            0                   2455e947d4906       kube-apiserver-ha-136200
	8fa3aa565b366       c7aad43836fa5                                                                                         14 minutes ago      Running             kube-controller-manager   0                   c7e42fd34e247       kube-controller-manager-ha-136200
	8b0d01885db55       3861cfcd7c04c                                                                                         14 minutes ago      Running             etcd                      0                   da46759fd8e15       etcd-ha-136200
	
	
	==> coredns [229343dc7dba] <==
	[INFO] 10.244.1.2:38893 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.138771945s
	[INFO] 10.244.1.2:42460 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000276501s
	[INFO] 10.244.1.2:46275 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.0000672s
	[INFO] 10.244.2.2:34687 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.040099987s
	[INFO] 10.244.2.2:56378 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000284202s
	[INFO] 10.244.2.2:56092 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000345802s
	[INFO] 10.244.2.2:52745 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000349302s
	[INFO] 10.244.2.2:34736 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000095201s
	[INFO] 10.244.0.4:51567 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000267102s
	[INFO] 10.244.0.4:33148 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000178701s
	[INFO] 10.244.1.2:43398 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000089301s
	[INFO] 10.244.1.2:52211 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001122s
	[INFO] 10.244.1.2:35470 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.013228661s
	[INFO] 10.244.1.2:40781 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000174701s
	[INFO] 10.244.1.2:45257 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000274201s
	[INFO] 10.244.1.2:36114 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000165601s
	[INFO] 10.244.2.2:56600 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000371102s
	[INFO] 10.244.2.2:39742 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000250502s
	[INFO] 10.244.0.4:45933 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116901s
	[INFO] 10.244.0.4:53681 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000082001s
	[INFO] 10.244.2.2:38830 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000232701s
	[INFO] 10.244.0.4:51196 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.001489507s
	[INFO] 10.244.0.4:58773 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000264301s
	[INFO] 10.244.0.4:43314 - 5 "PTR IN 1.208.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.013461063s
	[INFO] 10.244.1.2:41778 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000092301s
	
	
	==> coredns [822aaf8c270e] <==
	[INFO] 10.244.2.2:41813 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000217501s
	[INFO] 10.244.2.2:54888 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.032885853s
	[INFO] 10.244.0.4:49712 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126101s
	[INFO] 10.244.0.4:55974 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012564658s
	[INFO] 10.244.0.4:45253 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000139901s
	[INFO] 10.244.0.4:60045 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001515s
	[INFO] 10.244.0.4:39879 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000175501s
	[INFO] 10.244.0.4:42089 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000310501s
	[INFO] 10.244.1.2:53821 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111101s
	[INFO] 10.244.1.2:42651 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000116201s
	[INFO] 10.244.2.2:34505 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078s
	[INFO] 10.244.2.2:54873 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001606s
	[INFO] 10.244.0.4:60573 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001105s
	[INFO] 10.244.0.4:37086 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000727s
	[INFO] 10.244.1.2:52370 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123901s
	[INFO] 10.244.1.2:35190 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000277501s
	[INFO] 10.244.1.2:42611 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000158301s
	[INFO] 10.244.1.2:36993 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000280201s
	[INFO] 10.244.2.2:52181 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000206701s
	[INFO] 10.244.2.2:37229 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000092101s
	[INFO] 10.244.2.2:56027 - 5 "PTR IN 1.208.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001251s
	[INFO] 10.244.0.4:55246 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000211601s
	[INFO] 10.244.1.2:57784 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000270801s
	[INFO] 10.244.1.2:39482 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001183s
	[INFO] 10.244.1.2:53277 - 5 "PTR IN 1.208.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000078801s
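Each CoreDNS line above comes from its log plugin: client address, query ID, a quoted "TYPE CLASS name proto size DO bufsize" section, then rcode, response flags, response size, and duration. A small parser sketch under that assumed format (hypothetical code):

package main

import (
	"fmt"
	"regexp"
)

// Assumed shape of the CoreDNS log-plugin lines shown above.
var lineRE = regexp.MustCompile(`\[INFO\] (\S+) - \d+ "(\S+) IN (\S+) \S+ \d+ \S+ \d+" (\S+) \S+ \d+ (\S+)`)

func main() {
	line := `[INFO] 10.244.1.2:38893 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.138771945s`
	m := lineRE.FindStringSubmatch(line)
	if m == nil {
		panic("line did not match the assumed format")
	}
	fmt.Printf("client=%s qtype=%s name=%s rcode=%s duration=%s\n", m[1], m[2], m[3], m[4], m[5])
}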
	
	
	==> describe nodes <==
	Name:               ha-136200
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-136200
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	                    minikube.k8s.io/name=ha-136200
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_01T02_50_30_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 May 2024 02:50:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-136200
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 May 2024 03:04:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 May 2024 03:04:38 +0000   Wed, 01 May 2024 02:50:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 May 2024 03:04:38 +0000   Wed, 01 May 2024 02:50:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 May 2024 03:04:38 +0000   Wed, 01 May 2024 02:50:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 May 2024 03:04:38 +0000   Wed, 01 May 2024 02:50:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.217.218
	  Hostname:    ha-136200
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 bd5a02b3729c454c81fac1ddb77470ea
	  System UUID:                feb48805-7018-ee45-9dd1-70d50cb8dabe
	  Boot ID:                    f931e3ee-8c2d-4859-8d97-8671a4247530
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-6mlkh              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m29s
	  kube-system                 coredns-7db6d8ff4d-2j8mj             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-7db6d8ff4d-rm4gm             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-ha-136200                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-sj2rc                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-136200             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-136200    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-8f67k                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-136200             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-136200                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node ha-136200 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node ha-136200 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node ha-136200 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m                kubelet          Node ha-136200 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                kubelet          Node ha-136200 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet          Node ha-136200 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14m                node-controller  Node ha-136200 event: Registered Node ha-136200 in Controller
	  Normal  NodeReady                13m                kubelet          Node ha-136200 status is now: NodeReady
	  Normal  RegisteredNode           10m                node-controller  Node ha-136200 event: Registered Node ha-136200 in Controller
	  Normal  RegisteredNode           6m10s              node-controller  Node ha-136200 event: Registered Node ha-136200 in Controller
	
	
	Name:               ha-136200-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-136200-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	                    minikube.k8s.io/name=ha-136200
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_01T02_54_28_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 May 2024 02:54:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-136200-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 May 2024 03:04:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 May 2024 03:04:35 +0000   Wed, 01 May 2024 02:54:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 May 2024 03:04:35 +0000   Wed, 01 May 2024 02:54:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 May 2024 03:04:35 +0000   Wed, 01 May 2024 02:54:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 May 2024 03:04:35 +0000   Wed, 01 May 2024 02:54:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.213.142
	  Hostname:    ha-136200-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 b20b8a63378b4be990a267d65bc5017b
	  System UUID:                f54ef658-ded9-8245-9d86-fec94474eff5
	  Boot ID:                    b6a9b4fd-1abd-4ef4-a3a8-bc0c39ab4624
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-pc6wt                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m29s
	  kube-system                 etcd-ha-136200-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-kb2x4                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-ha-136200-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-136200-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-zj5jv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-136200-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-136200-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 10m                kube-proxy       
	  Normal  RegisteredNode           10m                node-controller  Node ha-136200-m02 event: Registered Node ha-136200-m02 in Controller
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node ha-136200-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node ha-136200-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node ha-136200-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10m                node-controller  Node ha-136200-m02 event: Registered Node ha-136200-m02 in Controller
	  Normal  RegisteredNode           6m10s              node-controller  Node ha-136200-m02 event: Registered Node ha-136200-m02 in Controller
	
	
	Name:               ha-136200-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-136200-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	                    minikube.k8s.io/name=ha-136200
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_01T02_58_22_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 May 2024 02:58:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-136200-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 May 2024 03:04:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 May 2024 02:59:47 +0000   Wed, 01 May 2024 02:58:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 May 2024 02:59:47 +0000   Wed, 01 May 2024 02:58:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 May 2024 02:59:47 +0000   Wed, 01 May 2024 02:58:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 May 2024 02:59:47 +0000   Wed, 01 May 2024 02:58:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.216.62
	  Hostname:    ha-136200-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 352997c1e27d48bb8dff5ae5f17e228a
	  System UUID:                0e4a669f-6d5f-be47-a143-5d2db1558741
	  Boot ID:                    8ce378d2-4a7e-40de-aab0-8bc599c3d157
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-2gr4g                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m29s
	  kube-system                 etcd-ha-136200-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m31s
	  kube-system                 kindnet-rlfkk                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m33s
	  kube-system                 kube-apiserver-ha-136200-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m29s
	  kube-system                 kube-controller-manager-ha-136200-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m29s
	  kube-system                 kube-proxy-9ml9x                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m33s
	  kube-system                 kube-scheduler-ha-136200-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m29s
	  kube-system                 kube-vip-ha-136200-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m27s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m33s (x8 over 6m33s)  kubelet          Node ha-136200-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m33s (x8 over 6m33s)  kubelet          Node ha-136200-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m33s (x7 over 6m33s)  kubelet          Node ha-136200-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m31s                  node-controller  Node ha-136200-m03 event: Registered Node ha-136200-m03 in Controller
	  Normal  RegisteredNode           6m28s                  node-controller  Node ha-136200-m03 event: Registered Node ha-136200-m03 in Controller
	  Normal  RegisteredNode           6m10s                  node-controller  Node ha-136200-m03 event: Registered Node ha-136200-m03 in Controller
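All three Conditions tables above render node.status.conditions; a healthy node shows the three pressure conditions False and Ready True. A sketch printing the same rows (hypothetical, same client-go setup as the earlier sketches):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			// Expect MemoryPressure/DiskPressure/PIDPressure False, Ready True.
			fmt.Printf("%-15s %-16s %-5s %s\n", n.Name, c.Type, c.Status, c.Reason)
		}
	}
}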
	
	
	==> dmesg <==
	[  +7.445343] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[May 1 02:49] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.218573] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[ +31.318095] systemd-fstab-generator[950]: Ignoring "noauto" option for root device
	[  +0.121878] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.646066] systemd-fstab-generator[989]: Ignoring "noauto" option for root device
	[  +0.241331] systemd-fstab-generator[1001]: Ignoring "noauto" option for root device
	[  +0.276456] systemd-fstab-generator[1015]: Ignoring "noauto" option for root device
	[  +2.872310] systemd-fstab-generator[1184]: Ignoring "noauto" option for root device
	[  +0.245693] systemd-fstab-generator[1196]: Ignoring "noauto" option for root device
	[  +0.234055] systemd-fstab-generator[1209]: Ignoring "noauto" option for root device
	[  +0.318386] systemd-fstab-generator[1224]: Ignoring "noauto" option for root device
	[May 1 02:50] systemd-fstab-generator[1321]: Ignoring "noauto" option for root device
	[  +0.117675] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.894847] systemd-fstab-generator[1526]: Ignoring "noauto" option for root device
	[  +6.744854] systemd-fstab-generator[1728]: Ignoring "noauto" option for root device
	[  +0.118239] kauditd_printk_skb: 73 callbacks suppressed
	[  +6.246999] kauditd_printk_skb: 67 callbacks suppressed
	[  +4.464074] systemd-fstab-generator[2223]: Ignoring "noauto" option for root device
	[ +14.473066] kauditd_printk_skb: 17 callbacks suppressed
	[  +7.151247] kauditd_printk_skb: 29 callbacks suppressed
	[May 1 02:54] kauditd_printk_skb: 26 callbacks suppressed
	[May 1 03:02] hrtimer: interrupt took 2691714 ns
	
	
	==> etcd [8b0d01885db5] <==
	{"level":"info","ts":"2024-05-01T02:58:21.563903Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d5cb0dbd3e937195 switched to configuration voters=(5151751861439785487 15405422056800743829 16720541665161568577)"}
	{"level":"info","ts":"2024-05-01T02:58:21.564037Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"d92207d17d517cdc","local-member-id":"d5cb0dbd3e937195"}
	{"level":"info","ts":"2024-05-01T02:58:21.564065Z","caller":"etcdserver/server.go:1946","msg":"applied a configuration change through raft","local-member-id":"d5cb0dbd3e937195","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"477eb305d8136a0f"}
	{"level":"warn","ts":"2024-05-01T02:58:27.32276Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"e80b4c0e2412e141","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"53.82673ms"}
	{"level":"warn","ts":"2024-05-01T02:58:27.322905Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"477eb305d8136a0f","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"53.975031ms"}
	{"level":"info","ts":"2024-05-01T02:58:27.32416Z","caller":"traceutil/trace.go:171","msg":"trace[1054755025] linearizableReadLoop","detail":"{readStateIndex:1749; appliedIndex:1750; }","duration":"179.427394ms","start":"2024-05-01T02:58:27.144718Z","end":"2024-05-01T02:58:27.324146Z","steps":["trace[1054755025] 'read index received'  (duration: 179.423494ms)","trace[1054755025] 'applied index is now lower than readState.Index'  (duration: 2.9µs)"],"step_count":2}
	{"level":"warn","ts":"2024-05-01T02:58:27.324463Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"179.798696ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-136200-m03\" ","response":"range_response_count:1 size:4442"}
	{"level":"info","ts":"2024-05-01T02:58:27.325782Z","caller":"traceutil/trace.go:171","msg":"trace[1458868258] range","detail":"{range_begin:/registry/minions/ha-136200-m03; range_end:; response_count:1; response_revision:1575; }","duration":"181.205807ms","start":"2024-05-01T02:58:27.144565Z","end":"2024-05-01T02:58:27.325771Z","steps":["trace[1458868258] 'agreement among raft nodes before linearized reading'  (duration: 179.804097ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T02:58:27.325805Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.295259ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-01T02:58:27.327416Z","caller":"traceutil/trace.go:171","msg":"trace[1620131110] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1575; }","duration":"106.638269ms","start":"2024-05-01T02:58:27.220472Z","end":"2024-05-01T02:58:27.32711Z","steps":["trace[1620131110] 'agreement among raft nodes before linearized reading'  (duration: 105.303859ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T02:58:28.207615Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"227.283539ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/172.28.217.218\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-05-01T02:58:28.20815Z","caller":"traceutil/trace.go:171","msg":"trace[526707853] range","detail":"{range_begin:/registry/masterleases/172.28.217.218; range_end:; response_count:1; response_revision:1578; }","duration":"227.827942ms","start":"2024-05-01T02:58:27.980307Z","end":"2024-05-01T02:58:28.208135Z","steps":["trace[526707853] 'range keys from in-memory index tree'  (duration: 226.16143ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-01T02:58:33.155687Z","caller":"traceutil/trace.go:171","msg":"trace[822609576] linearizableReadLoop","detail":"{readStateIndex:1773; appliedIndex:1773; }","duration":"127.106614ms","start":"2024-05-01T02:58:33.028561Z","end":"2024-05-01T02:58:33.155667Z","steps":["trace[822609576] 'read index received'  (duration: 127.096113ms)","trace[822609576] 'applied index is now lower than readState.Index'  (duration: 3.201µs)"],"step_count":2}
	{"level":"info","ts":"2024-05-01T02:58:33.156309Z","caller":"traceutil/trace.go:171","msg":"trace[2144601308] transaction","detail":"{read_only:false; response_revision:1595; number_of_response:1; }","duration":"161.212759ms","start":"2024-05-01T02:58:32.995083Z","end":"2024-05-01T02:58:33.156296Z","steps":["trace[2144601308] 'process raft request'  (duration: 161.011858ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T02:58:33.156653Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.070121ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true ","response":"range_response_count:0 size:8"}
	{"level":"info","ts":"2024-05-01T02:58:33.156711Z","caller":"traceutil/trace.go:171","msg":"trace[302833371] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:0; response_revision:1595; }","duration":"128.172822ms","start":"2024-05-01T02:58:33.02853Z","end":"2024-05-01T02:58:33.156702Z","steps":["trace[302833371] 'agreement among raft nodes before linearized reading'  (duration: 127.786619ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T02:58:33.264542Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.338328ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-ha-136200-m03\" ","response":"range_response_count:1 size:4512"}
	{"level":"info","ts":"2024-05-01T02:58:33.264603Z","caller":"traceutil/trace.go:171","msg":"trace[1479493783] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-ha-136200-m03; range_end:; response_count:1; response_revision:1595; }","duration":"101.45723ms","start":"2024-05-01T02:58:33.163133Z","end":"2024-05-01T02:58:33.26459Z","steps":["trace[1479493783] 'agreement among raft nodes before linearized reading'  (duration: 89.079641ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-01T03:00:22.770623Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1078}
	{"level":"info","ts":"2024-05-01T03:00:22.882389Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1078,"took":"110.812232ms","hash":3849218282,"current-db-size-bytes":3649536,"current-db-size":"3.6 MB","current-db-size-in-use-bytes":2129920,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-05-01T03:00:22.882504Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3849218282,"revision":1078,"compact-revision":-1}
	{"level":"info","ts":"2024-05-01T03:01:04.916293Z","caller":"traceutil/trace.go:171","msg":"trace[1983744639] transaction","detail":"{read_only:false; response_revision:2081; number_of_response:1; }","duration":"115.484567ms","start":"2024-05-01T03:01:04.80079Z","end":"2024-05-01T03:01:04.916275Z","steps":["trace[1983744639] 'process raft request'  (duration: 115.357067ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-01T03:02:03.35618Z","caller":"traceutil/trace.go:171","msg":"trace[1139546375] linearizableReadLoop","detail":"{readStateIndex:2579; appliedIndex:2579; }","duration":"135.951986ms","start":"2024-05-01T03:02:03.220209Z","end":"2024-05-01T03:02:03.356161Z","steps":["trace[1139546375] 'read index received'  (duration: 135.946186ms)","trace[1139546375] 'applied index is now lower than readState.Index'  (duration: 4.2µs)"],"step_count":2}
	{"level":"warn","ts":"2024-05-01T03:02:03.356787Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.278387ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-01T03:02:03.356854Z","caller":"traceutil/trace.go:171","msg":"trace[254823889] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2219; }","duration":"136.661889ms","start":"2024-05-01T03:02:03.220181Z","end":"2024-05-01T03:02:03.356843Z","steps":["trace[254823889] 'agreement among raft nodes before linearized reading'  (duration: 136.253587ms)"],"step_count":1}
	
	
	==> kernel <==
	 03:04:47 up 16 min,  0 users,  load average: 0.39, 0.35, 0.28
	Linux ha-136200 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [c09511b7df64] <==
	I0501 03:04:02.616348       1 main.go:250] Node ha-136200-m03 has CIDR [10.244.2.0/24] 
	I0501 03:04:12.639288       1 main.go:223] Handling node with IPs: map[172.28.217.218:{}]
	I0501 03:04:12.639369       1 main.go:227] handling current node
	I0501 03:04:12.639383       1 main.go:223] Handling node with IPs: map[172.28.213.142:{}]
	I0501 03:04:12.639391       1 main.go:250] Node ha-136200-m02 has CIDR [10.244.1.0/24] 
	I0501 03:04:12.639548       1 main.go:223] Handling node with IPs: map[172.28.216.62:{}]
	I0501 03:04:12.639645       1 main.go:250] Node ha-136200-m03 has CIDR [10.244.2.0/24] 
	I0501 03:04:22.648581       1 main.go:223] Handling node with IPs: map[172.28.217.218:{}]
	I0501 03:04:22.649053       1 main.go:227] handling current node
	I0501 03:04:22.649148       1 main.go:223] Handling node with IPs: map[172.28.213.142:{}]
	I0501 03:04:22.649348       1 main.go:250] Node ha-136200-m02 has CIDR [10.244.1.0/24] 
	I0501 03:04:22.649748       1 main.go:223] Handling node with IPs: map[172.28.216.62:{}]
	I0501 03:04:22.649765       1 main.go:250] Node ha-136200-m03 has CIDR [10.244.2.0/24] 
	I0501 03:04:32.666909       1 main.go:223] Handling node with IPs: map[172.28.217.218:{}]
	I0501 03:04:32.667151       1 main.go:227] handling current node
	I0501 03:04:32.667187       1 main.go:223] Handling node with IPs: map[172.28.213.142:{}]
	I0501 03:04:32.667197       1 main.go:250] Node ha-136200-m02 has CIDR [10.244.1.0/24] 
	I0501 03:04:32.667625       1 main.go:223] Handling node with IPs: map[172.28.216.62:{}]
	I0501 03:04:32.667665       1 main.go:250] Node ha-136200-m03 has CIDR [10.244.2.0/24] 
	I0501 03:04:42.678748       1 main.go:223] Handling node with IPs: map[172.28.217.218:{}]
	I0501 03:04:42.678849       1 main.go:227] handling current node
	I0501 03:04:42.678868       1 main.go:223] Handling node with IPs: map[172.28.213.142:{}]
	I0501 03:04:42.678877       1 main.go:250] Node ha-136200-m02 has CIDR [10.244.1.0/24] 
	I0501 03:04:42.679515       1 main.go:223] Handling node with IPs: map[172.28.216.62:{}]
	I0501 03:04:42.679602       1 main.go:250] Node ha-136200-m03 has CIDR [10.244.2.0/24] 
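The kindnet loop above re-lists the nodes roughly every ten seconds and prints the pod CIDR it routes for each peer. The same node-to-CIDR mapping can be cross-checked from the host; a sketch using the ha-136200 kubectl context from this run:

	kubectl --context ha-136200 get nodes \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'
	# per the log: ha-136200-m02 -> 10.244.1.0/24, ha-136200-m03 -> 10.244.2.0/24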
	
	
	==> kube-apiserver [8ff4bf057093] <==
	Trace[670363995]: [511.709143ms] [511.709143ms] END
	I0501 02:54:22.977601       1 trace.go:236] Trace[1452834138]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:f62db0d2-4e8e-4640-9a4d-0aa19a03aa34,client:172.28.213.142,api-group:storage.k8s.io,api-version:v1,name:,subresource:,namespace:,protocol:HTTP/2.0,resource:csinodes,scope:resource,url:/apis/storage.k8s.io/v1/csinodes,user-agent:kubelet/v1.30.0 (linux/amd64) kubernetes/7c48c2b,verb:POST (01-May-2024 02:54:22.472) (total time: 504ms):
	Trace[1452834138]: ["Create etcd3" audit-id:f62db0d2-4e8e-4640-9a4d-0aa19a03aa34,key:/csinodes/ha-136200-m02,type:*storage.CSINode,resource:csinodes.storage.k8s.io 504ms (02:54:22.473)
	Trace[1452834138]:  ---"Txn call succeeded" 503ms (02:54:22.977)]
	Trace[1452834138]: [504.731076ms] [504.731076ms] END
	E0501 02:58:15.730056       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0501 02:58:15.730169       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0501 02:58:15.730071       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 11.2µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0501 02:58:15.731583       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0501 02:58:15.732500       1 timeout.go:142] post-timeout activity - time-elapsed: 2.647619ms, PATCH "/api/v1/namespaces/default/events/ha-136200-m03.17cb3e09c56bb983" result: <nil>
	E0501 02:59:25.456065       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61414: use of closed network connection
	E0501 02:59:26.016855       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61416: use of closed network connection
	E0501 02:59:26.743048       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61418: use of closed network connection
	E0501 02:59:27.423392       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61421: use of closed network connection
	E0501 02:59:28.036056       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61423: use of closed network connection
	E0501 02:59:28.618704       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61425: use of closed network connection
	E0501 02:59:29.166283       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61427: use of closed network connection
	E0501 02:59:29.771114       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61429: use of closed network connection
	E0501 02:59:30.328866       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61431: use of closed network connection
	E0501 02:59:31.360058       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61434: use of closed network connection
	E0501 02:59:41.926438       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61436: use of closed network connection
	E0501 02:59:42.497809       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61439: use of closed network connection
	E0501 02:59:53.089743       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61441: use of closed network connection
	E0501 02:59:53.660135       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61443: use of closed network connection
	E0501 03:00:04.225188       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61445: use of closed network connection
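The repeated "use of closed network connection" errors are read failures on connections that the client at 172.28.208.1 (the Hyper-V host side of the virtual switch in this run) had already closed; the apiserver itself keeps serving. A quick way to confirm it is still healthy, as a sketch:

	kubectl --context ha-136200 get --raw /healthz   # prints "ok" while the apiserver is fine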
	
	
	==> kube-controller-manager [8fa3aa565b36] <==
	I0501 02:50:56.182254       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="74.9µs"
	I0501 02:50:56.871742       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0501 02:50:58.734842       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="91.702µs"
	I0501 02:50:58.815553       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="27.110569ms"
	I0501 02:50:58.817069       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="234.005µs"
	I0501 02:50:58.859853       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="29.315916ms"
	I0501 02:50:58.862248       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="191.304µs"
	I0501 02:54:21.439127       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-136200-m02\" does not exist"
	I0501 02:54:21.501101       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-136200-m02" podCIDRs=["10.244.1.0/24"]
	I0501 02:54:21.914883       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-136200-m02"
	I0501 02:58:14.901209       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-136200-m03\" does not exist"
	I0501 02:58:14.933592       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-136200-m03" podCIDRs=["10.244.2.0/24"]
	I0501 02:58:16.990389       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-136200-m03"
	I0501 02:59:18.914466       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="150.158562ms"
	I0501 02:59:19.095324       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="180.785078ms"
	I0501 02:59:19.461767       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="365.331283ms"
	I0501 02:59:19.490263       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.541695ms"
	I0501 02:59:19.490899       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="65.9µs"
	I0501 02:59:21.446166       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.9µs"
	I0501 02:59:21.996495       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.097772ms"
	I0501 02:59:21.997082       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="185.301µs"
	I0501 02:59:22.122170       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="78.415164ms"
	I0501 02:59:22.122332       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.3µs"
	I0501 02:59:22.485058       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="83.861489ms"
	I0501 02:59:22.485150       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.8µs"
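The "Failed to update statusUpdateNeeded ... does not exist" lines fire when the attach-detach controller hears about ha-136200-m02/-m03 a moment before their Node objects finish registering; they stop once each node turns Ready. One way to wait that out explicitly, as a sketch:

	kubectl --context ha-136200 wait --for=condition=Ready node/ha-136200-m03 --timeout=120s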
	
	
	==> kube-proxy [562cd55ab970] <==
	I0501 02:50:44.069527       1 server_linux.go:69] "Using iptables proxy"
	I0501 02:50:44.111745       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.28.217.218"]
	I0501 02:50:44.171562       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0501 02:50:44.171703       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0501 02:50:44.171730       1 server_linux.go:165] "Using iptables Proxier"
	I0501 02:50:44.178320       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0501 02:50:44.180232       1 server.go:872] "Version info" version="v1.30.0"
	I0501 02:50:44.180271       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 02:50:44.184544       1 config.go:192] "Starting service config controller"
	I0501 02:50:44.185913       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0501 02:50:44.186319       1 config.go:101] "Starting endpoint slice config controller"
	I0501 02:50:44.186555       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0501 02:50:44.189915       1 config.go:319] "Starting node config controller"
	I0501 02:50:44.190110       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0501 02:50:44.287624       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0501 02:50:44.287761       1 shared_informer.go:320] Caches are synced for service config
	I0501 02:50:44.290292       1 shared_informer.go:320] Caches are synced for node config
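kube-proxy above settles on single-stack IPv4 iptables mode because the guest offers no IPv6 iptables support. The NAT chains it programs can be inspected from inside the VM; a sketch:

	# inside the VM (minikube ssh -p ha-136200)
	sudo iptables -t nat -L KUBE-SERVICES -n | head   # per-service DNAT entry points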
	
	
	==> kube-scheduler [b6454ceb34ca] <==
	W0501 02:50:26.797411       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0501 02:50:26.797624       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0501 02:50:26.830216       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0501 02:50:26.830267       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0501 02:50:26.925545       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0501 02:50:26.925605       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0501 02:50:26.948130       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0501 02:50:26.948245       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0501 02:50:27.027771       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0501 02:50:27.028119       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0501 02:50:27.045542       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0501 02:50:27.045577       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0501 02:50:27.049002       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0501 02:50:27.049031       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0501 02:50:30.148462       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0501 02:59:18.858485       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-pc6wt\": pod busybox-fc5497c4f-pc6wt is already assigned to node \"ha-136200-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-pc6wt" node="ha-136200-m03"
	E0501 02:59:18.859227       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-pc6wt\": pod busybox-fc5497c4f-pc6wt is already assigned to node \"ha-136200-m02\"" pod="default/busybox-fc5497c4f-pc6wt"
	E0501 02:59:18.932248       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-6mlkh\": pod busybox-fc5497c4f-6mlkh is already assigned to node \"ha-136200\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-6mlkh" node="ha-136200"
	E0501 02:59:18.932355       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 10f52d20-5605-40b5-8875-ceb0cb5c2e53(default/busybox-fc5497c4f-6mlkh) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-6mlkh"
	E0501 02:59:18.932383       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-6mlkh\": pod busybox-fc5497c4f-6mlkh is already assigned to node \"ha-136200\"" pod="default/busybox-fc5497c4f-6mlkh"
	I0501 02:59:18.932412       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-6mlkh" node="ha-136200"
	E0501 02:59:18.934021       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-2gr4g\": pod busybox-fc5497c4f-2gr4g is already assigned to node \"ha-136200-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-2gr4g" node="ha-136200-m03"
	E0501 02:59:18.934194       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod b6febdff-c378-4d33-94ae-8b321071f921(default/busybox-fc5497c4f-2gr4g) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-2gr4g"
	E0501 02:59:18.934386       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-2gr4g\": pod busybox-fc5497c4f-2gr4g is already assigned to node \"ha-136200-m03\"" pod="default/busybox-fc5497c4f-2gr4g"
	I0501 02:59:18.937753       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-2gr4g" node="ha-136200-m03"
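The DefaultBinder "already assigned" errors happen when a bind retry for one of the busybox ReplicaSet pods races a bind that already succeeded; the scheduler notices this ("Pod has been assigned to node. Abort adding it back to queue.") and moves on, so each pod still lands exactly once. A sketch to confirm where a contested pod ended up:

	kubectl --context ha-136200 get pod busybox-fc5497c4f-2gr4g -o jsonpath='{.spec.nodeName}'
	# prints ha-136200-m03, matching the binding the retry collided with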
	
	
	==> kubelet <==
	May 01 03:00:29 ha-136200 kubelet[2230]: E0501 03:00:29.313413    2230 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 03:00:29 ha-136200 kubelet[2230]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 03:00:29 ha-136200 kubelet[2230]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 03:00:29 ha-136200 kubelet[2230]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 03:00:29 ha-136200 kubelet[2230]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 03:01:29 ha-136200 kubelet[2230]: E0501 03:01:29.309664    2230 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 03:01:29 ha-136200 kubelet[2230]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 03:01:29 ha-136200 kubelet[2230]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 03:01:29 ha-136200 kubelet[2230]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 03:01:29 ha-136200 kubelet[2230]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 03:02:29 ha-136200 kubelet[2230]: E0501 03:02:29.306486    2230 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 03:02:29 ha-136200 kubelet[2230]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 03:02:29 ha-136200 kubelet[2230]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 03:02:29 ha-136200 kubelet[2230]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 03:02:29 ha-136200 kubelet[2230]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 03:03:29 ha-136200 kubelet[2230]: E0501 03:03:29.307664    2230 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 03:03:29 ha-136200 kubelet[2230]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 03:03:29 ha-136200 kubelet[2230]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 03:03:29 ha-136200 kubelet[2230]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 03:03:29 ha-136200 kubelet[2230]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 03:04:29 ha-136200 kubelet[2230]: E0501 03:04:29.306136    2230 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 03:04:29 ha-136200 kubelet[2230]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 03:04:29 ha-136200 kubelet[2230]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 03:04:29 ha-136200 kubelet[2230]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 03:04:29 ha-136200 kubelet[2230]:  > table="nat" chain="KUBE-KUBELET-CANARY"
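The hourly kubelet canary failures above come from the guest kernel lacking the IPv6 NAT table ("Table does not exist (do you need to insmod?)"); an IPv4-only cluster is unaffected. A sketch to verify the missing module from inside the VM:

	lsmod | grep ip6table_nat || sudo modprobe ip6table_nat
	sudo ip6tables -t nat -L -n   # reproduces the same error while the module is absent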
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0501 03:04:39.362995    6492 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
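The stderr warning about Docker CLI context "default" recurs in every command of this run: the CLI config on the Jenkins host names a context whose metadata file is gone, and minikube reports the failed resolution before falling back. One common remedy on the host, as a hedged sketch (paths per Docker's default Windows layout):

	docker context use default
	# or delete the "currentContext" key from %USERPROFILE%\.docker\config.json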
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-136200 -n ha-136200
E0501 03:05:01.200526   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\client.crt: The system cannot find the path specified.
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-136200 -n ha-136200: (12.473475s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-136200 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/AddWorkerNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (262.41s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (84.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-136200 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-136200 status --output json -v=7 --alsologtostderr: exit status 2 (49.1848752s)

                                                
                                                
-- stdout --
	[{"Name":"ha-136200","Host":"Running","Kubelet":"Running","APIServer":"Running","Kubeconfig":"Configured","Worker":false},{"Name":"ha-136200-m02","Host":"Running","Kubelet":"Running","APIServer":"Running","Kubeconfig":"Configured","Worker":false},{"Name":"ha-136200-m03","Host":"Running","Kubelet":"Running","APIServer":"Running","Kubeconfig":"Configured","Worker":false},{"Name":"ha-136200-m04","Host":"Running","Kubelet":"Stopped","APIServer":"Irrelevant","Kubeconfig":"Irrelevant","Worker":true}]

                                                
                                                
-- /stdout --
** stderr ** 
	W0501 03:05:31.931719    8688 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0501 03:05:32.024953    8688 out.go:291] Setting OutFile to fd 548 ...
	I0501 03:05:32.025470    8688 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:05:32.025470    8688 out.go:304] Setting ErrFile to fd 924...
	I0501 03:05:32.025470    8688 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:05:32.042212    8688 out.go:298] Setting JSON to true
	I0501 03:05:32.042302    8688 mustload.go:65] Loading cluster: ha-136200
	I0501 03:05:32.042302    8688 notify.go:220] Checking for updates...
	I0501 03:05:32.043078    8688 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 03:05:32.043078    8688 status.go:255] checking status of ha-136200 ...
	I0501 03:05:32.043902    8688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 03:05:34.235050    8688 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:05:34.235192    8688 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:05:34.235192    8688 status.go:330] ha-136200 host status = "Running" (err=<nil>)
	I0501 03:05:34.235192    8688 host.go:66] Checking if "ha-136200" exists ...
	I0501 03:05:34.235985    8688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 03:05:36.425895    8688 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:05:36.425895    8688 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:05:36.426974    8688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 03:05:39.084047    8688 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 03:05:39.084121    8688 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:05:39.084187    8688 host.go:66] Checking if "ha-136200" exists ...
	I0501 03:05:39.098432    8688 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 03:05:39.098432    8688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 03:05:41.260919    8688 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:05:41.260919    8688 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:05:41.261039    8688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 03:05:43.899110    8688 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 03:05:43.899920    8688 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:05:43.900120    8688 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 03:05:44.007705    8688 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.9092365s)
	I0501 03:05:44.022510    8688 ssh_runner.go:195] Run: systemctl --version
	I0501 03:05:44.050316    8688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:05:44.083412    8688 kubeconfig.go:125] found "ha-136200" server: "https://172.28.223.254:8443"
	I0501 03:05:44.083492    8688 api_server.go:166] Checking apiserver status ...
	I0501 03:05:44.096420    8688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:05:44.142084    8688 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2105/cgroup
	W0501 03:05:44.164061    8688 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2105/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0501 03:05:44.177887    8688 ssh_runner.go:195] Run: ls
	I0501 03:05:44.186057    8688 api_server.go:253] Checking apiserver healthz at https://172.28.223.254:8443/healthz ...
	I0501 03:05:44.197907    8688 api_server.go:279] https://172.28.223.254:8443/healthz returned 200:
	ok
	I0501 03:05:44.197907    8688 status.go:422] ha-136200 apiserver status = Running (err=<nil>)
	I0501 03:05:44.198225    8688 status.go:257] ha-136200 status: &{Name:ha-136200 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0501 03:05:44.198293    8688 status.go:255] checking status of ha-136200-m02 ...
	I0501 03:05:44.198895    8688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 03:05:46.428561    8688 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:05:46.428561    8688 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:05:46.428674    8688 status.go:330] ha-136200-m02 host status = "Running" (err=<nil>)
	I0501 03:05:46.428674    8688 host.go:66] Checking if "ha-136200-m02" exists ...
	I0501 03:05:46.429786    8688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 03:05:48.688983    8688 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:05:48.688983    8688 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:05:48.689577    8688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 03:05:51.345732    8688 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 03:05:51.345732    8688 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:05:51.345732    8688 host.go:66] Checking if "ha-136200-m02" exists ...
	I0501 03:05:51.365674    8688 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 03:05:51.366722    8688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 03:05:53.610172    8688 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:05:53.610172    8688 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:05:53.610479    8688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 03:05:56.214460    8688 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 03:05:56.214731    8688 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:05:56.214731    8688 sshutil.go:53] new ssh client: &{IP:172.28.213.142 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\id_rsa Username:docker}
	I0501 03:05:56.320016    8688 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.953309s)
	I0501 03:05:56.334023    8688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:05:56.363639    8688 kubeconfig.go:125] found "ha-136200" server: "https://172.28.223.254:8443"
	I0501 03:05:56.363639    8688 api_server.go:166] Checking apiserver status ...
	I0501 03:05:56.376329    8688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:05:56.422843    8688 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2203/cgroup
	W0501 03:05:56.441291    8688 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2203/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0501 03:05:56.455856    8688 ssh_runner.go:195] Run: ls
	I0501 03:05:56.464596    8688 api_server.go:253] Checking apiserver healthz at https://172.28.223.254:8443/healthz ...
	I0501 03:05:56.472007    8688 api_server.go:279] https://172.28.223.254:8443/healthz returned 200:
	ok
	I0501 03:05:56.472306    8688 status.go:422] ha-136200-m02 apiserver status = Running (err=<nil>)
	I0501 03:05:56.472306    8688 status.go:257] ha-136200-m02 status: &{Name:ha-136200-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0501 03:05:56.472306    8688 status.go:255] checking status of ha-136200-m03 ...
	I0501 03:05:56.472914    8688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 03:05:58.620368    8688 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:05:58.621181    8688 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:05:58.621181    8688 status.go:330] ha-136200-m03 host status = "Running" (err=<nil>)
	I0501 03:05:58.621181    8688 host.go:66] Checking if "ha-136200-m03" exists ...
	I0501 03:05:58.622003    8688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 03:06:00.827054    8688 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:06:00.827054    8688 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:06:00.827054    8688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 03:06:03.453997    8688 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 03:06:03.453997    8688 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:06:03.454508    8688 host.go:66] Checking if "ha-136200-m03" exists ...
	I0501 03:06:03.470236    8688 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 03:06:03.470236    8688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 03:06:05.666426    8688 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:06:05.666484    8688 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:06:05.666484    8688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 03:06:08.361613    8688 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 03:06:08.362667    8688 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:06:08.363163    8688 sshutil.go:53] new ssh client: &{IP:172.28.216.62 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\id_rsa Username:docker}
	I0501 03:06:08.469507    8688 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.999234s)
	I0501 03:06:08.484782    8688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:06:08.552226    8688 kubeconfig.go:125] found "ha-136200" server: "https://172.28.223.254:8443"
	I0501 03:06:08.552327    8688 api_server.go:166] Checking apiserver status ...
	I0501 03:06:08.570695    8688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:06:08.621213    8688 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2199/cgroup
	W0501 03:06:08.644775    8688 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2199/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0501 03:06:08.664846    8688 ssh_runner.go:195] Run: ls
	I0501 03:06:08.677026    8688 api_server.go:253] Checking apiserver healthz at https://172.28.223.254:8443/healthz ...
	I0501 03:06:08.691623    8688 api_server.go:279] https://172.28.223.254:8443/healthz returned 200:
	ok
	I0501 03:06:08.691623    8688 status.go:422] ha-136200-m03 apiserver status = Running (err=<nil>)
	I0501 03:06:08.691623    8688 status.go:257] ha-136200-m03 status: &{Name:ha-136200-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0501 03:06:08.691623    8688 status.go:255] checking status of ha-136200-m04 ...
	I0501 03:06:08.692568    8688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m04 ).state
	I0501 03:06:10.949712    8688 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:06:10.950615    8688 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:06:10.950615    8688 status.go:330] ha-136200-m04 host status = "Running" (err=<nil>)
	I0501 03:06:10.950615    8688 host.go:66] Checking if "ha-136200-m04" exists ...
	I0501 03:06:10.951252    8688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m04 ).state
	I0501 03:06:13.222799    8688 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:06:13.223132    8688 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:06:13.223271    8688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m04 ).networkadapters[0]).ipaddresses[0]
	I0501 03:06:15.872643    8688 main.go:141] libmachine: [stdout =====>] : 172.28.217.174
	
	I0501 03:06:15.874280    8688 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:06:15.874280    8688 host.go:66] Checking if "ha-136200-m04" exists ...
	I0501 03:06:15.889000    8688 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 03:06:15.889000    8688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m04 ).state
	I0501 03:06:18.077504    8688 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:06:18.077504    8688 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:06:18.077504    8688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m04 ).networkadapters[0]).ipaddresses[0]
	I0501 03:06:20.806460    8688 main.go:141] libmachine: [stdout =====>] : 172.28.217.174
	
	I0501 03:06:20.806460    8688 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:06:20.807683    8688 sshutil.go:53] new ssh client: &{IP:172.28.217.174 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m04\id_rsa Username:docker}
	I0501 03:06:20.915397    8688 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.026312s)
	I0501 03:06:20.929564    8688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:06:20.958033    8688 status.go:257] ha-136200-m04 status: &{Name:ha-136200-m04 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
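The "unable to find freezer cgroup" warnings in the stderr above are expected on a cgroup v2 guest: minikube greps /proc/<pid>/cgroup for a freezer entry that only exists under cgroup v1, then falls back to the /healthz probe, which returned 200 for every control plane. A sketch of the same check by hand, inside a control-plane VM:

	PID=$(pgrep -xn kube-apiserver)
	grep freezer /proc/$PID/cgroup || echo "no freezer controller (cgroup v2)"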
ha_test.go:328: failed to run minikube status. args "out/minikube-windows-amd64.exe -p ha-136200 status --output json -v=7 --alsologtostderr" : exit status 2
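The stdout JSON above explains the exit status 2: ha-136200-m04 reports Kubelet "Stopped" while all three control planes are fully up. A sketch to inspect the worker's kubelet directly (assuming minikube's -n/--node flag for multi-node profiles):

	minikube ssh -p ha-136200 -n ha-136200-m04 -- sudo systemctl status kubelet --no-pager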
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-136200 -n ha-136200
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-136200 -n ha-136200: (12.4263575s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/CopyFile FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-136200 logs -n 25
E0501 03:06:34.973493   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-136200 logs -n 25: (8.8764902s)
helpers_test.go:252: TestMultiControlPlane/serial/CopyFile logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| image   | functional-869300 image build -t     | functional-869300 | minikube6\jenkins | v1.33.0 | 01 May 24 02:42 UTC | 01 May 24 02:42 UTC |
	|         | localhost/my-image:functional-869300 |                   |                   |         |                     |                     |
	|         | testdata\build --alsologtostderr     |                   |                   |         |                     |                     |
	| image   | functional-869300 image ls           | functional-869300 | minikube6\jenkins | v1.33.0 | 01 May 24 02:42 UTC | 01 May 24 02:42 UTC |
	| delete  | -p functional-869300                 | functional-869300 | minikube6\jenkins | v1.33.0 | 01 May 24 02:46 UTC | 01 May 24 02:47 UTC |
	| start   | -p ha-136200 --wait=true             | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:47 UTC | 01 May 24 02:58 UTC |
	|         | --memory=2200 --ha                   |                   |                   |         |                     |                     |
	|         | -v=7 --alsologtostderr               |                   |                   |         |                     |                     |
	|         | --driver=hyperv                      |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- apply -f             | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | ./testdata/ha/ha-pod-dns-test.yaml   |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- rollout status       | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | deployment/busybox                   |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- get pods -o          | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- get pods -o          | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-2gr4g --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-6mlkh --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-pc6wt --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-2gr4g --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-6mlkh --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-pc6wt --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-2gr4g -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-6mlkh -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-pc6wt -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- get pods -o          | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-2gr4g              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC |                     |
	|         | busybox-fc5497c4f-2gr4g -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.28.208.1            |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-6mlkh              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC |                     |
	|         | busybox-fc5497c4f-6mlkh -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.28.208.1            |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-pc6wt              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC |                     |
	|         | busybox-fc5497c4f-pc6wt -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.28.208.1            |                   |                   |         |                     |                     |
	| node    | add -p ha-136200 -v=7                | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 03:00 UTC |                     |
	|         | --alsologtostderr                    |                   |                   |         |                     |                     |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/01 02:47:19
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0501 02:47:19.308853    4712 out.go:291] Setting OutFile to fd 968 ...
	I0501 02:47:19.308853    4712 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:47:19.308853    4712 out.go:304] Setting ErrFile to fd 940...
	I0501 02:47:19.308853    4712 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:47:19.335053    4712 out.go:298] Setting JSON to false
	I0501 02:47:19.338050    4712 start.go:129] hostinfo: {"hostname":"minikube6","uptime":104693,"bootTime":1714426945,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0501 02:47:19.338050    4712 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0501 02:47:19.343676    4712 out.go:177] * [ha-136200] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0501 02:47:19.347056    4712 notify.go:220] Checking for updates...
	I0501 02:47:19.349570    4712 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0501 02:47:19.352627    4712 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0501 02:47:19.356010    4712 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0501 02:47:19.359527    4712 out.go:177]   - MINIKUBE_LOCATION=18779
	I0501 02:47:19.364982    4712 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0501 02:47:19.368342    4712 driver.go:392] Setting default libvirt URI to qemu:///system
	I0501 02:47:24.771909    4712 out.go:177] * Using the hyperv driver based on user configuration
	I0501 02:47:24.777503    4712 start.go:297] selected driver: hyperv
	I0501 02:47:24.777503    4712 start.go:901] validating driver "hyperv" against <nil>
	I0501 02:47:24.777503    4712 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0501 02:47:24.830749    4712 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0501 02:47:24.832155    4712 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 02:47:24.832679    4712 cni.go:84] Creating CNI manager for ""
	I0501 02:47:24.832679    4712 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0501 02:47:24.832679    4712 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0501 02:47:24.832944    4712 start.go:340] cluster config:
	{Name:ha-136200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:47:24.832944    4712 iso.go:125] acquiring lock: {Name:mkc5178610d1c169635b8b232f2713c359020679 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 02:47:24.837439    4712 out.go:177] * Starting "ha-136200" primary control-plane node in "ha-136200" cluster
	I0501 02:47:24.839631    4712 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0501 02:47:24.839631    4712 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0501 02:47:24.839631    4712 cache.go:56] Caching tarball of preloaded images
	I0501 02:47:24.840411    4712 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0501 02:47:24.840411    4712 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0501 02:47:24.841147    4712 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json ...
	I0501 02:47:24.841147    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json: {Name:mk622c10e63d8ff69d285ce96c3e57bfbed6a54d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:47:24.842583    4712 start.go:360] acquireMachinesLock for ha-136200: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 02:47:24.842583    4712 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-136200"
	I0501 02:47:24.843334    4712 start.go:93] Provisioning new machine with config: &{Name:ha-136200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0501 02:47:24.843334    4712 start.go:125] createHost starting for "" (driver="hyperv")
	I0501 02:47:24.845982    4712 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0501 02:47:24.845982    4712 start.go:159] libmachine.API.Create for "ha-136200" (driver="hyperv")
	I0501 02:47:24.845982    4712 client.go:168] LocalClient.Create starting
	I0501 02:47:24.847217    4712 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0501 02:47:24.847395    4712 main.go:141] libmachine: Decoding PEM data...
	I0501 02:47:24.847395    4712 main.go:141] libmachine: Parsing certificate...
	I0501 02:47:24.847705    4712 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0501 02:47:24.848012    4712 main.go:141] libmachine: Decoding PEM data...
	I0501 02:47:24.848048    4712 main.go:141] libmachine: Parsing certificate...
	I0501 02:47:24.848190    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0501 02:47:27.058462    4712 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0501 02:47:27.058678    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:27.058786    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0501 02:47:28.892262    4712 main.go:141] libmachine: [stdout =====>] : False
	
	I0501 02:47:28.892262    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:28.892262    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0501 02:47:30.440921    4712 main.go:141] libmachine: [stdout =====>] : True
	
	I0501 02:47:30.440921    4712 main.go:141] libmachine: [stderr =====>] : 
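The pair of IsInRole probes above is the driver's privilege check: the first asks whether the current user belongs to the local Hyper-V Administrators group (well-known SID S-1-5-32-578; False in this run), the second whether it holds the built-in Administrator role (True). Provisioning proceeds here on the strength of the second alone, so either membership appears to suffice. A minimal way to replay the same checks by hand, using the exact expressions from the log:

    # Privilege probes copied from the commands above; run in any PowerShell session
    $principal = [Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()
    $principal.IsInRole([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578"))  # Hyper-V Administrators
    $principal.IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")              # built-in Administrators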
	I0501 02:47:30.441172    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0501 02:47:34.074968    4712 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0501 02:47:34.075096    4712 main.go:141] libmachine: [stderr =====>] : 
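In the JSON above, SwitchType 1 is Hyper-V's Internal switch type (0 = Private, 2 = External), so the "Default Switch" was matched by its fixed GUID rather than by being external. The enumeration is a single pipeline and can be replayed verbatim:

    # Switch discovery as executed above: external switches plus the well-known
    # "Default Switch" GUID c08cb7b8-9b3c-408e-8e30-5e16a3aeb444
    Hyper-V\Get-VMSwitch |
        Select-Object Id, Name, SwitchType |
        Where-Object { ($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444') } |
        Sort-Object -Property SwitchType |
        ConvertTo-Json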
	I0501 02:47:34.077782    4712 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso...
	I0501 02:47:34.612276    4712 main.go:141] libmachine: Creating SSH key...
	I0501 02:47:34.775454    4712 main.go:141] libmachine: Creating VM...
	I0501 02:47:34.775454    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0501 02:47:37.663991    4712 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0501 02:47:37.664390    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:37.664515    4712 main.go:141] libmachine: Using switch "Default Switch"
	I0501 02:47:37.664599    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0501 02:47:39.498071    4712 main.go:141] libmachine: [stdout =====>] : True
	
	I0501 02:47:39.498234    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:39.498234    4712 main.go:141] libmachine: Creating VHD
	I0501 02:47:39.498234    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\fixed.vhd' -SizeBytes 10MB -Fixed
	I0501 02:47:43.230384    4712 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 2B9E163F-083E-4714-9C44-9A52BE438E53
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0501 02:47:43.231369    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:43.231468    4712 main.go:141] libmachine: Writing magic tar header
	I0501 02:47:43.231550    4712 main.go:141] libmachine: Writing SSH key tar header
	I0501 02:47:43.241482    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\disk.vhd' -VHDType Dynamic -DeleteSource
	I0501 02:47:46.427724    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:47:46.427724    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:46.427724    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\disk.vhd' -SizeBytes 20000MB
	I0501 02:47:48.971690    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:47:48.971690    4712 main.go:141] libmachine: [stderr =====>] : 
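The VHD commands above are the driver's three-step disk bootstrap: create a tiny 10 MB fixed-format VHD, write a raw tar stream into it from the host (the "magic tar header" and SSH key lines), then convert it to a dynamic VHD and resize it to the requested 20000 MB so the guest can claim the rest on first boot. As standalone PowerShell, with the paths from this run (the tar-writing step happens inside minikube's Go code between the first and second command and is only sketched as a comment):

    $dir = 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200'
    Hyper-V\New-VHD -Path "$dir\fixed.vhd" -SizeBytes 10MB -Fixed
    # ... minikube writes a tar archive containing the SSH key directly into fixed.vhd here ...
    Hyper-V\Convert-VHD -Path "$dir\fixed.vhd" -DestinationPath "$dir\disk.vhd" -VHDType Dynamic -DeleteSource
    Hyper-V\Resize-VHD -Path "$dir\disk.vhd" -SizeBytes 20000MB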
	I0501 02:47:48.971981    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-136200 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0501 02:47:52.766292    4712 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-136200 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0501 02:47:52.766504    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:52.766592    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-136200 -DynamicMemoryEnabled $false
	I0501 02:47:54.972628    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:47:54.972799    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:54.972799    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-136200 -Count 2
	I0501 02:47:57.167635    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:47:57.168510    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:57.168510    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-136200 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\boot2docker.iso'
	I0501 02:47:59.728585    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:47:59.729288    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:59.729288    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-136200 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\disk.vhd'
	I0501 02:48:02.387014    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:48:02.387925    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:02.387925    4712 main.go:141] libmachine: Starting VM...
	I0501 02:48:02.387925    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-136200
	I0501 02:48:05.442902    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:48:05.442902    4712 main.go:141] libmachine: [stderr =====>] : 
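VM assembly is then six commands: create the VM on the chosen switch, pin its memory (dynamic memory disabled, matching the Memory:2200 in the cluster config) and CPU count, attach the boot ISO and the data disk, and power it on. Collected from the log into one runnable sequence:

    $dir = 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200'
    Hyper-V\New-VM ha-136200 -Path $dir -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
    Hyper-V\Set-VMMemory -VMName ha-136200 -DynamicMemoryEnabled $false
    Hyper-V\Set-VMProcessor ha-136200 -Count 2
    Hyper-V\Set-VMDvdDrive -VMName ha-136200 -Path "$dir\boot2docker.iso"
    Hyper-V\Add-VMHardDiskDrive -VMName ha-136200 -Path "$dir\disk.vhd"
    Hyper-V\Start-VM ha-136200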
	I0501 02:48:05.442902    4712 main.go:141] libmachine: Waiting for host to start...
	I0501 02:48:05.442902    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:07.690543    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:07.691267    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:07.691267    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:10.234874    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:48:10.234874    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:11.244005    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:13.447426    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:13.447426    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:13.447532    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:16.003794    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:48:16.003794    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:17.014251    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:19.230596    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:19.230596    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:19.231015    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:21.786798    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:48:21.786798    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:22.791035    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:24.970362    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:24.970583    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:24.970826    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:27.538082    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:48:27.539108    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:28.540044    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:30.691694    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:30.691694    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:30.692065    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:33.315166    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:48:33.315166    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:33.315400    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:35.453800    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:35.453800    4712 main.go:141] libmachine: [stderr =====>] : 
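The alternating state/IP queries above are the boot wait: Hyper-V reports the VM as Running within seconds of Start-VM, but the first network adapter has no address until the guest obtains a DHCP lease, roughly 28 seconds in this run (02:48:05 to 02:48:33). A hand-rolled equivalent of that loop, with an assumed one-second sleep standing in for the driver's Go-side retry logic:

    # Poll until the first adapter reports an address; the Get-VM expressions are
    # the ones from the log, the loop shape is an assumption
    do {
        Start-Sleep -Seconds 1
        $ip = (( Hyper-V\Get-VM ha-136200 ).NetworkAdapters[0]).IPAddresses[0]
    } while ([string]::IsNullOrEmpty($ip))
    $ip  # 172.28.217.218 in this run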
	I0501 02:48:35.454723    4712 machine.go:94] provisionDockerMachine start ...
	I0501 02:48:35.454940    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:37.590850    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:37.591294    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:37.591378    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:40.152942    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:48:40.153017    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:40.158939    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:48:40.170076    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.217.218 22 <nil> <nil>}
	I0501 02:48:40.170076    4712 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 02:48:40.311850    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0501 02:48:40.311938    4712 buildroot.go:166] provisioning hostname "ha-136200"
	I0501 02:48:40.312011    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:42.387259    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:42.387259    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:42.388241    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:44.941487    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:48:44.942306    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:44.948681    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:48:44.949642    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.217.218 22 <nil> <nil>}
	I0501 02:48:44.949718    4712 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-136200 && echo "ha-136200" | sudo tee /etc/hostname
	I0501 02:48:45.123416    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-136200
	
	I0501 02:48:45.123490    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:47.247911    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:47.247911    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:47.248892    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:49.912733    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:48:49.912733    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:49.920164    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:48:49.920164    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.217.218 22 <nil> <nil>}
	I0501 02:48:49.920749    4712 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-136200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-136200/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-136200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 02:48:50.089597    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: 
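Hostname provisioning is three commands over the freshly created SSH channel: a bare hostname to read the ISO default (minikube), sudo hostname ... | sudo tee /etc/hostname to set and persist the new name, and the /etc/hosts edit above to map 127.0.1.1 to it. The same channel can be exercised directly with the key and address from this run, assuming the stock Windows OpenSSH client:

    ssh -i C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa `
        docker@172.28.217.218 hostname  # prints: ha-136200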
	I0501 02:48:50.089597    4712 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0501 02:48:50.089597    4712 buildroot.go:174] setting up certificates
	I0501 02:48:50.090153    4712 provision.go:84] configureAuth start
	I0501 02:48:50.090240    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:52.251893    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:52.251893    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:52.251893    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:54.810990    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:48:54.810990    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:54.811881    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:56.907196    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:56.907196    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:56.907196    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:59.487351    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:48:59.487402    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:59.487402    4712 provision.go:143] copyHostCerts
	I0501 02:48:59.487402    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0501 02:48:59.487402    4712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0501 02:48:59.487402    4712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0501 02:48:59.488365    4712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0501 02:48:59.489448    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0501 02:48:59.489632    4712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0501 02:48:59.489632    4712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0501 02:48:59.489632    4712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0501 02:48:59.490981    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0501 02:48:59.491187    4712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0501 02:48:59.491187    4712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0501 02:48:59.491187    4712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0501 02:48:59.492726    4712 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-136200 san=[127.0.0.1 172.28.217.218 ha-136200 localhost minikube]
	I0501 02:48:59.577887    4712 provision.go:177] copyRemoteCerts
	I0501 02:48:59.596375    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 02:48:59.597286    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:01.699383    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:01.699383    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:01.699540    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:04.258891    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:04.258891    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:04.259427    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:49:04.371852    4712 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7744315s)
	I0501 02:49:04.371852    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0501 02:49:04.371852    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 02:49:04.422302    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0501 02:49:04.422602    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0501 02:49:04.478176    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0501 02:49:04.478176    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0501 02:49:04.532091    4712 provision.go:87] duration metric: took 14.4416362s to configureAuth
	I0501 02:49:04.532091    4712 buildroot.go:189] setting minikube options for container-runtime
	I0501 02:49:04.532690    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:49:04.532690    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:06.623956    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:06.623956    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:06.624197    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:09.238280    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:09.238979    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:09.245381    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:49:09.246060    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.217.218 22 <nil> <nil>}
	I0501 02:49:09.246060    4712 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0501 02:49:09.397759    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0501 02:49:09.397835    4712 buildroot.go:70] root file system type: tmpfs
	I0501 02:49:09.398290    4712 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0501 02:49:09.398464    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:11.514026    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:11.514026    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:11.514582    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:14.050483    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:14.050483    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:14.057033    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:49:14.057033    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.217.218 22 <nil> <nil>}
	I0501 02:49:14.057589    4712 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0501 02:49:14.242724    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0501 02:49:14.242724    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:16.392645    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:16.392645    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:16.392749    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:18.993701    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:18.994302    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:19.000048    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:49:19.000537    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.217.218 22 <nil> <nil>}
	I0501 02:49:19.000616    4712 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0501 02:49:21.256124    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0501 02:49:21.256675    4712 machine.go:97] duration metric: took 45.8016127s to provisionDockerMachine
	I0501 02:49:21.256675    4712 client.go:171] duration metric: took 1m56.4098314s to LocalClient.Create
	I0501 02:49:21.256737    4712 start.go:167] duration metric: took 1m56.4098939s to libmachine.API.Create "ha-136200"
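The diff -u ... || { mv ...; systemctl ...; } one-liner above is an install-if-changed guard: diff exits non-zero whenever the freshly rendered unit differs from what is on disk (here it cannot even stat the old file, since the VM is new), and only then is the new unit moved into place, the daemon reloaded, and docker enabled and restarted. On a re-provision with an unchanged unit the right-hand side never runs, which keeps provisionDockerMachine idempotent.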
	I0501 02:49:21.256791    4712 start.go:293] postStartSetup for "ha-136200" (driver="hyperv")
	I0501 02:49:21.256828    4712 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 02:49:21.271031    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 02:49:21.271031    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:23.374454    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:23.374634    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:23.374716    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:25.918831    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:25.918831    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:25.919441    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:49:26.030251    4712 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.759185s)
	I0501 02:49:26.044496    4712 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 02:49:26.053026    4712 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 02:49:26.053160    4712 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0501 02:49:26.053600    4712 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0501 02:49:26.054397    4712 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> 142882.pem in /etc/ssl/certs
	I0501 02:49:26.054397    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /etc/ssl/certs/142882.pem
	I0501 02:49:26.070942    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 02:49:26.091568    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /etc/ssl/certs/142882.pem (1708 bytes)
	I0501 02:49:26.143252    4712 start.go:296] duration metric: took 4.8863885s for postStartSetup
	I0501 02:49:26.147080    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:28.257985    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:28.257985    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:28.257985    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:30.792456    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:30.792456    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:30.792456    4712 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json ...
	I0501 02:49:30.796310    4712 start.go:128] duration metric: took 2m5.952044s to createHost
	I0501 02:49:30.796483    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:32.879711    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:32.879711    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:32.880619    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:35.462032    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:35.462032    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:35.468747    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:49:35.469470    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.217.218 22 <nil> <nil>}
	I0501 02:49:35.469470    4712 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0501 02:49:35.611947    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714531775.614259884
	
	I0501 02:49:35.611947    4712 fix.go:216] guest clock: 1714531775.614259884
	I0501 02:49:35.611947    4712 fix.go:229] Guest: 2024-05-01 02:49:35.614259884 +0000 UTC Remote: 2024-05-01 02:49:30.7963907 +0000 UTC m=+131.677772001 (delta=4.817869184s)
	I0501 02:49:35.611947    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:37.726021    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:37.726021    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:37.726021    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:40.253738    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:40.254896    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:40.261655    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:49:40.262498    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.217.218 22 <nil> <nil>}
	I0501 02:49:40.262498    4712 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714531775
	I0501 02:49:40.415406    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed May  1 02:49:35 UTC 2024
	
	I0501 02:49:40.415406    4712 fix.go:236] clock set: Wed May  1 02:49:35 UTC 2024
	 (err=<nil>)
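The delta reported by fix.go is simply guest minus host at the moment of sampling: 1714531775.614259884 − 1714531770.7963907 ≈ 4.8179 s (the host timestamp 02:49:30.7963907 UTC corresponds to epoch 1714531770.7963907). The guest is running almost five seconds fast, evidently beyond the driver's drift tolerance, so the clock is forced into line with sudo date -s @1714531775, presumably to keep time-sensitive steps such as TLS certificate validation well-behaved.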
	I0501 02:49:40.415406    4712 start.go:83] releasing machines lock for "ha-136200", held for 2m15.5712031s
	I0501 02:49:40.416105    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:42.459145    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:42.459226    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:42.459226    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:45.033478    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:45.034063    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:45.038366    4712 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 02:49:45.038515    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:45.050350    4712 ssh_runner.go:195] Run: cat /version.json
	I0501 02:49:45.050350    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:47.229701    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:47.229701    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:47.230427    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:47.254252    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:47.254469    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:47.254558    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:49.922691    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:49.922867    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:49.923261    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:49:49.950446    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:49.950446    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:49.951021    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:49:50.022867    4712 ssh_runner.go:235] Completed: cat /version.json: (4.9724804s)
	I0501 02:49:50.037446    4712 ssh_runner.go:195] Run: systemctl --version
	I0501 02:49:50.123463    4712 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0850592s)
	I0501 02:49:50.137756    4712 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 02:49:50.147834    4712 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 02:49:50.164262    4712 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 02:49:50.197825    4712 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0501 02:49:50.197877    4712 start.go:494] detecting cgroup driver to use...
	I0501 02:49:50.197877    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 02:49:50.246918    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0501 02:49:50.281929    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0501 02:49:50.303725    4712 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0501 02:49:50.317480    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0501 02:49:50.354607    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 02:49:50.392927    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0501 02:49:50.426684    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 02:49:50.464924    4712 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 02:49:50.501540    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0501 02:49:50.541276    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0501 02:49:50.576278    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0501 02:49:50.614209    4712 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 02:49:50.653144    4712 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 02:49:50.688395    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:49:50.921067    4712 ssh_runner.go:195] Run: sudo systemctl restart containerd
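Note that containerd is fully reconfigured above (cgroupfs as the cgroup driver, runc v2, pause image registry.k8s.io/pause:3.9) and restarted even though this profile uses the docker runtime; immediately below it is stopped again once the docker path is chosen. The sed edits keep /etc/containerd/config.toml consistent, apparently so that a later runtime switch would not require re-provisioning.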
	I0501 02:49:50.960389    4712 start.go:494] detecting cgroup driver to use...
	I0501 02:49:50.974435    4712 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0501 02:49:51.020319    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 02:49:51.063731    4712 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 02:49:51.113242    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 02:49:51.154151    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 02:49:51.196182    4712 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0501 02:49:51.267621    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 02:49:51.297018    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 02:49:51.359019    4712 ssh_runner.go:195] Run: which cri-dockerd
	I0501 02:49:51.382845    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0501 02:49:51.408532    4712 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0501 02:49:51.459482    4712 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0501 02:49:51.703156    4712 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0501 02:49:51.928842    4712 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0501 02:49:51.928842    4712 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0501 02:49:51.985157    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:49:52.205484    4712 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0501 02:49:54.768628    4712 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5631253s)
	I0501 02:49:54.782717    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0501 02:49:54.821909    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0501 02:49:54.861989    4712 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0501 02:49:55.097455    4712 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0501 02:49:55.325878    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:49:55.547674    4712 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0501 02:49:55.604800    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0501 02:49:55.648909    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:49:55.873886    4712 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0501 02:49:55.987252    4712 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0501 02:49:56.000254    4712 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0501 02:49:56.009412    4712 start.go:562] Will wait 60s for crictl version
	I0501 02:49:56.021925    4712 ssh_runner.go:195] Run: which crictl
	I0501 02:49:56.041055    4712 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 02:49:56.111426    4712 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0501 02:49:56.124879    4712 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0501 02:49:56.172644    4712 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0501 02:49:56.210144    4712 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0501 02:49:56.210144    4712 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0501 02:49:56.214663    4712 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0501 02:49:56.214663    4712 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0501 02:49:56.214663    4712 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0501 02:49:56.214663    4712 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:88:d7:f1 Flags:up|broadcast|multicast|running}
	I0501 02:49:56.218539    4712 ip.go:210] interface addr: fe80::916c:67e8:6e10:a19b/64
	I0501 02:49:56.218539    4712 ip.go:210] interface addr: 172.28.208.1/20
	I0501 02:49:56.231590    4712 ssh_runner.go:195] Run: grep 172.28.208.1	host.minikube.internal$ /etc/hosts
	I0501 02:49:56.237056    4712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 02:49:56.273064    4712 kubeadm.go:877] updating cluster {Name:ha-136200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP:172.28.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.217.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0501 02:49:56.273064    4712 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0501 02:49:56.283976    4712 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0501 02:49:56.305563    4712 docker.go:685] Got preloaded images: 
	I0501 02:49:56.305585    4712 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0501 02:49:56.319781    4712 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0501 02:49:56.352980    4712 ssh_runner.go:195] Run: which lz4
	I0501 02:49:56.361434    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0501 02:49:56.376111    4712 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0501 02:49:56.383203    4712 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0501 02:49:56.383203    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0501 02:49:58.545920    4712 docker.go:649] duration metric: took 2.1838816s to copy over tarball
	I0501 02:49:58.559153    4712 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0501 02:50:07.024882    4712 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.4656661s)
	I0501 02:50:07.024882    4712 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0501 02:50:07.091273    4712 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0501 02:50:07.117701    4712 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0501 02:50:07.169927    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:50:07.413870    4712 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0501 02:50:10.777827    4712 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.363932s)
	I0501 02:50:10.787955    4712 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0501 02:50:10.813130    4712 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0501 02:50:10.813237    4712 cache_images.go:84] Images are preloaded, skipping loading
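"Images are preloaded, skipping loading" means every image kubeadm will request is already in Docker's image store, so the slower per-image load path is skipped. One way to cross-check the preload against what this Kubernetes version actually needs (a hypothetical verification step, not performed in this run):

    /var/lib/minikube/binaries/v1.30.0/kubeadm config images list --kubernetes-version v1.30.0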
	I0501 02:50:10.813237    4712 kubeadm.go:928] updating node { 172.28.217.218 8443 v1.30.0 docker true true} ...
	I0501 02:50:10.813471    4712 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-136200 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.217.218
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP:172.28.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
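Note that the generated unit clears ExecStart and then redefines it in a systemd drop-in (10-kubeadm.conf, written below) rather than editing the stock kubelet.service. To see the effective unit with all drop-ins applied on the node:

    sudo systemctl cat kubelet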
	I0501 02:50:10.824528    4712 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0501 02:50:10.865306    4712 cni.go:84] Creating CNI manager for ""
	I0501 02:50:10.865306    4712 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0501 02:50:10.865306    4712 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0501 02:50:10.865306    4712 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.28.217.218 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-136200 NodeName:ha-136200 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.28.217.218"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.28.217.218 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0501 02:50:10.866013    4712 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.28.217.218
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-136200"
	  kubeletExtraArgs:
	    node-ip: 172.28.217.218
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.28.217.218"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
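This single document bundles the InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration that are written to /var/tmp/minikube/kubeadm.yaml.new below. It could be sanity-checked without touching node state via a dry run (a hypothetical step, not part of this log):

    sudo /var/lib/minikube/binaries/v1.30.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run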
	
	I0501 02:50:10.866164    4712 kube-vip.go:111] generating kube-vip config ...
	I0501 02:50:10.879856    4712 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0501 02:50:10.916330    4712 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0501 02:50:10.916590    4712 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.28.223.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
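kube-vip runs as a static pod on each control-plane node and claims the HA virtual IP 172.28.223.254 via ARP, using Lease-based leader election (the vip_leaderelection/plndr-cp-lock settings above) plus control-plane load balancing on port 8443. Once the pod is up, the VIP should answer on the API port; /healthz is readable anonymously under default RBAC, so a quick probe looks like this (an assumed check, not performed here):

    curl -k https://172.28.223.254:8443/healthz   # expected: ok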
	I0501 02:50:10.930144    4712 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 02:50:10.946847    4712 binaries.go:44] Found k8s binaries, skipping transfer
	I0501 02:50:10.960617    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0501 02:50:10.980126    4712 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0501 02:50:11.015010    4712 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 02:50:11.046356    4712 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0501 02:50:11.090122    4712 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0501 02:50:11.151082    4712 ssh_runner.go:195] Run: grep 172.28.223.254	control-plane.minikube.internal$ /etc/hosts
	I0501 02:50:11.158193    4712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.223.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 02:50:11.198290    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:50:11.421704    4712 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 02:50:11.457294    4712 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200 for IP: 172.28.217.218
	I0501 02:50:11.457383    4712 certs.go:194] generating shared ca certs ...
	I0501 02:50:11.457383    4712 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:50:11.458373    4712 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0501 02:50:11.458865    4712 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0501 02:50:11.459136    4712 certs.go:256] generating profile certs ...
	I0501 02:50:11.459821    4712 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\client.key
	I0501 02:50:11.459950    4712 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\client.crt with IP's: []
	I0501 02:50:11.600094    4712 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\client.crt ...
	I0501 02:50:11.600094    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\client.crt: {Name:mkd5e4d205a603f84158daca3df4537a47f4507f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:50:11.601407    4712 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\client.key ...
	I0501 02:50:11.601407    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\client.key: {Name:mk0f41aeab078751e43122e83e5a087fadc50acf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:50:11.602800    4712 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.b080b0c6
	I0501 02:50:11.602800    4712 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.b080b0c6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.217.218 172.28.223.254]
	I0501 02:50:11.735634    4712 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.b080b0c6 ...
	I0501 02:50:11.735634    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.b080b0c6: {Name:mk25daf93f931731761fc26133f1d14b1615ea6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:50:11.736662    4712 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.b080b0c6 ...
	I0501 02:50:11.736662    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.b080b0c6: {Name:mk2e8ec633a20ca6bf6f004cdd1aa2dc02923e7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:50:11.738036    4712 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.b080b0c6 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt
	I0501 02:50:11.750002    4712 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.b080b0c6 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key
	I0501 02:50:11.751999    4712 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key
	I0501 02:50:11.751999    4712 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.crt with IP's: []
	I0501 02:50:11.858892    4712 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.crt ...
	I0501 02:50:11.858892    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.crt: {Name:mk545c7bac57fe0475c68dabf35cf7726f7ba6e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:50:11.860058    4712 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key ...
	I0501 02:50:11.860058    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key: {Name:mk197c02f3ddea53477a395060c41fac8b486d54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:50:11.861502    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0501 02:50:11.862042    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0501 02:50:11.862321    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0501 02:50:11.862467    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0501 02:50:11.862467    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0501 02:50:11.862467    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0501 02:50:11.862467    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0501 02:50:11.872340    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0501 02:50:11.872340    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem (1338 bytes)
	W0501 02:50:11.873220    4712 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288_empty.pem, impossibly tiny 0 bytes
	I0501 02:50:11.873220    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0501 02:50:11.873220    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0501 02:50:11.873220    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0501 02:50:11.873220    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0501 02:50:11.874220    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem (1708 bytes)
	I0501 02:50:11.874220    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem -> /usr/share/ca-certificates/14288.pem
	I0501 02:50:11.874220    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /usr/share/ca-certificates/142882.pem
	I0501 02:50:11.875212    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:50:11.877013    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 02:50:11.928037    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 02:50:11.975033    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 02:50:12.024768    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0501 02:50:12.069813    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0501 02:50:12.117563    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0501 02:50:12.166940    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 02:50:12.214744    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 02:50:12.264780    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem --> /usr/share/ca-certificates/14288.pem (1338 bytes)
	I0501 02:50:12.314494    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /usr/share/ca-certificates/142882.pem (1708 bytes)
	I0501 02:50:12.357210    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 02:50:12.407402    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0501 02:50:12.460345    4712 ssh_runner.go:195] Run: openssl version
	I0501 02:50:12.486641    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14288.pem && ln -fs /usr/share/ca-certificates/14288.pem /etc/ssl/certs/14288.pem"
	I0501 02:50:12.524534    4712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14288.pem
	I0501 02:50:12.531940    4712 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:27 /usr/share/ca-certificates/14288.pem
	I0501 02:50:12.545887    4712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14288.pem
	I0501 02:50:12.569538    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14288.pem /etc/ssl/certs/51391683.0"
	I0501 02:50:12.603111    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142882.pem && ln -fs /usr/share/ca-certificates/142882.pem /etc/ssl/certs/142882.pem"
	I0501 02:50:12.640545    4712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142882.pem
	I0501 02:50:12.648489    4712 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:27 /usr/share/ca-certificates/142882.pem
	I0501 02:50:12.664745    4712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142882.pem
	I0501 02:50:12.689236    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142882.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 02:50:12.722220    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 02:50:12.763152    4712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:50:12.771274    4712 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:12 /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:50:12.785811    4712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:50:12.809601    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
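The pattern in this block is the standard OpenSSL CA-directory layout: each trusted certificate is symlinked into /etc/ssl/certs under its subject hash with a .0 suffix, which is why the log computes `openssl x509 -hash` before creating each link. The minikubeCA link name seen above can be reproduced with:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints: b5213941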
	I0501 02:50:12.843815    4712 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 02:50:12.851182    4712 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0501 02:50:12.851596    4712 kubeadm.go:391] StartCluster: {Name:ha-136200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP:172.28.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.217.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:50:12.861439    4712 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0501 02:50:12.897822    4712 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0501 02:50:12.930863    4712 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 02:50:12.967142    4712 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 02:50:12.989079    4712 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 02:50:12.989174    4712 kubeadm.go:156] found existing configuration files:
	
	I0501 02:50:13.002144    4712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 02:50:13.022983    4712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 02:50:13.037263    4712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 02:50:13.070061    4712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 02:50:13.088170    4712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 02:50:13.104788    4712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 02:50:13.142331    4712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 02:50:13.161295    4712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 02:50:13.176372    4712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 02:50:13.217242    4712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 02:50:13.236623    4712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 02:50:13.250242    4712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0501 02:50:13.273719    4712 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0501 02:50:13.796086    4712 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0501 02:50:29.771938    4712 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0501 02:50:29.771938    4712 kubeadm.go:309] [preflight] Running pre-flight checks
	I0501 02:50:29.771938    4712 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0501 02:50:29.772562    4712 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0501 02:50:29.772731    4712 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0501 02:50:29.772731    4712 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0501 02:50:29.775841    4712 out.go:204]   - Generating certificates and keys ...
	I0501 02:50:29.775841    4712 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0501 02:50:29.776550    4712 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0501 02:50:29.776704    4712 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0501 02:50:29.776918    4712 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0501 02:50:29.777081    4712 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0501 02:50:29.777278    4712 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0501 02:50:29.777278    4712 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0501 02:50:29.777278    4712 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-136200 localhost] and IPs [172.28.217.218 127.0.0.1 ::1]
	I0501 02:50:29.777278    4712 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0501 02:50:29.777841    4712 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-136200 localhost] and IPs [172.28.217.218 127.0.0.1 ::1]
	I0501 02:50:29.778067    4712 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0501 02:50:29.778150    4712 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0501 02:50:29.778250    4712 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0501 02:50:29.778341    4712 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0501 02:50:29.778421    4712 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0501 02:50:29.778724    4712 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0501 02:50:29.778804    4712 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0501 02:50:29.778987    4712 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0501 02:50:29.779083    4712 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0501 02:50:29.779174    4712 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0501 02:50:29.779418    4712 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0501 02:50:29.781433    4712 out.go:204]   - Booting up control plane ...
	I0501 02:50:29.781433    4712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0501 02:50:29.781986    4712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0501 02:50:29.782154    4712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0501 02:50:29.782509    4712 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0501 02:50:29.782778    4712 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0501 02:50:29.782833    4712 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0501 02:50:29.783188    4712 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0501 02:50:29.783366    4712 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0501 02:50:29.783611    4712 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.012148578s
	I0501 02:50:29.783792    4712 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0501 02:50:29.783792    4712 kubeadm.go:309] [api-check] The API server is healthy after 9.161500426s
	I0501 02:50:29.783792    4712 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0501 02:50:29.784343    4712 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0501 02:50:29.784449    4712 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0501 02:50:29.784907    4712 kubeadm.go:309] [mark-control-plane] Marking the node ha-136200 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0501 02:50:29.785014    4712 kubeadm.go:309] [bootstrap-token] Using token: bebbcj.jj3pub0bsd9tcu95
	I0501 02:50:29.789897    4712 out.go:204]   - Configuring RBAC rules ...
	I0501 02:50:29.789897    4712 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0501 02:50:29.790579    4712 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0501 02:50:29.790579    4712 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0501 02:50:29.791324    4712 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0501 02:50:29.791589    4712 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0501 02:50:29.791711    4712 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0501 02:50:29.791958    4712 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0501 02:50:29.791958    4712 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0501 02:50:29.791958    4712 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0501 02:50:29.791958    4712 kubeadm.go:309] 
	I0501 02:50:29.791958    4712 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0501 02:50:29.791958    4712 kubeadm.go:309] 
	I0501 02:50:29.792580    4712 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0501 02:50:29.792580    4712 kubeadm.go:309] 
	I0501 02:50:29.792580    4712 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0501 02:50:29.792580    4712 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0501 02:50:29.792580    4712 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0501 02:50:29.792580    4712 kubeadm.go:309] 
	I0501 02:50:29.792580    4712 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0501 02:50:29.793244    4712 kubeadm.go:309] 
	I0501 02:50:29.793244    4712 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0501 02:50:29.793244    4712 kubeadm.go:309] 
	I0501 02:50:29.793244    4712 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0501 02:50:29.793244    4712 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0501 02:50:29.793244    4712 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0501 02:50:29.793868    4712 kubeadm.go:309] 
	I0501 02:50:29.794174    4712 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0501 02:50:29.794395    4712 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0501 02:50:29.794428    4712 kubeadm.go:309] 
	I0501 02:50:29.794531    4712 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token bebbcj.jj3pub0bsd9tcu95 \
	I0501 02:50:29.794720    4712 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:4798dcaffc6298c78af5ad06745de18c1231853c65c9db4cd09ab1be96f5e875 \
	I0501 02:50:29.794720    4712 kubeadm.go:309] 	--control-plane 
	I0501 02:50:29.794720    4712 kubeadm.go:309] 
	I0501 02:50:29.794720    4712 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0501 02:50:29.794720    4712 kubeadm.go:309] 
	I0501 02:50:29.794720    4712 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token bebbcj.jj3pub0bsd9tcu95 \
	I0501 02:50:29.795522    4712 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:4798dcaffc6298c78af5ad06745de18c1231853c65c9db4cd09ab1be96f5e875 
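The --discovery-token-ca-cert-hash value printed above is the SHA-256 digest of the cluster CA's DER-encoded public key. It can be recomputed from the CA certificate minikube manages (the standard recipe from the kubeadm documentation, pointed at the cert path used earlier in this log):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'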
	I0501 02:50:29.795582    4712 cni.go:84] Creating CNI manager for ""
	I0501 02:50:29.795642    4712 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0501 02:50:29.798321    4712 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0501 02:50:29.815739    4712 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0501 02:50:29.823882    4712 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0501 02:50:29.823882    4712 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0501 02:50:29.880076    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0501 02:50:30.703674    4712 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0501 02:50:30.720641    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-136200 minikube.k8s.io/updated_at=2024_05_01T02_50_30_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e minikube.k8s.io/name=ha-136200 minikube.k8s.io/primary=true
	I0501 02:50:30.720641    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:30.736553    4712 ops.go:34] apiserver oom_adj: -16
	I0501 02:50:30.914646    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:31.422356    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:31.924569    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:32.422489    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:32.916374    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:33.419951    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:33.922300    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:34.426730    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:34.915815    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:35.415601    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:35.917473    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:36.419572    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:36.923752    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:37.424859    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:37.926096    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:38.415957    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:38.915894    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:39.417286    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:39.917110    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:40.418538    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:40.919363    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:41.420336    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:41.914423    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:42.068730    4712 kubeadm.go:1107] duration metric: took 11.364941s to wait for elevateKubeSystemPrivileges
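The burst of identical `kubectl get sa default` calls above is minikube polling until the default ServiceAccount exists, which signals that the controller-manager's service-account machinery is ready for the RBAC binding it just created. A shell equivalent of that wait loop (a sketch):

    until sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done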
	W0501 02:50:42.068870    4712 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0501 02:50:42.068934    4712 kubeadm.go:393] duration metric: took 29.2171223s to StartCluster
	I0501 02:50:42.069035    4712 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:50:42.069065    4712 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0501 02:50:42.070934    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:50:42.072021    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0501 02:50:42.072021    4712 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.28.217.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0501 02:50:42.072021    4712 start.go:240] waiting for startup goroutines ...
	I0501 02:50:42.072021    4712 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0501 02:50:42.072021    4712 addons.go:69] Setting storage-provisioner=true in profile "ha-136200"
	I0501 02:50:42.072578    4712 addons.go:234] Setting addon storage-provisioner=true in "ha-136200"
	I0501 02:50:42.072715    4712 host.go:66] Checking if "ha-136200" exists ...
	I0501 02:50:42.072765    4712 addons.go:69] Setting default-storageclass=true in profile "ha-136200"
	I0501 02:50:42.072820    4712 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-136200"
	I0501 02:50:42.073003    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:50:42.073773    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:50:42.074332    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:50:42.237653    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.28.208.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0501 02:50:42.682536    4712 start.go:946] {"host.minikube.internal": 172.28.208.1} host record injected into CoreDNS's ConfigMap
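The pipeline above rewrites the CoreDNS Corefile in place, inserting a hosts{} block that maps host.minikube.internal to the Hyper-V gateway ahead of the forward plugin so pods can reach the Windows host by name. To confirm the record landed (same kubeconfig as above):

    sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'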
	I0501 02:50:44.322881    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:50:44.322881    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:44.325924    4712 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 02:50:44.323327    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:50:44.325924    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:44.328653    4712 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 02:50:44.328653    4712 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0501 02:50:44.328653    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:50:44.329300    4712 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0501 02:50:44.330211    4712 kapi.go:59] client config for ha-136200: &rest.Config{Host:"https://172.28.223.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-136200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-136200\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1b95ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0501 02:50:44.331266    4712 cert_rotation.go:137] Starting client certificate rotation controller
	I0501 02:50:44.331692    4712 addons.go:234] Setting addon default-storageclass=true in "ha-136200"
	I0501 02:50:44.331692    4712 host.go:66] Checking if "ha-136200" exists ...
	I0501 02:50:44.332839    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:50:46.572964    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:50:46.572964    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:46.573962    4712 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0501 02:50:46.573962    4712 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0501 02:50:46.573962    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:50:46.693061    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:50:46.693131    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:46.693256    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:50:48.834494    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:50:48.834494    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:48.834701    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:50:49.380882    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:50:49.380882    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:49.381777    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:50:49.540602    4712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 02:50:51.474264    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:50:51.474264    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:51.475208    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:50:51.629340    4712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0501 02:50:51.811276    4712 round_trippers.go:463] GET https://172.28.223.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0501 02:50:51.811902    4712 round_trippers.go:469] Request Headers:
	I0501 02:50:51.811902    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:50:51.811902    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:50:51.826458    4712 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0501 02:50:51.827458    4712 round_trippers.go:463] PUT https://172.28.223.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0501 02:50:51.827458    4712 round_trippers.go:469] Request Headers:
	I0501 02:50:51.827458    4712 round_trippers.go:473]     Content-Type: application/json
	I0501 02:50:51.827458    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:50:51.827458    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:50:51.831221    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:50:51.834740    4712 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0501 02:50:51.838052    4712 addons.go:505] duration metric: took 9.7659586s for enable addons: enabled=[storage-provisioner default-storageclass]
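
The GET on storageclasses followed by the PUT of "standard" above is the default-storageclass addon stamping the default-class annotation onto the provisioned StorageClass. A minimal client-go sketch of that update, assuming kubeconfig access from the node; this is an illustration of the API call, not minikube's exact code:

    package main

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// kubeconfig path taken from the log's kubectl invocations
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), "standard", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	if sc.Annotations == nil {
    		sc.Annotations = map[string]string{}
    	}
    	// the annotation that marks a StorageClass as the cluster default
    	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
    	if _, err := cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{}); err != nil {
    		panic(err)
    	}
    }
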
	I0501 02:50:51.838052    4712 start.go:245] waiting for cluster config update ...
	I0501 02:50:51.838052    4712 start.go:254] writing updated cluster config ...
	I0501 02:50:51.841165    4712 out.go:177] 
	I0501 02:50:51.854479    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:50:51.854479    4712 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json ...
	I0501 02:50:51.861940    4712 out.go:177] * Starting "ha-136200-m02" control-plane node in "ha-136200" cluster
	I0501 02:50:51.865640    4712 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0501 02:50:51.865640    4712 cache.go:56] Caching tarball of preloaded images
	I0501 02:50:51.865640    4712 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0501 02:50:51.866174    4712 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0501 02:50:51.866462    4712 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json ...
	I0501 02:50:51.868358    4712 start.go:360] acquireMachinesLock for ha-136200-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 02:50:51.868358    4712 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-136200-m02"
	I0501 02:50:51.869005    4712 start.go:93] Provisioning new machine with config: &{Name:ha-136200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP:172.28.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.217.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0501 02:50:51.869070    4712 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0501 02:50:51.871919    4712 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0501 02:50:51.872184    4712 start.go:159] libmachine.API.Create for "ha-136200" (driver="hyperv")
	I0501 02:50:51.872184    4712 client.go:168] LocalClient.Create starting
	I0501 02:50:51.872730    4712 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0501 02:50:51.872991    4712 main.go:141] libmachine: Decoding PEM data...
	I0501 02:50:51.872991    4712 main.go:141] libmachine: Parsing certificate...
	I0501 02:50:51.872991    4712 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0501 02:50:51.872991    4712 main.go:141] libmachine: Decoding PEM data...
	I0501 02:50:51.872991    4712 main.go:141] libmachine: Parsing certificate...
	I0501 02:50:51.872991    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0501 02:50:53.846039    4712 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0501 02:50:53.846039    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:53.846893    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0501 02:50:55.665592    4712 main.go:141] libmachine: [stdout =====>] : False
	
	I0501 02:50:55.665592    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:55.665592    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0501 02:50:57.208535    4712 main.go:141] libmachine: [stdout =====>] : True
	
	I0501 02:50:57.208535    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:57.208630    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0501 02:51:00.945176    4712 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0501 02:51:00.945176    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:00.949038    4712 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso...
	I0501 02:51:01.496342    4712 main.go:141] libmachine: Creating SSH key...
	I0501 02:51:02.272582    4712 main.go:141] libmachine: Creating VM...
	I0501 02:51:02.272582    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0501 02:51:05.181880    4712 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0501 02:51:05.181880    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:05.182621    4712 main.go:141] libmachine: Using switch "Default Switch"
	I0501 02:51:05.182621    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0501 02:51:07.021151    4712 main.go:141] libmachine: [stdout =====>] : True
	
	I0501 02:51:07.022208    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:07.022208    4712 main.go:141] libmachine: Creating VHD
	I0501 02:51:07.022261    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0501 02:51:10.800515    4712 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : F5C7D5B1-6A19-4B92-8073-0E024A878A77
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0501 02:51:10.800843    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:10.800925    4712 main.go:141] libmachine: Writing magic tar header
	I0501 02:51:10.800925    4712 main.go:141] libmachine: Writing SSH key tar header
	I0501 02:51:10.813657    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0501 02:51:14.013099    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:14.013099    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:14.013713    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\disk.vhd' -SizeBytes 20000MB
	I0501 02:51:16.613734    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:16.613973    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:16.614122    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-136200-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0501 02:51:20.349642    4712 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-136200-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0501 02:51:20.349642    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:20.349642    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-136200-m02 -DynamicMemoryEnabled $false
	I0501 02:51:22.595804    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:22.595804    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:22.596839    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-136200-m02 -Count 2
	I0501 02:51:24.783891    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:24.783891    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:24.783891    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-136200-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\boot2docker.iso'
	I0501 02:51:27.309419    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:27.309419    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:27.310044    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-136200-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\disk.vhd'
	I0501 02:51:29.998833    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:29.998833    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:29.998833    4712 main.go:141] libmachine: Starting VM...
	I0501 02:51:29.998833    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-136200-m02
	I0501 02:51:33.080959    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:33.080959    4712 main.go:141] libmachine: [stderr =====>] : 
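
Every [executing ==>] / [stdout =====>] pair above is one non-interactive PowerShell round-trip from the Go driver. A self-contained sketch of that invocation pattern; psRun is a hypothetical helper for illustration, not libmachine's API:

    package main

    import (
    	"bytes"
    	"fmt"
    	"os/exec"
    )

    const powershell = `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`

    // psRun mirrors the logged pattern: run one cmdlet non-interactively and
    // capture stdout and stderr separately, as the [stdout]/[stderr] lines show.
    func psRun(command string) (stdout, stderr string, err error) {
    	cmd := exec.Command(powershell, "-NoProfile", "-NonInteractive", command)
    	var out, errb bytes.Buffer
    	cmd.Stdout, cmd.Stderr = &out, &errb
    	err = cmd.Run()
    	return out.String(), errb.String(), err
    }

    func main() {
    	out, _, err := psRun(`( Hyper-V\Get-VM ha-136200-m02 ).state`)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(out) // e.g. "Running"
    }
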
	I0501 02:51:33.080959    4712 main.go:141] libmachine: Waiting for host to start...
	I0501 02:51:33.080959    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:51:35.347158    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:51:35.348049    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:35.348049    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:51:37.880551    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:37.881580    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:38.889792    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:51:41.091102    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:51:41.091102    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:41.091533    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:51:43.621201    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:43.621813    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:44.622350    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:51:46.859140    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:51:46.859140    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:46.859140    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:51:49.413174    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:49.413174    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:50.423751    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:51:52.633336    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:51:52.633336    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:52.634051    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:51:55.225142    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:55.225142    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:56.229253    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:51:58.424704    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:51:58.424704    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:58.425395    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:01.088984    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:01.088984    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:01.089224    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:03.247035    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:03.247253    4712 main.go:141] libmachine: [stderr =====>] : 
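
The alternating state/IP queries between 02:51:33 and 02:52:01 are a poll loop: the adapter reports no address until the guest's network stack is up, so the driver retries until an IP appears. A sketch of the same wait, with the timeout value being an assumption:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // waitForIP polls the VM's first adapter address until Hyper-V reports one,
    // matching the roughly one-second retry cadence visible in the log above.
    func waitForIP(vm string, timeout time.Duration) (string, error) {
    	query := fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm)
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		out, err := exec.Command(
    			`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
    			"-NoProfile", "-NonInteractive", query).Output()
    		if err != nil {
    			return "", err
    		}
    		if ip := strings.TrimSpace(string(out)); ip != "" {
    			return ip, nil // e.g. 172.28.213.142
    		}
    		time.Sleep(time.Second)
    	}
    	return "", fmt.Errorf("timed out waiting for %s to report an IP", vm)
    }

    func main() {
    	ip, err := waitForIP("ha-136200-m02", 5*time.Minute)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(ip)
    }
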
	I0501 02:52:03.247291    4712 machine.go:94] provisionDockerMachine start ...
	I0501 02:52:03.247449    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:05.493082    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:05.493179    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:05.493179    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:08.078374    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:08.078374    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:08.085777    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:52:08.101463    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.142 22 <nil> <nil>}
	I0501 02:52:08.101463    4712 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 02:52:08.244557    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0501 02:52:08.244557    4712 buildroot.go:166] provisioning hostname "ha-136200-m02"
	I0501 02:52:08.244557    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:10.395193    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:10.395193    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:10.396068    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:12.968300    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:12.968300    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:12.975111    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:52:12.975111    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.142 22 <nil> <nil>}
	I0501 02:52:12.975111    4712 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-136200-m02 && echo "ha-136200-m02" | sudo tee /etc/hostname
	I0501 02:52:13.142328    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-136200-m02
	
	I0501 02:52:13.142479    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:15.318537    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:15.318537    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:15.318537    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:17.993085    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:17.993267    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:18.000242    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:52:18.000687    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.142 22 <nil> <nil>}
	I0501 02:52:18.000687    4712 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-136200-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-136200-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-136200-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 02:52:18.164116    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: 
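
All of the remote steps above run over an SSH session built from the machine's generated id_rsa (the sshutil lines). A minimal golang.org/x/crypto/ssh sketch of that client, using the IP, user, and key path from this log; a sketch of the pattern, not minikube's sshutil implementation:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile(`C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\id_rsa`)
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // the harness does not pin host keys
    	}
    	client, err := ssh.Dial("tcp", "172.28.213.142:22", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput("hostname")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%s", out)
    }
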
	I0501 02:52:18.164116    4712 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0501 02:52:18.164235    4712 buildroot.go:174] setting up certificates
	I0501 02:52:18.164235    4712 provision.go:84] configureAuth start
	I0501 02:52:18.164235    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:20.323803    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:20.324816    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:20.324954    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:22.884982    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:22.884982    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:22.884982    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:25.037258    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:25.038231    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:25.038262    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:27.637529    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:27.638462    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:27.638462    4712 provision.go:143] copyHostCerts
	I0501 02:52:27.638661    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0501 02:52:27.638979    4712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0501 02:52:27.639093    4712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0501 02:52:27.639613    4712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0501 02:52:27.640827    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0501 02:52:27.641053    4712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0501 02:52:27.641053    4712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0501 02:52:27.641053    4712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0501 02:52:27.642372    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0501 02:52:27.642643    4712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0501 02:52:27.642762    4712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0501 02:52:27.643264    4712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0501 02:52:27.644242    4712 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-136200-m02 san=[127.0.0.1 172.28.213.142 ha-136200-m02 localhost minikube]
	I0501 02:52:27.843189    4712 provision.go:177] copyRemoteCerts
	I0501 02:52:27.855361    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 02:52:27.855361    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:29.952775    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:29.952775    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:29.953607    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:32.549323    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:32.549356    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:32.549913    4712 sshutil.go:53] new ssh client: &{IP:172.28.213.142 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\id_rsa Username:docker}
	I0501 02:52:32.667202    4712 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8118058s)
	I0501 02:52:32.667353    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0501 02:52:32.667882    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0501 02:52:32.721631    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0501 02:52:32.721631    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 02:52:32.771533    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0501 02:52:32.772177    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0501 02:52:32.825532    4712 provision.go:87] duration metric: took 14.6610374s to configureAuth
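
configureAuth issues a Docker server certificate whose SANs are exactly the list logged at 02:52:27: loopback, the node IP, and the host names. A crypto/x509 sketch of producing such a cert from an existing CA; newServerCert is illustrative, not minikube's helper:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"math/big"
    	"net"
    	"time"
    )

    // newServerCert signs a server certificate carrying the SANs from the log:
    // IP addresses plus DNS names. The CA pair comes from ca.pem/ca-key.pem.
    func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) (certDER []byte, key *rsa.PrivateKey, err error) {
    	key, err = rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-136200-m02"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.28.213.142")},
    		DNSNames:     []string{"ha-136200-m02", "localhost", "minikube"},
    	}
    	certDER, err = x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
    	return certDER, key, err
    }
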
	I0501 02:52:32.825532    4712 buildroot.go:189] setting minikube options for container-runtime
	I0501 02:52:32.826094    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:52:32.826229    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:34.944371    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:34.945326    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:34.945326    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:37.500533    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:37.500590    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:37.506891    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:52:37.507395    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.142 22 <nil> <nil>}
	I0501 02:52:37.507476    4712 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0501 02:52:37.655757    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0501 02:52:37.655757    4712 buildroot.go:70] root file system type: tmpfs
	I0501 02:52:37.655757    4712 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0501 02:52:37.656297    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:39.802845    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:39.802845    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:39.803012    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:42.365445    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:42.366335    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:42.372086    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:52:42.372086    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.142 22 <nil> <nil>}
	I0501 02:52:42.372086    4712 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.28.217.218"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0501 02:52:42.560633    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.28.217.218
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0501 02:52:42.560698    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:44.723552    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:44.723552    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:44.724351    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:47.350624    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:47.350694    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:47.356560    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:52:47.356887    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.142 22 <nil> <nil>}
	I0501 02:52:47.356887    4712 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0501 02:52:49.658910    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0501 02:52:49.658910    4712 machine.go:97] duration metric: took 46.4112065s to provisionDockerMachine
	I0501 02:52:49.659442    4712 client.go:171] duration metric: took 1m57.7858628s to LocalClient.Create
	I0501 02:52:49.659442    4712 start.go:167] duration metric: took 1m57.786395s to libmachine.API.Create "ha-136200"
	I0501 02:52:49.659503    4712 start.go:293] postStartSetup for "ha-136200-m02" (driver="hyperv")
	I0501 02:52:49.659537    4712 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 02:52:49.675636    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 02:52:49.675636    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:51.837386    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:51.837492    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:51.837492    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:54.474409    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:54.475041    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:54.475353    4712 sshutil.go:53] new ssh client: &{IP:172.28.213.142 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\id_rsa Username:docker}
	I0501 02:52:54.588525    4712 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9128536s)
	I0501 02:52:54.605879    4712 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 02:52:54.614578    4712 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 02:52:54.614578    4712 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0501 02:52:54.615019    4712 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0501 02:52:54.615983    4712 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> 142882.pem in /etc/ssl/certs
	I0501 02:52:54.616061    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /etc/ssl/certs/142882.pem
	I0501 02:52:54.630716    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 02:52:54.652380    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /etc/ssl/certs/142882.pem (1708 bytes)
	I0501 02:52:54.707179    4712 start.go:296] duration metric: took 5.0475618s for postStartSetup
	I0501 02:52:54.709671    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:56.857631    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:56.857631    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:56.858662    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:59.468337    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:59.468783    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:59.468965    4712 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json ...
	I0501 02:52:59.470910    4712 start.go:128] duration metric: took 2m7.6009059s to createHost
	I0501 02:52:59.471488    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:53:01.642267    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:53:01.642267    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:01.642528    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:53:04.217977    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:53:04.217977    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:04.224906    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:53:04.225471    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.142 22 <nil> <nil>}
	I0501 02:53:04.225635    4712 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0501 02:53:04.373720    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714531984.377348123
	
	I0501 02:53:04.373720    4712 fix.go:216] guest clock: 1714531984.377348123
	I0501 02:53:04.373720    4712 fix.go:229] Guest: 2024-05-01 02:53:04.377348123 +0000 UTC Remote: 2024-05-01 02:52:59.4709109 +0000 UTC m=+340.350757801 (delta=4.906437223s)
	I0501 02:53:04.373851    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:53:06.539924    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:53:06.539924    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:06.540324    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:53:09.204905    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:53:09.204905    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:09.211685    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:53:09.212163    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.142 22 <nil> <nil>}
	I0501 02:53:09.212163    4712 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714531984
	I0501 02:53:09.386381    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed May  1 02:53:04 UTC 2024
	
	I0501 02:53:09.386381    4712 fix.go:236] clock set: Wed May  1 02:53:04 UTC 2024
	 (err=<nil>)
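
The clock fix above reads `date +%s.%N` on the guest, compares it with the host (delta=4.906437223s here), and resets the guest with `sudo date -s`. A sketch of the drift computation, with the one-second threshold being an assumption:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    func main() {
    	// output of `date +%s.%N` captured from the guest over SSH (value from the log)
    	out := "1714531984.377348123"
    	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		panic(err)
    	}
    	nsec, err := strconv.ParseInt(parts[1], 10, 64)
    	if err != nil {
    		panic(err)
    	}
    	guest := time.Unix(sec, nsec)
    	drift := time.Since(guest)
    	fmt.Printf("guest clock drift: %v\n", drift)
    	if drift < -time.Second || drift > time.Second { // threshold is an assumption
    		// the log's remedy: run `sudo date -s @<seconds>` on the guest
    		fmt.Printf("would run: sudo date -s @%d\n", time.Now().Unix())
    	}
    }
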
	I0501 02:53:09.386381    4712 start.go:83] releasing machines lock for "ha-136200-m02", held for 2m17.5170158s
	I0501 02:53:09.386381    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:53:11.545475    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:53:11.545475    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:11.545938    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:53:14.171918    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:53:14.171918    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:14.175393    4712 out.go:177] * Found network options:
	I0501 02:53:14.178428    4712 out.go:177]   - NO_PROXY=172.28.217.218
	W0501 02:53:14.181305    4712 proxy.go:119] fail to check proxy env: Error ip not in block
	I0501 02:53:14.183961    4712 out.go:177]   - NO_PROXY=172.28.217.218
	W0501 02:53:14.186016    4712 proxy.go:119] fail to check proxy env: Error ip not in block
	W0501 02:53:14.186987    4712 proxy.go:119] fail to check proxy env: Error ip not in block
	I0501 02:53:14.190185    4712 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 02:53:14.190185    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:53:14.201210    4712 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0501 02:53:14.201210    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:53:16.402596    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:53:16.402596    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:16.402596    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:53:16.404382    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:53:16.404922    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:16.404922    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:53:19.202467    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:53:19.202936    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:19.203019    4712 sshutil.go:53] new ssh client: &{IP:172.28.213.142 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\id_rsa Username:docker}
	I0501 02:53:19.238045    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:53:19.238494    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:19.238494    4712 sshutil.go:53] new ssh client: &{IP:172.28.213.142 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\id_rsa Username:docker}
	I0501 02:53:19.303673    4712 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.1023631s)
	W0501 02:53:19.303730    4712 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 02:53:19.322303    4712 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 02:53:19.425813    4712 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.234512s)
	I0501 02:53:19.425813    4712 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0501 02:53:19.425869    4712 start.go:494] detecting cgroup driver to use...
	I0501 02:53:19.426179    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 02:53:19.480110    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0501 02:53:19.516304    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0501 02:53:19.540429    4712 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0501 02:53:19.554725    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0501 02:53:19.592793    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 02:53:19.638122    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0501 02:53:19.676636    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 02:53:19.716798    4712 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 02:53:19.755079    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0501 02:53:19.792962    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0501 02:53:19.828507    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0501 02:53:19.864630    4712 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 02:53:19.900003    4712 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 02:53:19.933687    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:53:20.164043    4712 ssh_runner.go:195] Run: sudo systemctl restart containerd
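
The run of sed edits above rewrites /etc/containerd/config.toml for the cgroupfs driver before restarting containerd. The Go equivalent of one of those edits, shown only to make the regex's anchoring and capture group explicit; the sample input line is an assumption about the config's shape:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	conf := []byte("  [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n    SystemdCgroup = true\n")
    	// same effect as: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
    	out := re.ReplaceAll(conf, []byte("${1}SystemdCgroup = false"))
    	fmt.Print(string(out))
    }
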
	I0501 02:53:20.200981    4712 start.go:494] detecting cgroup driver to use...
	I0501 02:53:20.214486    4712 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0501 02:53:20.252522    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 02:53:20.291404    4712 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 02:53:20.342446    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 02:53:20.384719    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 02:53:20.433485    4712 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0501 02:53:20.493558    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 02:53:20.521863    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 02:53:20.572266    4712 ssh_runner.go:195] Run: which cri-dockerd
	I0501 02:53:20.592650    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0501 02:53:20.612894    4712 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0501 02:53:20.662972    4712 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0501 02:53:20.893661    4712 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0501 02:53:21.103995    4712 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0501 02:53:21.104092    4712 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0501 02:53:21.153897    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:53:21.367769    4712 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0501 02:53:23.926290    4712 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5584356s)
	I0501 02:53:23.942886    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0501 02:53:23.985733    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0501 02:53:24.029327    4712 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0501 02:53:24.262777    4712 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0501 02:53:24.474412    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:53:24.701708    4712 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0501 02:53:24.747995    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0501 02:53:24.789968    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:53:25.013627    4712 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0501 02:53:25.132301    4712 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0501 02:53:25.147412    4712 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0501 02:53:25.161719    4712 start.go:562] Will wait 60s for crictl version
	I0501 02:53:25.177972    4712 ssh_runner.go:195] Run: which crictl
	I0501 02:53:25.198441    4712 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 02:53:25.257309    4712 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0501 02:53:25.270183    4712 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0501 02:53:25.317675    4712 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0501 02:53:25.366446    4712 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0501 02:53:25.369267    4712 out.go:177]   - env NO_PROXY=172.28.217.218
	I0501 02:53:25.371205    4712 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0501 02:53:25.375182    4712 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0501 02:53:25.375182    4712 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0501 02:53:25.375182    4712 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0501 02:53:25.375182    4712 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:88:d7:f1 Flags:up|broadcast|multicast|running}
	I0501 02:53:25.380319    4712 ip.go:210] interface addr: fe80::916c:67e8:6e10:a19b/64
	I0501 02:53:25.380407    4712 ip.go:210] interface addr: 172.28.208.1/20
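The ip.go lines above walk the host's adapters and pick the first one whose name starts with the requested prefix, logging the rejects along the way. A self-contained sketch of that prefix-match lookup (simplified from the behavior the log shows):

package main

import (
	"fmt"
	"net"
	"strings"
)

// getIPForInterface returns the first interface whose name begins with
// prefix, reporting non-matches the same way ip.go does.
func getIPForInterface(prefix string) (net.Interface, error) {
	ifaces, err := net.Interfaces()
	if err != nil {
		return net.Interface{}, err
	}
	for _, i := range ifaces {
		if strings.HasPrefix(i.Name, prefix) {
			return i, nil
		}
		fmt.Printf("%q does not match prefix %q\n", i.Name, prefix)
	}
	return net.Interface{}, fmt.Errorf("no interface matching prefix %q", prefix)
}

func main() {
	if i, err := getIPForInterface("vEthernet (Default Switch)"); err == nil {
		addrs, _ := i.Addrs()
		fmt.Println("found:", i.Name, addrs)
	}
}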
	I0501 02:53:25.393209    4712 ssh_runner.go:195] Run: grep 172.28.208.1	host.minikube.internal$ /etc/hosts
	I0501 02:53:25.400057    4712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
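That bash one-liner is an upsert on /etc/hosts: filter out any stale line ending in "<tab>host.minikube.internal", append the fresh entry, and copy the temp file back into place. A small sketch that reproduces the command string:

package main

import "fmt"

// hostsUpsertCmd mirrors the idiom logged above (the same pattern is used
// later for control-plane.minikube.internal). The echo argument contains a
// literal tab between IP and name.
func hostsUpsertCmd(ip, name string) string {
	return fmt.Sprintf(`{ grep -v $'\t%s$' "/etc/hosts"; echo "%s	%s"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"`,
		name, ip, name)
}

func main() {
	fmt.Println(hostsUpsertCmd("172.28.208.1", "host.minikube.internal"))
}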
	I0501 02:53:25.423648    4712 mustload.go:65] Loading cluster: ha-136200
	I0501 02:53:25.424611    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:53:25.425544    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:53:27.528822    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:53:27.528822    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:27.528822    4712 host.go:66] Checking if "ha-136200" exists ...
	I0501 02:53:27.530295    4712 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200 for IP: 172.28.213.142
	I0501 02:53:27.530371    4712 certs.go:194] generating shared ca certs ...
	I0501 02:53:27.530371    4712 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:53:27.531276    4712 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0501 02:53:27.531739    4712 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0501 02:53:27.531846    4712 certs.go:256] generating profile certs ...
	I0501 02:53:27.532594    4712 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\client.key
	I0501 02:53:27.532748    4712 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.e4130e12
	I0501 02:53:27.532985    4712 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.e4130e12 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.217.218 172.28.213.142 172.28.223.254]
	I0501 02:53:27.709722    4712 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.e4130e12 ...
	I0501 02:53:27.709722    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.e4130e12: {Name:mk2a82749362965014fb3e2d8d662f7a4e7e9cdd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:53:27.711740    4712 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.e4130e12 ...
	I0501 02:53:27.711740    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.e4130e12: {Name:mkb73c4ed44f1dd1b8f82d46a1302578ac6f367b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:53:27.712120    4712 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.e4130e12 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt
	I0501 02:53:27.726267    4712 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.e4130e12 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key
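The apiserver serving cert generated above must carry every address a client may dial: the in-cluster service IP, localhost, both control-plane node IPs, and the kube-vip VIP, hence the six-entry IP SAN list in the crypto.go line. A compact sketch of producing such a cert with Go's crypto/x509 (self-signed here for brevity; minikube signs with its cluster CA):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The same six IP SANs the log reports for apiserver.crt.e4130e12.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("172.28.217.218"), net.ParseIP("172.28.213.142"), net.ParseIP("172.28.223.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}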
	I0501 02:53:27.727349    4712 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key
	I0501 02:53:27.727349    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0501 02:53:27.727349    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0501 02:53:27.728383    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0501 02:53:27.728582    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0501 02:53:27.728825    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0501 02:53:27.729015    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0501 02:53:27.729253    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0501 02:53:27.729653    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0501 02:53:27.729899    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem (1338 bytes)
	W0501 02:53:27.730538    4712 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288_empty.pem, impossibly tiny 0 bytes
	I0501 02:53:27.730538    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0501 02:53:27.730927    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0501 02:53:27.731437    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0501 02:53:27.731866    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0501 02:53:27.732310    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem (1708 bytes)
	I0501 02:53:27.732905    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:53:27.733131    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem -> /usr/share/ca-certificates/14288.pem
	I0501 02:53:27.733384    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /usr/share/ca-certificates/142882.pem
	I0501 02:53:27.733671    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:53:29.906327    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:53:29.906327    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:29.906678    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:53:32.469869    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:53:32.469869    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:32.470407    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:53:32.580880    4712 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0501 02:53:32.588963    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0501 02:53:32.624993    4712 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0501 02:53:32.635801    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0501 02:53:32.670832    4712 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0501 02:53:32.678812    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0501 02:53:32.713791    4712 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0501 02:53:32.721308    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0501 02:53:32.760244    4712 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0501 02:53:32.767565    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0501 02:53:32.804387    4712 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0501 02:53:32.811905    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0501 02:53:32.832394    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 02:53:32.885891    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 02:53:32.936137    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 02:53:32.994824    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0501 02:53:33.054042    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0501 02:53:33.105998    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0501 02:53:33.156026    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 02:53:33.205426    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 02:53:33.264385    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 02:53:33.316776    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem --> /usr/share/ca-certificates/14288.pem (1338 bytes)
	I0501 02:53:33.368293    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /usr/share/ca-certificates/142882.pem (1708 bytes)
	I0501 02:53:33.420965    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0501 02:53:33.458001    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0501 02:53:33.499072    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0501 02:53:33.534603    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0501 02:53:33.570373    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0501 02:53:33.602430    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0501 02:53:33.635495    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0501 02:53:33.684802    4712 ssh_runner.go:195] Run: openssl version
	I0501 02:53:33.709070    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 02:53:33.743711    4712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:53:33.750970    4712 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:12 /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:53:33.765746    4712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:53:33.787709    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 02:53:33.828429    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14288.pem && ln -fs /usr/share/ca-certificates/14288.pem /etc/ssl/certs/14288.pem"
	I0501 02:53:33.866546    4712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14288.pem
	I0501 02:53:33.874255    4712 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:27 /usr/share/ca-certificates/14288.pem
	I0501 02:53:33.888580    4712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14288.pem
	I0501 02:53:33.910501    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14288.pem /etc/ssl/certs/51391683.0"
	I0501 02:53:33.948720    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142882.pem && ln -fs /usr/share/ca-certificates/142882.pem /etc/ssl/certs/142882.pem"
	I0501 02:53:33.993042    4712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142882.pem
	I0501 02:53:34.001989    4712 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:27 /usr/share/ca-certificates/142882.pem
	I0501 02:53:34.015762    4712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142882.pem
	I0501 02:53:34.040058    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142882.pem /etc/ssl/certs/3ec20f2e.0"
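The block above installs each PEM into the guest's OpenSSL trust store: copy it under /usr/share/ca-certificates, then symlink /etc/ssl/certs/<subject-hash>.0 at it, where the hash comes from `openssl x509 -hash -noout` (b5213941.0, 51391683.0, 3ec20f2e.0 in the log). A sketch of deriving those commands, shelling out to openssl as the log does:

package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"strings"
)

// trustStoreCmds returns the symlink commands for one CA PEM, using the
// subject hash openssl computes for the <hash>.0-style filename.
func trustStoreCmds(pemPath string) ([]string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return nil, err
	}
	hash := strings.TrimSpace(string(out))
	name := filepath.Base(pemPath)
	return []string{
		fmt.Sprintf("sudo ln -fs %s /etc/ssl/certs/%s", pemPath, name),
		fmt.Sprintf("sudo ln -fs /etc/ssl/certs/%s /etc/ssl/certs/%s.0", name, hash),
	}, nil
}

func main() {
	cmds, err := trustStoreCmds("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		panic(err)
	}
	for _, c := range cmds {
		fmt.Println(c)
	}
}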
	I0501 02:53:34.077501    4712 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 02:53:34.086036    4712 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0501 02:53:34.086573    4712 kubeadm.go:928] updating node {m02 172.28.213.142 8443 v1.30.0 docker true true} ...
	I0501 02:53:34.086726    4712 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-136200-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.213.142
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP:172.28.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 02:53:34.086726    4712 kube-vip.go:111] generating kube-vip config ...
	I0501 02:53:34.101653    4712 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0501 02:53:34.130866    4712 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0501 02:53:34.131029    4712 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.28.223.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
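The manifest printed above is rendered by kube-vip.go from the profile's VIP and API server port, with lb_enable switched on by the "auto-enabling control-plane load-balancing" step. A reduced text/template sketch of that rendering (the template struct's field names are assumptions for illustration, and most env entries are omitted):

package main

import (
	"os"
	"text/template"
)

// A trimmed-down manifest; the full one in the log also carries the
// ARP, DNS-mode, and leader-election settings.
const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args: ["manager"]
    env:
    - {name: port, value: "{{.Port}}"}
    - {name: address, value: "{{.VIP}}"}
    - {name: lb_enable, value: "{{.LBEnable}}"}
    image: ghcr.io/kube-vip/kube-vip:v0.7.1
    name: kube-vip
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(manifest))
	if err := t.Execute(os.Stdout, struct {
		Port, VIP string
		LBEnable  bool
	}{Port: "8443", VIP: "172.28.223.254", LBEnable: true}); err != nil {
		panic(err)
	}
}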
	I0501 02:53:34.145238    4712 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 02:53:34.165400    4712 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0501 02:53:34.180369    4712 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0501 02:53:34.204849    4712 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet
	I0501 02:53:34.204849    4712 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm
	I0501 02:53:34.204849    4712 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl
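Each download URL above carries a checksum=file:... parameter, so the fetched binary is verified against the .sha256 digest Kubernetes publishes alongside it before being cached and scp'd into the VM. A sketch of that verify-then-write flow using only the standard library:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl"
	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sum, err := fetch(base + ".sha256") // published digest, hex text
	if err != nil {
		panic(err)
	}
	got := sha256.Sum256(bin)
	if hex.EncodeToString(got[:]) != strings.TrimSpace(string(sum)) {
		panic("checksum mismatch")
	}
	fmt.Println("verified, writing kubectl")
	if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
		panic(err)
	}
}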
	I0501 02:53:35.468257    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0501 02:53:35.481254    4712 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0501 02:53:35.488247    4712 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0501 02:53:35.489247    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0501 02:53:35.546630    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0501 02:53:35.559624    4712 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0501 02:53:35.626048    4712 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0501 02:53:35.627145    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0501 02:53:36.028150    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:53:36.077312    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0501 02:53:36.090870    4712 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0501 02:53:36.109939    4712 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0501 02:53:36.111871    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
	I0501 02:53:36.821139    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0501 02:53:36.843821    4712 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0501 02:53:36.878070    4712 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 02:53:36.917707    4712 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0501 02:53:36.971960    4712 ssh_runner.go:195] Run: grep 172.28.223.254	control-plane.minikube.internal$ /etc/hosts
	I0501 02:53:36.979482    4712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.223.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 02:53:37.020702    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:53:37.250249    4712 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 02:53:37.282989    4712 host.go:66] Checking if "ha-136200" exists ...
	I0501 02:53:37.299000    4712 start.go:316] joinCluster: &{Name:ha-136200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP:172.28.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.217.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.213.142 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:53:37.299000    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0501 02:53:37.299000    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:53:39.432833    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:53:39.432833    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:39.433070    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:53:42.011853    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:53:42.011853    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:42.012855    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:53:42.240815    4712 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0": (4.9416996s)
	I0501 02:53:42.240889    4712 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.28.213.142 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0501 02:53:42.240889    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ig07su.dw1rkx9dngecbwrb --discovery-token-ca-cert-hash sha256:4798dcaffc6298c78af5ad06745de18c1231853c65c9db4cd09ab1be96f5e875 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-136200-m02 --control-plane --apiserver-advertise-address=172.28.213.142 --apiserver-bind-port=8443"
	I0501 02:54:27.807891    4712 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ig07su.dw1rkx9dngecbwrb --discovery-token-ca-cert-hash sha256:4798dcaffc6298c78af5ad06745de18c1231853c65c9db4cd09ab1be96f5e875 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-136200-m02 --control-plane --apiserver-advertise-address=172.28.213.142 --apiserver-bind-port=8443": (45.5666728s)
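The join above is the usual two-step HA sequence: mint a fresh join command on the existing control plane (kubeadm token create --print-join-command --ttl=0), then replay it on m02 with the control-plane flags appended. A sketch of composing that final command line (the token and CA hash are placeholders here; the real values come from the first step):

package main

import "fmt"

// controlPlaneJoin appends the flags the log shows to the base command that
// `kubeadm token create --print-join-command` returns.
func controlPlaneJoin(base, nodeName, advertiseIP string) string {
	return fmt.Sprintf("%s --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock "+
		"--node-name=%s --control-plane --apiserver-advertise-address=%s --apiserver-bind-port=8443",
		base, nodeName, advertiseIP)
}

func main() {
	base := "kubeadm join control-plane.minikube.internal:8443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>"
	fmt.Println(controlPlaneJoin(base, "ha-136200-m02", "172.28.213.142"))
}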
	I0501 02:54:27.808014    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0501 02:54:28.660694    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-136200-m02 minikube.k8s.io/updated_at=2024_05_01T02_54_28_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e minikube.k8s.io/name=ha-136200 minikube.k8s.io/primary=false
	I0501 02:54:28.861404    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-136200-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0501 02:54:29.035785    4712 start.go:318] duration metric: took 51.7364106s to joinCluster
	I0501 02:54:29.035979    4712 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.28.213.142 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0501 02:54:29.038999    4712 out.go:177] * Verifying Kubernetes components...
	I0501 02:54:29.036818    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:54:29.055991    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:54:29.482004    4712 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 02:54:29.532870    4712 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0501 02:54:29.534181    4712 kapi.go:59] client config for ha-136200: &rest.Config{Host:"https://172.28.223.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-136200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-136200\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1b95ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0501 02:54:29.534386    4712 kubeadm.go:477] Overriding stale ClientConfig host https://172.28.223.254:8443 with https://172.28.217.218:8443
	I0501 02:54:29.535958    4712 node_ready.go:35] waiting up to 6m0s for node "ha-136200-m02" to be "Ready" ...
	I0501 02:54:29.536236    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:29.536236    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:29.536236    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:29.536353    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:29.609745    4712 round_trippers.go:574] Response Status: 200 OK in 73 milliseconds
	I0501 02:54:30.045557    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:30.045557    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:30.045557    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:30.045557    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:30.051535    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:30.542020    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:30.542083    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:30.542148    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:30.542148    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:30.549047    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:31.050630    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:31.050694    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:31.050694    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:31.050694    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:31.063209    4712 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0501 02:54:31.542025    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:31.542098    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:31.542098    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:31.542098    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:31.548667    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:31.549663    4712 node_ready.go:53] node "ha-136200-m02" has status "Ready":"False"
	I0501 02:54:32.050097    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:32.050097    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:32.050174    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:32.050174    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:32.054568    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:32.542017    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:32.542017    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:32.542017    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:32.542017    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:32.546488    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:33.050866    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:33.050866    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:33.050866    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:33.050866    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:33.056451    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:33.538033    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:33.538033    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:33.538033    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:33.538033    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:33.713541    4712 round_trippers.go:574] Response Status: 200 OK in 175 milliseconds
	I0501 02:54:33.714615    4712 node_ready.go:53] node "ha-136200-m02" has status "Ready":"False"
	I0501 02:54:34.041226    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:34.041226    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:34.041226    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:34.041226    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:34.047903    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:34.547454    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:34.547454    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:34.547757    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:34.547757    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:34.552099    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:35.046877    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:35.046877    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.046877    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.046877    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.052278    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:35.548463    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:35.548463    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.548740    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.548740    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.558660    4712 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0501 02:54:35.560213    4712 node_ready.go:49] node "ha-136200-m02" has status "Ready":"True"
	I0501 02:54:35.560213    4712 node_ready.go:38] duration metric: took 6.0241453s for node "ha-136200-m02" to be "Ready" ...
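The round_trippers traffic above is a simple poll: GET the node object every ~500ms and inspect its Ready condition until it flips to "True" or the 6m0s budget runs out. A stripped-down sketch of that loop against the raw REST endpoint (client certificates and TLS verification are elided, so this is illustrative only; the real client authenticates with the profile's client.crt):

package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

type node struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

func nodeReady(c *http.Client, url string) (bool, error) {
	resp, err := c.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	var n node
	if err := json.NewDecoder(resp.Body).Decode(&n); err != nil {
		return false, err
	}
	for _, cond := range n.Status.Conditions {
		if cond.Type == "Ready" {
			return cond.Status == "True", nil
		}
	}
	return false, nil
}

func main() {
	c := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}}
	url := "https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02"
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		if ok, err := nodeReady(c, url); err == nil && ok {
			fmt.Println("node Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for Ready")
}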
	I0501 02:54:35.560332    4712 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 02:54:35.560422    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:54:35.560422    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.560422    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.560422    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.572050    4712 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0501 02:54:35.581777    4712 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2j8mj" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:35.581924    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2j8mj
	I0501 02:54:35.581924    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.581924    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.581924    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.585770    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:54:35.587608    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:54:35.587685    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.587685    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.587685    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.591867    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:35.591867    4712 pod_ready.go:92] pod "coredns-7db6d8ff4d-2j8mj" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:35.591867    4712 pod_ready.go:81] duration metric: took 10.0903ms for pod "coredns-7db6d8ff4d-2j8mj" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:35.591867    4712 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rm4gm" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:35.591867    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rm4gm
	I0501 02:54:35.591867    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.591867    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.591867    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.596249    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:35.597880    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:54:35.597964    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.597964    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.597964    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.602327    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:35.603007    4712 pod_ready.go:92] pod "coredns-7db6d8ff4d-rm4gm" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:35.603007    4712 pod_ready.go:81] duration metric: took 11.1397ms for pod "coredns-7db6d8ff4d-rm4gm" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:35.603007    4712 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:35.604166    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200
	I0501 02:54:35.604211    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.604211    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.604211    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.610508    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:35.611831    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:54:35.611831    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.611831    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.611831    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.627921    4712 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0501 02:54:35.629498    4712 pod_ready.go:92] pod "etcd-ha-136200" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:35.629498    4712 pod_ready.go:81] duration metric: took 26.4906ms for pod "etcd-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:35.629498    4712 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:35.629498    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:35.629498    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.629498    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.629498    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.638393    4712 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0501 02:54:35.638911    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:35.638911    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.638911    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.639550    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.643473    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:54:36.140037    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:36.140167    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:36.140167    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:36.140167    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:36.148123    4712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:54:36.149580    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:36.149580    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:36.149659    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:36.149659    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:36.155530    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:36.644340    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:36.644340    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:36.644340    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:36.644340    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:36.651321    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:36.652588    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:36.653128    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:36.653128    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:36.653128    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:36.660377    4712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:54:37.144534    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:37.144656    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:37.144656    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:37.144656    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:37.150598    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:37.152092    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:37.152665    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:37.152665    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:37.152665    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:37.160441    4712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:54:37.644104    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:37.644239    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:37.644239    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:37.644239    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:37.649836    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:37.650604    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:37.650671    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:37.650671    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:37.650671    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:37.654947    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:37.656164    4712 pod_ready.go:102] pod "etcd-ha-136200-m02" in "kube-system" namespace has status "Ready":"False"
	I0501 02:54:38.142505    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:38.142505    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:38.142505    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:38.142505    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:38.149100    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:38.151258    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:38.151347    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:38.151347    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:38.151347    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:38.155677    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:38.643186    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:38.643241    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:38.643241    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:38.643241    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:38.650578    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:38.651873    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:38.651902    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:38.651902    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:38.651902    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:38.655946    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:39.142681    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:39.142681    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:39.142681    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:39.142681    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:39.148315    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:39.149953    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:39.150203    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:39.150203    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:39.150203    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:39.154771    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:39.643364    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:39.643364    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:39.643364    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:39.643364    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:39.649703    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:39.650947    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:39.650947    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:39.651009    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:39.651009    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:39.654949    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:54:39.656190    4712 pod_ready.go:102] pod "etcd-ha-136200-m02" in "kube-system" namespace has status "Ready":"False"
	I0501 02:54:40.142428    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:40.142428    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:40.142676    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:40.142676    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:40.148562    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:40.149792    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:40.149792    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:40.149792    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:40.149792    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:40.154808    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:40.644095    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:40.644095    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:40.644095    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:40.644095    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:40.650441    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:40.651544    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:40.651598    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:40.651598    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:40.651598    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:40.662172    4712 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0501 02:54:41.143094    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:41.143187    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:41.143187    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:41.143187    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:41.148870    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:41.150018    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:41.150018    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:41.150018    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:41.150018    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:41.156810    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:41.640508    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:41.640624    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:41.640624    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:41.640624    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:41.646018    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:41.646730    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:41.647318    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:41.647318    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:41.647318    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:41.652880    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:42.139900    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:42.139985    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:42.139985    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:42.139985    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:42.145577    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:42.146383    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:42.146383    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:42.146448    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:42.146448    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:42.151141    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:54:42.151862    4712 pod_ready.go:102] pod "etcd-ha-136200-m02" in "kube-system" namespace has status "Ready":"False"
	I0501 02:54:42.639271    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:42.639271    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:42.639271    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:42.639271    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:42.642318    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:54:42.646671    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:42.646671    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:42.646671    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:42.646671    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:42.651360    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:43.137151    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:43.137496    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.137496    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.137496    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.141750    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:43.142959    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:43.142959    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.142959    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.142959    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.147560    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:43.641950    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:43.641985    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.641985    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.641985    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.647599    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:43.649299    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:43.649350    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.649350    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.649350    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.657034    4712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:54:43.658043    4712 pod_ready.go:92] pod "etcd-ha-136200-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:43.658043    4712 pod_ready.go:81] duration metric: took 8.0284866s for pod "etcd-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
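	The exchange above is the pod_ready wait loop: roughly every 500ms it GETs the pod and its node and checks the pod's Ready condition until it flips True (here after 8.03s). A minimal Go sketch of the same pattern with client-go follows; the helper name waitPodReady is hypothetical, not minikube's actual function.

	// waitPodReady polls a pod's Ready condition on a fixed cadence,
	// mirroring the ~500ms GET loop visible in the timestamps above.
	package readycheck

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil // pod reports Ready, as at 02:54:43.658 above
					}
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
	}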
	I0501 02:54:43.658043    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:43.658043    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-136200
	I0501 02:54:43.658043    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.658043    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.658043    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.664394    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:43.664394    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:54:43.664394    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.664394    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.664394    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.668848    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:43.669848    4712 pod_ready.go:92] pod "kube-apiserver-ha-136200" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:43.669848    4712 pod_ready.go:81] duration metric: took 11.805ms for pod "kube-apiserver-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:43.669848    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:43.669848    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-136200-m02
	I0501 02:54:43.669848    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.669848    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.670830    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.674754    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:54:43.676699    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:43.676699    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.676699    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.676699    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.681632    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:43.683231    4712 pod_ready.go:92] pod "kube-apiserver-ha-136200-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:43.683231    4712 pod_ready.go:81] duration metric: took 13.3825ms for pod "kube-apiserver-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:43.683231    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:43.683412    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-136200
	I0501 02:54:43.683412    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.683412    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.683412    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.688589    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:43.690255    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:54:43.690255    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.690325    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.690325    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.695853    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:43.696818    4712 pod_ready.go:92] pod "kube-controller-manager-ha-136200" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:43.696860    4712 pod_ready.go:81] duration metric: took 13.6296ms for pod "kube-controller-manager-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:43.696912    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:43.696993    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-136200-m02
	I0501 02:54:43.697029    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.697029    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.697029    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.701912    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:43.703032    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:43.703736    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.703736    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.703736    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.706383    4712 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:54:43.707734    4712 pod_ready.go:92] pod "kube-controller-manager-ha-136200-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:43.707824    4712 pod_ready.go:81] duration metric: took 10.9115ms for pod "kube-controller-manager-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:43.707824    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8f67k" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:43.845210    4712 request.go:629] Waited for 137.1807ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8f67k
	I0501 02:54:43.845681    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8f67k
	I0501 02:54:43.845681    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.845681    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.845681    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.851000    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:44.047046    4712 request.go:629] Waited for 194.7517ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:54:44.047200    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:54:44.047200    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:44.047200    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:44.047200    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:44.052548    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:44.053735    4712 pod_ready.go:92] pod "kube-proxy-8f67k" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:44.053735    4712 pod_ready.go:81] duration metric: took 345.9086ms for pod "kube-proxy-8f67k" in "kube-system" namespace to be "Ready" ...
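	The "Waited for ... due to client-side throttling" lines below and above come from client-go's token-bucket rate limiter: the default rest.Config allows 5 requests/s with a burst of 10, so back-to-back GETs queue for 100-200ms. A sketch of building a clientset with a larger bucket (the values are illustrative, not what minikube uses):

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// newClient raises the client-side rate limits above the client-go
	// defaults (QPS=5, Burst=10) that produce the waits in this log.
	func newClient(kubeconfig string) (*kubernetes.Clientset, error) {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return nil, err
		}
		cfg.QPS = 50
		cfg.Burst = 100
		return kubernetes.NewForConfig(cfg)
	}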
	I0501 02:54:44.053735    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zj5jv" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:44.250128    4712 request.go:629] Waited for 196.1147ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zj5jv
	I0501 02:54:44.250128    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zj5jv
	I0501 02:54:44.250128    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:44.250128    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:44.250128    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:44.254761    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:44.456435    4712 request.go:629] Waited for 200.6839ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:44.456435    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:44.456435    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:44.456435    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:44.456435    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:44.461480    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:44.462518    4712 pod_ready.go:92] pod "kube-proxy-zj5jv" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:44.462578    4712 pod_ready.go:81] duration metric: took 408.7057ms for pod "kube-proxy-zj5jv" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:44.462578    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:44.648779    4712 request.go:629] Waited for 185.8104ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200
	I0501 02:54:44.648953    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200
	I0501 02:54:44.648953    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:44.648953    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:44.649128    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:44.654457    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:44.855621    4712 request.go:629] Waited for 199.4812ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:54:44.855706    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:54:44.855706    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:44.855706    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:44.855706    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:44.861147    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:44.861147    4712 pod_ready.go:92] pod "kube-scheduler-ha-136200" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:44.861699    4712 pod_ready.go:81] duration metric: took 399.1179ms for pod "kube-scheduler-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:44.861778    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:45.042766    4712 request.go:629] Waited for 180.9309ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200-m02
	I0501 02:54:45.042766    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200-m02
	I0501 02:54:45.042766    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:45.042766    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:45.042766    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:45.047379    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:45.244553    4712 request.go:629] Waited for 197.0101ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:45.244553    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:45.244553    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:45.244553    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:45.244553    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:45.250870    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:45.252485    4712 pod_ready.go:92] pod "kube-scheduler-ha-136200-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:45.252485    4712 pod_ready.go:81] duration metric: took 390.7033ms for pod "kube-scheduler-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:45.252547    4712 pod_ready.go:38] duration metric: took 9.6921442s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
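	The pod_ready:38 summary closes the phase: each label selector in that list is resolved to pods in kube-system, and every match must pass the Ready wait. A sketch in the same package as the waitPodReady snippet earlier (selector list taken from the log line above):

	func waitSystemPods(ctx context.Context, cs *kubernetes.Clientset) error {
		selectors := []string{
			"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
			"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
		}
		for _, sel := range selectors {
			pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
			if err != nil {
				return err
			}
			for _, p := range pods.Items {
				// reuse the polling helper from the earlier sketch
				if err := waitPodReady(ctx, cs, "kube-system", p.Name, 6*time.Minute); err != nil {
					return err
				}
			}
		}
		return nil
	}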
	I0501 02:54:45.252619    4712 api_server.go:52] waiting for apiserver process to appear ...
	I0501 02:54:45.266607    4712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:54:45.298538    4712 api_server.go:72] duration metric: took 16.2624407s to wait for apiserver process to appear ...
	I0501 02:54:45.298538    4712 api_server.go:88] waiting for apiserver healthz status ...
	I0501 02:54:45.298642    4712 api_server.go:253] Checking apiserver healthz at https://172.28.217.218:8443/healthz ...
	I0501 02:54:45.308804    4712 api_server.go:279] https://172.28.217.218:8443/healthz returned 200:
	ok
	I0501 02:54:45.308804    4712 round_trippers.go:463] GET https://172.28.217.218:8443/version
	I0501 02:54:45.308804    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:45.308804    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:45.308804    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:45.310764    4712 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0501 02:54:45.311165    4712 api_server.go:141] control plane version: v1.30.0
	I0501 02:54:45.311238    4712 api_server.go:131] duration metric: took 12.7003ms to wait for apiserver health ...
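	The healthz gate above is a plain HTTPS GET that must return 200 with body "ok", followed by a /version request to record the control-plane version. A minimal sketch; InsecureSkipVerify stands in for minikube's real cluster-CA handling and is an assumption here:

	import (
		"crypto/tls"
		"io"
		"net/http"
		"time"
	)

	// apiserverHealthy mirrors the /healthz probe at 02:54:45: healthy means
	// HTTP 200 with the literal body "ok".
	func apiserverHealthy(base string) bool {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(base + "/healthz")
		if err != nil {
			return false
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		return resp.StatusCode == http.StatusOK && string(body) == "ok"
	}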
	I0501 02:54:45.311238    4712 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 02:54:45.446869    4712 request.go:629] Waited for 135.3903ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:54:45.446869    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:54:45.446869    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:45.446869    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:45.446869    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:45.455463    4712 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0501 02:54:45.466055    4712 system_pods.go:59] 17 kube-system pods found
	I0501 02:54:45.466055    4712 system_pods.go:61] "coredns-7db6d8ff4d-2j8mj" [f945c979-ae51-4c8e-acf9-105adc3c83bc] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "coredns-7db6d8ff4d-rm4gm" [87b284b3-e8e1-452a-8c72-41a8bec62505] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "etcd-ha-136200" [509a726d-e9a1-4922-8e7e-f3d91ddef75f] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "etcd-ha-136200-m02" [8122eb28-1fdf-4ddf-ab30-c29e8bcb83c0] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kindnet-kb2x4" [6e660648-3dce-469f-a2c2-c99f445ceb20] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kindnet-sj2rc" [c0e605a0-1182-4977-a8ba-fabe9617bd3c] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-apiserver-ha-136200" [53ea7d41-7132-4c89-9dbd-bedb2267b55f] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-apiserver-ha-136200-m02" [fc4036e1-5cc9-4f27-8299-97ee4a29e8b4] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-controller-manager-ha-136200" [4c988ab2-e056-4a0e-88c9-b62839c84d9f] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-controller-manager-ha-136200-m02" [7a617a7e-7413-4f42-bfe2-763b7ace71ca] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-proxy-8f67k" [9dedea03-3066-4852-98e2-10190699b2c5] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-proxy-zj5jv" [1802b341-6ac6-46b0-99a3-db02ae5d8e46] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-scheduler-ha-136200" [6be37365-544a-4367-9852-6eaa5b60e6ad] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-scheduler-ha-136200-m02" [b2ae6bb2-989b-4598-99e3-f8494b006f3e] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-vip-ha-136200" [f6f631ac-0ba9-413a-8810-8a80e4be81b8] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-vip-ha-136200-m02" [598e76fa-0703-40eb-a62c-f3947f06d0e0] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "storage-provisioner" [ca2bdb41-e7f0-4aa4-b343-0e3a85a4c04e] Running
	I0501 02:54:45.466055    4712 system_pods.go:74] duration metric: took 154.8157ms to wait for pod list to return data ...
	I0501 02:54:45.466055    4712 default_sa.go:34] waiting for default service account to be created ...
	I0501 02:54:45.650374    4712 request.go:629] Waited for 183.8749ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/default/serviceaccounts
	I0501 02:54:45.650461    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/default/serviceaccounts
	I0501 02:54:45.650461    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:45.650566    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:45.650566    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:45.661544    4712 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0501 02:54:45.662734    4712 default_sa.go:45] found service account: "default"
	I0501 02:54:45.662869    4712 default_sa.go:55] duration metric: took 196.812ms for default service account to be created ...
	I0501 02:54:45.662869    4712 system_pods.go:116] waiting for k8s-apps to be running ...
	I0501 02:54:45.853192    4712 request.go:629] Waited for 189.9269ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:54:45.853192    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:54:45.853192    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:45.853419    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:45.853419    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:45.865601    4712 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0501 02:54:45.872777    4712 system_pods.go:86] 17 kube-system pods found
	I0501 02:54:45.872777    4712 system_pods.go:89] "coredns-7db6d8ff4d-2j8mj" [f945c979-ae51-4c8e-acf9-105adc3c83bc] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "coredns-7db6d8ff4d-rm4gm" [87b284b3-e8e1-452a-8c72-41a8bec62505] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "etcd-ha-136200" [509a726d-e9a1-4922-8e7e-f3d91ddef75f] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "etcd-ha-136200-m02" [8122eb28-1fdf-4ddf-ab30-c29e8bcb83c0] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kindnet-kb2x4" [6e660648-3dce-469f-a2c2-c99f445ceb20] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kindnet-sj2rc" [c0e605a0-1182-4977-a8ba-fabe9617bd3c] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kube-apiserver-ha-136200" [53ea7d41-7132-4c89-9dbd-bedb2267b55f] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kube-apiserver-ha-136200-m02" [fc4036e1-5cc9-4f27-8299-97ee4a29e8b4] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kube-controller-manager-ha-136200" [4c988ab2-e056-4a0e-88c9-b62839c84d9f] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kube-controller-manager-ha-136200-m02" [7a617a7e-7413-4f42-bfe2-763b7ace71ca] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kube-proxy-8f67k" [9dedea03-3066-4852-98e2-10190699b2c5] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kube-proxy-zj5jv" [1802b341-6ac6-46b0-99a3-db02ae5d8e46] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kube-scheduler-ha-136200" [6be37365-544a-4367-9852-6eaa5b60e6ad] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kube-scheduler-ha-136200-m02" [b2ae6bb2-989b-4598-99e3-f8494b006f3e] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kube-vip-ha-136200" [f6f631ac-0ba9-413a-8810-8a80e4be81b8] Running
	I0501 02:54:45.873359    4712 system_pods.go:89] "kube-vip-ha-136200-m02" [598e76fa-0703-40eb-a62c-f3947f06d0e0] Running
	I0501 02:54:45.873359    4712 system_pods.go:89] "storage-provisioner" [ca2bdb41-e7f0-4aa4-b343-0e3a85a4c04e] Running
	I0501 02:54:45.873383    4712 system_pods.go:126] duration metric: took 210.5126ms to wait for k8s-apps to be running ...
	I0501 02:54:45.873383    4712 system_svc.go:44] waiting for kubelet service to be running ....
	I0501 02:54:45.886040    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:54:45.914966    4712 system_svc.go:56] duration metric: took 41.5829ms WaitForService to wait for kubelet
	I0501 02:54:45.915054    4712 kubeadm.go:576] duration metric: took 16.8789526s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 02:54:45.915054    4712 node_conditions.go:102] verifying NodePressure condition ...
	I0501 02:54:46.043164    4712 request.go:629] Waited for 127.8974ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes
	I0501 02:54:46.043164    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes
	I0501 02:54:46.043164    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:46.043164    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:46.043310    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:46.050320    4712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:54:46.051501    4712 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 02:54:46.051501    4712 node_conditions.go:123] node cpu capacity is 2
	I0501 02:54:46.051501    4712 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 02:54:46.051501    4712 node_conditions.go:123] node cpu capacity is 2
	I0501 02:54:46.051501    4712 node_conditions.go:105] duration metric: took 136.4457ms to run NodePressure ...
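	The NodePressure step lists all nodes once and reads each node's capacity fields, which is where the two ephemeral-storage/cpu pairs above (one per control-plane node) come from. A sketch, again in the readycheck package from earlier:

	func printNodeCapacity(ctx context.Context, cs *kubernetes.Clientset) error {
		nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]                  // "2" in the log
			eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]     // "17734596Ki"
			fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
		}
		return nil
	}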
	I0501 02:54:46.051501    4712 start.go:240] waiting for startup goroutines ...
	I0501 02:54:46.051501    4712 start.go:254] writing updated cluster config ...
	I0501 02:54:46.055981    4712 out.go:177] 
	I0501 02:54:46.073210    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:54:46.073681    4712 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json ...
	I0501 02:54:46.079155    4712 out.go:177] * Starting "ha-136200-m03" control-plane node in "ha-136200" cluster
	I0501 02:54:46.082550    4712 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0501 02:54:46.082550    4712 cache.go:56] Caching tarball of preloaded images
	I0501 02:54:46.083028    4712 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0501 02:54:46.083223    4712 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0501 02:54:46.083384    4712 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json ...
	I0501 02:54:46.091748    4712 start.go:360] acquireMachinesLock for ha-136200-m03: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 02:54:46.091748    4712 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-136200-m03"
	I0501 02:54:46.091748    4712 start.go:93] Provisioning new machine with config: &{Name:ha-136200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP:172.28.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.217.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.213.142 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0501 02:54:46.091748    4712 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0501 02:54:46.099863    4712 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0501 02:54:46.100178    4712 start.go:159] libmachine.API.Create for "ha-136200" (driver="hyperv")
	I0501 02:54:46.100178    4712 client.go:168] LocalClient.Create starting
	I0501 02:54:46.100178    4712 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0501 02:54:46.100824    4712 main.go:141] libmachine: Decoding PEM data...
	I0501 02:54:46.100824    4712 main.go:141] libmachine: Parsing certificate...
	I0501 02:54:46.101128    4712 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0501 02:54:46.101380    4712 main.go:141] libmachine: Decoding PEM data...
	I0501 02:54:46.101380    4712 main.go:141] libmachine: Parsing certificate...
	I0501 02:54:46.101380    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0501 02:54:48.122930    4712 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0501 02:54:48.122930    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:54:48.122930    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0501 02:54:49.970242    4712 main.go:141] libmachine: [stdout =====>] : False
	
	I0501 02:54:49.971128    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:54:49.971128    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0501 02:54:51.553112    4712 main.go:141] libmachine: [stdout =====>] : True
	
	I0501 02:54:51.553112    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:54:51.553966    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0501 02:54:55.355693    4712 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0501 02:54:55.355693    4712 main.go:141] libmachine: [stderr =====>] : 
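	Each "[executing ==>]" / "[stdout =====>]" / "[stderr =====>]" triple above is one powershell.exe invocation: the hyperv driver shells out with -NoProfile -NonInteractive and logs both captured streams. A minimal sketch of that pattern; the helper name runPowershell is hypothetical:

	import (
		"bytes"
		"os/exec"
	)

	// runPowershell executes one script via powershell.exe and returns the
	// captured stdout and stderr, matching the stream pairs logged above.
	func runPowershell(script string) (stdout, stderr string, err error) {
		cmd := exec.Command(
			`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
			"-NoProfile", "-NonInteractive", script)
		var out, errs bytes.Buffer
		cmd.Stdout, cmd.Stderr = &out, &errs
		err = cmd.Run()
		return out.String(), errs.String(), err
	}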
	I0501 02:54:55.358013    4712 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso...
	I0501 02:54:55.879042    4712 main.go:141] libmachine: Creating SSH key...
	I0501 02:54:55.991258    4712 main.go:141] libmachine: Creating VM...
	I0501 02:54:55.991258    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0501 02:54:58.933270    4712 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0501 02:54:58.933270    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:54:58.933270    4712 main.go:141] libmachine: Using switch "Default Switch"
	I0501 02:54:58.933728    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0501 02:55:00.789675    4712 main.go:141] libmachine: [stdout =====>] : True
	
	I0501 02:55:00.789938    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:00.789938    4712 main.go:141] libmachine: Creating VHD
	I0501 02:55:00.789938    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0501 02:55:04.583967    4712 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : AAB86B48-3D75-4842-8FF8-3BDEC4AB86C2
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0501 02:55:04.584134    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:04.584192    4712 main.go:141] libmachine: Writing magic tar header
	I0501 02:55:04.584192    4712 main.go:141] libmachine: Writing SSH key tar header
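	The two "Writing ... tar header" steps above embed a magic marker and a tiny tar archive holding the generated SSH key at the start of the raw VHD; on first boot, boot2docker detects the marker, formats the disk, and installs the key. A sketch following the docker-machine convention; the exact marker text and file layout are assumptions here:

	import (
		"archive/tar"
		"os"
	)

	// writeKeyToDisk writes the marker and a one-entry tar archive with the
	// SSH public key to the beginning of the (still fixed-size) VHD.
	func writeKeyToDisk(vhdPath string, pubKey []byte) error {
		f, err := os.OpenFile(vhdPath, os.O_WRONLY, 0644)
		if err != nil {
			return err
		}
		defer f.Close()
		// magic marker boot2docker scans for ("please format-me" convention)
		if _, err := f.Write([]byte("boot2docker, please format-me")); err != nil {
			return err
		}
		tw := tar.NewWriter(f)
		defer tw.Close()
		hdr := &tar.Header{Name: ".ssh/authorized_keys", Mode: 0644, Size: int64(len(pubKey))}
		if err := tw.WriteHeader(hdr); err != nil {
			return err
		}
		_, err = tw.Write(pubKey)
		return err
	}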
	I0501 02:55:04.594277    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0501 02:55:07.812902    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:07.812902    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:07.812902    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\disk.vhd' -SizeBytes 20000MB
	I0501 02:55:10.391210    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:10.391245    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:10.391352    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-136200-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0501 02:55:14.151278    4712 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-136200-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0501 02:55:14.151278    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:14.151882    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-136200-m03 -DynamicMemoryEnabled $false
	I0501 02:55:16.476957    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:16.476957    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:16.478022    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-136200-m03 -Count 2
	I0501 02:55:18.717259    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:18.717259    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:18.717850    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-136200-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\boot2docker.iso'
	I0501 02:55:21.310252    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:21.310252    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:21.310252    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-136200-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\disk.vhd'
	I0501 02:55:24.025209    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:24.025209    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:24.025533    4712 main.go:141] libmachine: Starting VM...
	I0501 02:55:24.025533    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-136200-m03
	I0501 02:55:27.131510    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:27.131510    4712 main.go:141] libmachine: [stderr =====>] : 
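	With the disk prepared, the VM itself is assembled from the fixed series of Hyper-V cmdlets logged above (New-VM through Start-VM). Condensed as data and reusing runPowershell from the earlier sketch; the machineDir parameter stands in for the machine directory shown in the log and is illustrative:

	func createVM(machineDir string) error {
		// one entry per "[executing ==>]" line in the sequence above
		cmds := []string{
			"Hyper-V\\New-VM ha-136200-m03 -Path '" + machineDir + "' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB",
			"Hyper-V\\Set-VMMemory -VMName ha-136200-m03 -DynamicMemoryEnabled $false",
			"Hyper-V\\Set-VMProcessor ha-136200-m03 -Count 2",
			"Hyper-V\\Set-VMDvdDrive -VMName ha-136200-m03 -Path '" + machineDir + "\\boot2docker.iso'",
			"Hyper-V\\Add-VMHardDiskDrive -VMName ha-136200-m03 -Path '" + machineDir + "\\disk.vhd'",
			"Hyper-V\\Start-VM ha-136200-m03",
		}
		for _, c := range cmds {
			if _, stderr, err := runPowershell(c); err != nil {
				return fmt.Errorf("%s: %v (%s)", c, err, stderr)
			}
		}
		return nil
	}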
	I0501 02:55:27.131722    4712 main.go:141] libmachine: Waiting for host to start...
	I0501 02:55:27.131722    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:55:29.452098    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:55:29.453035    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:29.453089    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:55:32.025441    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:32.026234    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:33.036612    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:55:35.273538    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:55:35.273538    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:35.273538    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:55:37.849230    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:37.849355    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:38.854379    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:55:41.083466    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:55:41.083466    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:41.083466    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:55:43.607622    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:43.607622    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:44.621333    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:55:46.858272    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:55:46.858272    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:46.858272    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:55:49.475402    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:49.476316    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:50.480573    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:55:52.723494    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:55:52.723494    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:52.724713    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:55:55.378897    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:55:55.378897    4712 main.go:141] libmachine: [stderr =====>] : 
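	The alternating state/ipaddresses queries above are a one-second poll: the first NIC reports no address until the guest's DHCP lease lands (here 172.28.216.62 after roughly 28s). A sketch, again reusing the hypothetical runPowershell helper:

	import (
		"fmt"
		"strings"
		"time"
	)

	// waitForIP polls the VM's first network adapter for an address,
	// sleeping 1s between attempts like the query pairs logged above.
	func waitForIP(vmName string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, _, err := runPowershell(
				"(( Hyper-V\\Get-VM " + vmName + " ).networkadapters[0]).ipaddresses[0]")
			if err == nil {
				if ip := strings.TrimSpace(out); ip != "" {
					return ip, nil
				}
			}
			time.Sleep(time.Second)
		}
		return "", fmt.Errorf("no IP reported for %s within %v", vmName, timeout)
	}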
	I0501 02:55:55.379189    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:55:57.536029    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:55:57.536029    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:57.536246    4712 machine.go:94] provisionDockerMachine start ...
	I0501 02:55:57.536246    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:55:59.681292    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:55:59.681842    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:59.682021    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:02.296390    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:02.296390    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:02.302435    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:56:02.303031    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.216.62 22 <nil> <nil>}
	I0501 02:56:02.303031    4712 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 02:56:02.440858    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0501 02:56:02.440919    4712 buildroot.go:166] provisioning hostname "ha-136200-m03"
	I0501 02:56:02.440919    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:04.540210    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:04.540210    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:04.541126    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:07.111624    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:07.111624    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:07.118513    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:56:07.119097    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.216.62 22 <nil> <nil>}
	I0501 02:56:07.119097    4712 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-136200-m03 && echo "ha-136200-m03" | sudo tee /etc/hostname
	I0501 02:56:07.274395    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-136200-m03
	
	I0501 02:56:07.274395    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:09.427222    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:09.427413    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:09.427413    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:12.066151    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:12.066558    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:12.072701    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:56:12.073263    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.216.62 22 <nil> <nil>}
	I0501 02:56:12.073263    4712 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-136200-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-136200-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-136200-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 02:56:12.226572    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: 
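	Each "About to run SSH command" block above runs over a native Go SSH client keyed by the id_rsa generated earlier; the &{{{<nil> ...}} lines are dumps of that client's config struct. A minimal sketch with golang.org/x/crypto/ssh, with host-key checking skipped to match the <nil> callback in those dumps:

	import (
		"os"

		"golang.org/x/crypto/ssh"
	)

	// runSSH dials the VM and runs one command, returning combined output,
	// like the hostname and /etc/hosts steps above.
	func runSSH(addr, user, keyPath, command string) (string, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return "", err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return "", err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches the nil host-key field in the dump
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return "", err
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer sess.Close()
		out, err := sess.CombinedOutput(command)
		return string(out), err
	}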
	I0501 02:56:12.226572    4712 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0501 02:56:12.226572    4712 buildroot.go:174] setting up certificates
	I0501 02:56:12.226572    4712 provision.go:84] configureAuth start
	I0501 02:56:12.226572    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:14.383697    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:14.383832    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:14.383916    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:17.017056    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:17.017236    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:17.017236    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:19.246383    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:19.247269    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:19.247269    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:21.887343    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:21.887343    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:21.887343    4712 provision.go:143] copyHostCerts
	I0501 02:56:21.887688    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0501 02:56:21.887688    4712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0501 02:56:21.887688    4712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0501 02:56:21.888470    4712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0501 02:56:21.889606    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0501 02:56:21.890069    4712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0501 02:56:21.890132    4712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0501 02:56:21.890553    4712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0501 02:56:21.891611    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0501 02:56:21.891800    4712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0501 02:56:21.891800    4712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0501 02:56:21.892337    4712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
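The copyHostCerts step above follows a remove-then-copy pattern for each cert (found -> rm -> cp). A minimal Go sketch of that idempotent install; the local file names are hypothetical stand-ins for the .minikube paths in the log:

// Sketch of the copyHostCerts pattern: delete any existing target, then
// copy the source into place. File names are hypothetical stand-ins.
package main

import (
	"fmt"
	"io"
	"os"
)

func installFile(src, dst string) (int64, error) {
	_ = os.Remove(dst) // the "found ..., removing ..." step; ignore if absent
	in, err := os.Open(src)
	if err != nil {
		return 0, err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0o600)
	if err != nil {
		return 0, err
	}
	defer out.Close()
	return io.Copy(out, in)
}

func main() {
	n, err := installFile("certs/ca.pem", "ca.pem")
	fmt.Println("copied", n, "bytes, err:", err)
}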
	I0501 02:56:21.893162    4712 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-136200-m03 san=[127.0.0.1 172.28.216.62 ha-136200-m03 localhost minikube]
	I0501 02:56:21.973101    4712 provision.go:177] copyRemoteCerts
	I0501 02:56:21.993116    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 02:56:21.993116    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:24.169668    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:24.169668    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:24.170031    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:26.830749    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:26.831099    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:26.831162    4712 sshutil.go:53] new ssh client: &{IP:172.28.216.62 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\id_rsa Username:docker}
	I0501 02:56:26.935784    4712 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9426327s)
	I0501 02:56:26.935784    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0501 02:56:26.936266    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 02:56:26.985792    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0501 02:56:26.986191    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0501 02:56:27.035460    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0501 02:56:27.036450    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0501 02:56:27.092775    4712 provision.go:87] duration metric: took 14.8660953s to configureAuth
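The configureAuth phase that just finished signs a Docker server certificate against minikube's CA with the SANs listed in the provision line above. A hedged stdlib sketch of that signing step; the in-memory throwaway CA is an assumption standing in for the real ca.pem/ca-key.pem pair:

// Hedged sketch of the server-cert signing in configureAuth; the throwaway
// CA below is an assumption replacing the real ca.pem/ca-key.pem files.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA so the sketch is self-contained (errors elided for brevity).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert carrying the SANs from the provision line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-136200-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.28.216.62")},
		DNSNames:     []string{"ha-136200-m03", "localhost", "minikube"},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}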
	I0501 02:56:27.092775    4712 buildroot.go:189] setting minikube options for container-runtime
	I0501 02:56:27.093873    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:56:27.094011    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:29.214442    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:29.214910    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:29.214910    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:31.848020    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:31.848124    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:31.859047    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:56:31.859047    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.216.62 22 <nil> <nil>}
	I0501 02:56:31.859047    4712 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0501 02:56:31.983811    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0501 02:56:31.983936    4712 buildroot.go:70] root file system type: tmpfs
	I0501 02:56:31.984160    4712 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0501 02:56:31.984160    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:34.146679    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:34.146679    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:34.146837    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:36.793925    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:36.794747    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:36.801153    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:56:36.801782    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.216.62 22 <nil> <nil>}
	I0501 02:56:36.801782    4712 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.28.217.218"
	Environment="NO_PROXY=172.28.217.218,172.28.213.142"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0501 02:56:36.960579    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.28.217.218
	Environment=NO_PROXY=172.28.217.218,172.28.213.142
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0501 02:56:36.960579    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:39.141157    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:39.141157    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:39.141800    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:41.765565    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:41.766216    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:41.774239    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:56:41.774411    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.216.62 22 <nil> <nil>}
	I0501 02:56:41.774411    4712 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0501 02:56:43.994230    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0501 02:56:43.994313    4712 machine.go:97] duration metric: took 46.4577313s to provisionDockerMachine
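The SSH command above is a swap-if-changed idiom: diff the freshly rendered docker.service.new against the installed unit, and only move it into place and restart docker when they differ (here the unit did not exist yet, hence the stat error and the symlink creation). A small sketch of driving that same one-liner; runSSH and the docker@172.28.216.62 target are stand-ins for minikube's ssh_runner:

// Hypothetical driver for the swap-if-changed unit install seen above.
package main

import (
	"fmt"
	"os/exec"
)

// runSSH is a stand-in for minikube's ssh_runner; host/user are assumptions.
func runSSH(cmd string) error {
	return exec.Command("ssh", "docker@172.28.216.62", cmd).Run()
}

func main() {
	// `diff -u` exits non-zero when the rendered unit differs (or does not
	// exist yet, as in the log), so the || branch installs it and restarts
	// docker; when nothing changed, the whole command is a no-op.
	err := runSSH(`sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }`)
	fmt.Println("unit refresh:", err)
}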
	I0501 02:56:43.994313    4712 client.go:171] duration metric: took 1m57.8932783s to LocalClient.Create
	I0501 02:56:43.994313    4712 start.go:167] duration metric: took 1m57.8932783s to libmachine.API.Create "ha-136200"
	I0501 02:56:43.994428    4712 start.go:293] postStartSetup for "ha-136200-m03" (driver="hyperv")
	I0501 02:56:43.994473    4712 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 02:56:44.010383    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 02:56:44.010383    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:46.225048    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:46.225772    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:46.225844    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:48.918999    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:48.918999    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:48.919679    4712 sshutil.go:53] new ssh client: &{IP:172.28.216.62 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\id_rsa Username:docker}
	I0501 02:56:49.032380    4712 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0219067s)
	I0501 02:56:49.045700    4712 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 02:56:49.054180    4712 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 02:56:49.054180    4712 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0501 02:56:49.054700    4712 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0501 02:56:49.055002    4712 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> 142882.pem in /etc/ssl/certs
	I0501 02:56:49.055574    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /etc/ssl/certs/142882.pem
	I0501 02:56:49.071048    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 02:56:49.092423    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /etc/ssl/certs/142882.pem (1708 bytes)
	I0501 02:56:49.143151    4712 start.go:296] duration metric: took 5.1486851s for postStartSetup
	I0501 02:56:49.146034    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:51.349851    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:51.350067    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:51.350153    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:54.016657    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:54.017149    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:54.017380    4712 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json ...
	I0501 02:56:54.019460    4712 start.go:128] duration metric: took 2m7.9267809s to createHost
	I0501 02:56:54.019460    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:56.211168    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:56.211168    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:56.211168    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:58.811673    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:58.811673    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:58.818618    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:56:58.819348    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.216.62 22 <nil> <nil>}
	I0501 02:56:58.819348    4712 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0501 02:56:58.949732    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714532218.937413126
	
	I0501 02:56:58.949732    4712 fix.go:216] guest clock: 1714532218.937413126
	I0501 02:56:58.949732    4712 fix.go:229] Guest: 2024-05-01 02:56:58.937413126 +0000 UTC Remote: 2024-05-01 02:56:54.0194605 +0000 UTC m=+574.897601601 (delta=4.917952626s)
	I0501 02:56:58.949941    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:57:01.095786    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:57:01.095786    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:01.096436    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:57:03.649884    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:57:03.649884    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:03.657161    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:57:03.657803    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.216.62 22 <nil> <nil>}
	I0501 02:57:03.657803    4712 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714532218
	I0501 02:57:03.807080    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed May  1 02:56:58 UTC 2024
	
	I0501 02:57:03.807174    4712 fix.go:236] clock set: Wed May  1 02:56:58 UTC 2024
	 (err=<nil>)
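The clock-fix exchange above reads the guest clock with date +%s.%N, compares it against the host's view, and resets the guest with date -s @<seconds>. A sketch reproducing the delta arithmetic from the values in the log:

// Sketch of the guest-clock delta computation; constants are lifted from
// the log lines above.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestClock parses the `date +%s.%N` output captured above.
func guestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, _ = strconv.ParseInt(parts[1], 10, 64)
	}
	return time.Unix(sec, nsec).UTC(), nil
}

func main() {
	guest, _ := guestClock("1714532218.937413126")                     // stdout from the log
	host := time.Date(2024, time.May, 1, 2, 56, 54, 19460500, time.UTC) // the "Remote:" value above
	fmt.Println("delta:", guest.Sub(host)) // ~4.917952626s, matching the log
	// The fix then issues `sudo date -s @<seconds>` on the guest:
	fmt.Printf("sudo date -s @%d\n", guest.Unix())
}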
	I0501 02:57:03.807174    4712 start.go:83] releasing machines lock for "ha-136200-m03", held for 2m17.7144231s
	I0501 02:57:03.807438    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:57:05.979339    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:57:05.979339    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:05.979339    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:57:08.602379    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:57:08.602379    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:08.605250    4712 out.go:177] * Found network options:
	I0501 02:57:08.607292    4712 out.go:177]   - NO_PROXY=172.28.217.218,172.28.213.142
	W0501 02:57:08.610080    4712 proxy.go:119] fail to check proxy env: Error ip not in block
	W0501 02:57:08.610080    4712 proxy.go:119] fail to check proxy env: Error ip not in block
	I0501 02:57:08.612307    4712 out.go:177]   - NO_PROXY=172.28.217.218,172.28.213.142
	W0501 02:57:08.614962    4712 proxy.go:119] fail to check proxy env: Error ip not in block
	W0501 02:57:08.614962    4712 proxy.go:119] fail to check proxy env: Error ip not in block
	W0501 02:57:08.616207    4712 proxy.go:119] fail to check proxy env: Error ip not in block
	W0501 02:57:08.616207    4712 proxy.go:119] fail to check proxy env: Error ip not in block
	I0501 02:57:08.619160    4712 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 02:57:08.619160    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:57:08.631565    4712 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0501 02:57:08.631565    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:57:10.838360    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:57:10.838735    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:10.838874    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:57:10.838874    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:57:10.838934    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:10.838934    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:57:13.624235    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:57:13.624235    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:13.624235    4712 sshutil.go:53] new ssh client: &{IP:172.28.216.62 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\id_rsa Username:docker}
	I0501 02:57:13.648439    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:57:13.648490    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:13.648768    4712 sshutil.go:53] new ssh client: &{IP:172.28.216.62 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\id_rsa Username:docker}
	I0501 02:57:13.732596    4712 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.1009937s)
	W0501 02:57:13.732596    4712 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 02:57:13.748662    4712 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 02:57:13.811529    4712 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0501 02:57:13.811529    4712 start.go:494] detecting cgroup driver to use...
	I0501 02:57:13.811529    4712 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1923313s)
	I0501 02:57:13.812665    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 02:57:13.867675    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0501 02:57:13.906069    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0501 02:57:13.929632    4712 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0501 02:57:13.947027    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0501 02:57:13.986248    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 02:57:14.024920    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0501 02:57:14.061978    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 02:57:14.099821    4712 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 02:57:14.138543    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0501 02:57:14.181270    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0501 02:57:14.217808    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0501 02:57:14.261794    4712 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 02:57:14.297051    4712 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 02:57:14.332222    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:57:14.558529    4712 ssh_runner.go:195] Run: sudo systemctl restart containerd
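The run of sed commands above rewrites /etc/containerd/config.toml in place, e.g. forcing SystemdCgroup = false to select the cgroupfs driver. The same edit expressed with Go's regexp package on an in-memory copy of the file; the config.toml snippet is illustrative:

// The SystemdCgroup rewrite from the sed call above, done in Go.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
`
	// Mirrors: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}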
	I0501 02:57:14.595594    4712 start.go:494] detecting cgroup driver to use...
	I0501 02:57:14.610122    4712 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0501 02:57:14.650440    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 02:57:14.689246    4712 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 02:57:14.740013    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 02:57:14.780524    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 02:57:14.822987    4712 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0501 02:57:14.889904    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 02:57:14.919061    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 02:57:14.983590    4712 ssh_runner.go:195] Run: which cri-dockerd
	I0501 02:57:15.008856    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0501 02:57:15.032703    4712 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0501 02:57:15.086991    4712 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0501 02:57:15.324922    4712 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0501 02:57:15.542551    4712 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0501 02:57:15.542551    4712 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0501 02:57:15.594658    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:57:15.826063    4712 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0501 02:57:18.399291    4712 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5732092s)
	I0501 02:57:18.412657    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0501 02:57:18.452282    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0501 02:57:18.491033    4712 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0501 02:57:18.702768    4712 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0501 02:57:18.928695    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:57:19.145438    4712 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0501 02:57:19.199070    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0501 02:57:19.242280    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:57:19.475811    4712 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0501 02:57:19.598548    4712 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0501 02:57:19.612590    4712 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0501 02:57:19.624279    4712 start.go:562] Will wait 60s for crictl version
	I0501 02:57:19.637235    4712 ssh_runner.go:195] Run: which crictl
	I0501 02:57:19.657683    4712 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 02:57:19.721351    4712 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0501 02:57:19.734095    4712 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0501 02:57:19.784976    4712 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0501 02:57:19.822576    4712 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0501 02:57:19.826041    4712 out.go:177]   - env NO_PROXY=172.28.217.218
	I0501 02:57:19.827741    4712 out.go:177]   - env NO_PROXY=172.28.217.218,172.28.213.142
	I0501 02:57:19.831635    4712 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0501 02:57:19.835639    4712 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0501 02:57:19.835639    4712 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0501 02:57:19.835639    4712 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0501 02:57:19.835639    4712 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:88:d7:f1 Flags:up|broadcast|multicast|running}
	I0501 02:57:19.838638    4712 ip.go:210] interface addr: fe80::916c:67e8:6e10:a19b/64
	I0501 02:57:19.838638    4712 ip.go:210] interface addr: 172.28.208.1/20
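getIPForInterface above walks the host's adapters until one matches the "vEthernet (Default Switch)" prefix, then picks its IPv4 address. A sketch of that scan using net.Interfaces from the standard library:

// Find the Hyper-V default-switch adapter by name prefix, as the log does.
package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, ifc := range ifaces {
		if !strings.HasPrefix(ifc.Name, "vEthernet (Default Switch)") {
			continue // e.g. "Ethernet 2", "Loopback Pseudo-Interface 1"
		}
		addrs, _ := ifc.Addrs()
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() != nil {
				fmt.Printf("found %s: %s\n", ifc.Name, ipnet)
			}
		}
	}
}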
	I0501 02:57:19.851676    4712 ssh_runner.go:195] Run: grep 172.28.208.1	host.minikube.internal$ /etc/hosts
	I0501 02:57:19.858242    4712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
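The /etc/hosts one-liner above is an upsert: grep -v drops any stale host.minikube.internal entry, then the current mapping is appended and the file copied back. A pure-Go equivalent of that rewrite:

// In-memory version of the grep-v-then-append /etc/hosts upsert above.
package main

import (
	"fmt"
	"strings"
)

func upsertHost(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) { // the grep -v $'\t<name>$' step
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name) // the echo step
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	fmt.Print(upsertHost("127.0.0.1\tlocalhost\n172.28.208.5\thost.minikube.internal\n",
		"172.28.208.1", "host.minikube.internal"))
}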
	I0501 02:57:19.883254    4712 mustload.go:65] Loading cluster: ha-136200
	I0501 02:57:19.883656    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:57:19.884140    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:57:22.018331    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:57:22.018592    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:22.018658    4712 host.go:66] Checking if "ha-136200" exists ...
	I0501 02:57:22.019393    4712 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200 for IP: 172.28.216.62
	I0501 02:57:22.019393    4712 certs.go:194] generating shared ca certs ...
	I0501 02:57:22.019393    4712 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:57:22.020318    4712 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0501 02:57:22.020786    4712 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0501 02:57:22.021028    4712 certs.go:256] generating profile certs ...
	I0501 02:57:22.021028    4712 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\client.key
	I0501 02:57:22.021606    4712 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.cbcfb2e9
	I0501 02:57:22.021767    4712 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.cbcfb2e9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.217.218 172.28.213.142 172.28.216.62 172.28.223.254]
	I0501 02:57:22.149544    4712 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.cbcfb2e9 ...
	I0501 02:57:22.149544    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.cbcfb2e9: {Name:mk4837fbdb29e34df2c44991c600cda784a93d5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:57:22.150373    4712 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.cbcfb2e9 ...
	I0501 02:57:22.150373    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.cbcfb2e9: {Name:mkcff5432d26e17c25cf2a9709eb4553a31e99c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:57:22.152472    4712 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.cbcfb2e9 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt
	I0501 02:57:22.165924    4712 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.cbcfb2e9 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key
	I0501 02:57:22.166444    4712 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key
	I0501 02:57:22.166444    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0501 02:57:22.167623    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0501 02:57:22.167772    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0501 02:57:22.167772    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0501 02:57:22.168122    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0501 02:57:22.168280    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0501 02:57:22.168464    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0501 02:57:22.168464    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0501 02:57:22.169490    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem (1338 bytes)
	W0501 02:57:22.169490    4712 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288_empty.pem, impossibly tiny 0 bytes
	I0501 02:57:22.170595    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0501 02:57:22.170869    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0501 02:57:22.171164    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0501 02:57:22.171434    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0501 02:57:22.171670    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem (1708 bytes)
	I0501 02:57:22.172286    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /usr/share/ca-certificates/142882.pem
	I0501 02:57:22.172286    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:57:22.172286    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem -> /usr/share/ca-certificates/14288.pem
	I0501 02:57:22.172911    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:57:24.374168    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:57:24.374168    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:24.374904    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:57:26.980450    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:57:26.980450    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:26.980450    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:57:27.093857    4712 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0501 02:57:27.102183    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0501 02:57:27.141690    4712 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0501 02:57:27.150194    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0501 02:57:27.193806    4712 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0501 02:57:27.202957    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0501 02:57:27.254044    4712 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0501 02:57:27.262605    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0501 02:57:27.303214    4712 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0501 02:57:27.310453    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0501 02:57:27.348966    4712 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0501 02:57:27.356382    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0501 02:57:27.383468    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 02:57:27.437872    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 02:57:27.494095    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 02:57:27.544977    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0501 02:57:27.599083    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0501 02:57:27.652123    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0501 02:57:27.710792    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 02:57:27.766379    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 02:57:27.817284    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /usr/share/ca-certificates/142882.pem (1708 bytes)
	I0501 02:57:27.867949    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 02:57:27.930560    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem --> /usr/share/ca-certificates/14288.pem (1338 bytes)
	I0501 02:57:27.987875    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0501 02:57:28.025174    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0501 02:57:28.061492    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0501 02:57:28.099323    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0501 02:57:28.133169    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0501 02:57:28.168585    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0501 02:57:28.223450    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0501 02:57:28.292690    4712 ssh_runner.go:195] Run: openssl version
	I0501 02:57:28.315882    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142882.pem && ln -fs /usr/share/ca-certificates/142882.pem /etc/ssl/certs/142882.pem"
	I0501 02:57:28.353000    4712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142882.pem
	I0501 02:57:28.365096    4712 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:27 /usr/share/ca-certificates/142882.pem
	I0501 02:57:28.379858    4712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142882.pem
	I0501 02:57:28.406814    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142882.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 02:57:28.445706    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 02:57:28.482484    4712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:57:28.491120    4712 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:12 /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:57:28.507367    4712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:57:28.535421    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 02:57:28.574647    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14288.pem && ln -fs /usr/share/ca-certificates/14288.pem /etc/ssl/certs/14288.pem"
	I0501 02:57:28.616757    4712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14288.pem
	I0501 02:57:28.624484    4712 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:27 /usr/share/ca-certificates/14288.pem
	I0501 02:57:28.642485    4712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14288.pem
	I0501 02:57:28.665148    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14288.pem /etc/ssl/certs/51391683.0"
	I0501 02:57:28.706630    4712 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 02:57:28.714508    4712 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0501 02:57:28.714998    4712 kubeadm.go:928] updating node {m03 172.28.216.62 8443 v1.30.0 docker true true} ...
	I0501 02:57:28.715189    4712 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-136200-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.216.62
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP:172.28.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 02:57:28.715218    4712 kube-vip.go:111] generating kube-vip config ...
	I0501 02:57:28.727524    4712 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0501 02:57:28.767475    4712 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0501 02:57:28.767631    4712 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.28.223.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
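The generated kube-vip manifest above enables control-plane leader election (lease 5s, renew deadline 3s, retry period 1s) plus load-balancing on port 8443. A tiny sanity check of the ordering those three durations must satisfy; this is the standard leader-election constraint, and whether minikube validates it at this point is not shown in the log:

// Check retryperiod < renewdeadline < leaseduration for the values above.
package main

import (
	"fmt"
	"time"
)

func main() {
	lease, renew, retry := 5*time.Second, 3*time.Second, 1*time.Second
	if !(retry < renew && renew < lease) {
		fmt.Println("invalid: need retryperiod < renewdeadline < leaseduration")
		return
	}
	fmt.Println("kube-vip leader-election timings OK")
}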
	I0501 02:57:28.783398    4712 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 02:57:28.801741    4712 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0501 02:57:28.815792    4712 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0501 02:57:28.837983    4712 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0501 02:57:28.838101    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0501 02:57:28.837983    4712 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256
	I0501 02:57:28.838226    4712 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256
	I0501 02:57:28.838396    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0501 02:57:28.855124    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:57:28.856182    4712 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0501 02:57:28.858128    4712 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0501 02:57:28.881905    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0501 02:57:28.881905    4712 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0501 02:57:28.882027    4712 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0501 02:57:28.882165    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0501 02:57:28.882277    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0501 02:57:28.898781    4712 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0501 02:57:28.959439    4712 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0501 02:57:28.959688    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
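The kubelet/kubeadm/kubectl binaries were fetched with a checksum=file:...sha256 side file rather than from the local cache. A hedged sketch of that verify-before-install download (network access assumed; the URL is the one in the log):

// Download a release binary and its .sha256 companion, compare digests.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl"
	bin, err := fetch(base)
	if err != nil {
		fmt.Println("download failed:", err)
		return
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		fmt.Println("checksum download failed:", err)
		return
	}
	got := sha256.Sum256(bin)
	if hex.EncodeToString(got[:]) != strings.TrimSpace(string(sum)) {
		fmt.Println("checksum mismatch; refusing to install")
		return
	}
	fmt.Println("kubectl verified:", len(bin), "bytes")
}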
	I0501 02:57:30.251192    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0501 02:57:30.272192    4712 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0501 02:57:30.311119    4712 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 02:57:30.353248    4712 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0501 02:57:30.407414    4712 ssh_runner.go:195] Run: grep 172.28.223.254	control-plane.minikube.internal$ /etc/hosts
	I0501 02:57:30.415360    4712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.223.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 02:57:30.454450    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:57:30.696464    4712 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 02:57:30.737201    4712 host.go:66] Checking if "ha-136200" exists ...
	I0501 02:57:30.801844    4712 start.go:316] joinCluster: &{Name:ha-136200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP:172.28.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.217.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.213.142 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.28.216.62 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:57:30.802126    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0501 02:57:30.802234    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:57:32.961923    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:57:32.961923    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:32.962279    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:57:35.600191    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:57:35.600191    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:35.601356    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:57:35.838006    4712 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0": (5.0358438s)
	I0501 02:57:35.838006    4712 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.28.216.62 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0501 02:57:35.838006    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 3455nt.3c342oggoxvi06jc --discovery-token-ca-cert-hash sha256:4798dcaffc6298c78af5ad06745de18c1231853c65c9db4cd09ab1be96f5e875 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-136200-m03 --control-plane --apiserver-advertise-address=172.28.216.62 --apiserver-bind-port=8443"
	I0501 02:58:21.819619    4712 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 3455nt.3c342oggoxvi06jc --discovery-token-ca-cert-hash sha256:4798dcaffc6298c78af5ad06745de18c1231853c65c9db4cd09ab1be96f5e875 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-136200-m03 --control-plane --apiserver-advertise-address=172.28.216.62 --apiserver-bind-port=8443": (45.981274s)
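The two commands above are the entire control-plane join handshake: "kubeadm token create --print-join-command" on the existing control plane emits a ready-made join line, and running it on m03 with the extra --control-plane flags enrolls that machine as an additional control plane. A minimal bash sketch of the same flow (<node-ip> is a placeholder; a stock kubeadm cluster would also pass --certificate-key from "kubeadm init phase upload-certs", which minikube sidesteps by copying the certificates over SSH and ignoring preflight errors):

    # on an existing control plane: emit a non-expiring join command
    $ JOIN_CMD=$(sudo kubeadm token create --print-join-command --ttl=0)
    # on the joining machine: append the control-plane flags seen above
    $ sudo $JOIN_CMD --control-plane --apiserver-advertise-address=<node-ip> --apiserver-bind-port=8443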
	I0501 02:58:21.819619    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0501 02:58:22.593318    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-136200-m03 minikube.k8s.io/updated_at=2024_05_01T02_58_22_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e minikube.k8s.io/name=ha-136200 minikube.k8s.io/primary=false
	I0501 02:58:22.788566    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-136200-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0501 02:58:22.987611    4712 start.go:318] duration metric: took 52.1853822s to joinCluster
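Two post-join touches happen above: the label command stamps minikube's bookkeeping metadata onto the node, and the trailing minus in "node-role.kubernetes.io/control-plane:NoSchedule-" removes that taint, so the new control plane can also schedule ordinary workloads (matching ControlPlane:true Worker:true in the profile). Standalone equivalents, keeping only the interesting label:

    $ kubectl label --overwrite nodes ha-136200-m03 minikube.k8s.io/primary=false
    $ kubectl taint nodes ha-136200-m03 node-role.kubernetes.io/control-plane:NoSchedule-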
	I0501 02:58:22.987895    4712 start.go:234] Will wait 6m0s for node &{Name:m03 IP:172.28.216.62 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0501 02:58:23.012496    4712 out.go:177] * Verifying Kubernetes components...
	I0501 02:58:22.988142    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:58:23.031751    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:58:23.569395    4712 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 02:58:23.619961    4712 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0501 02:58:23.620228    4712 kapi.go:59] client config for ha-136200: &rest.Config{Host:"https://172.28.223.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-136200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-136200\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1b95ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0501 02:58:23.620770    4712 kubeadm.go:477] Overriding stale ClientConfig host https://172.28.223.254:8443 with https://172.28.217.218:8443
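The client config above was built against the HA virtual IP (APIServerHAVIP 172.28.223.254), and the warning shows the verifier swapping in the first control plane's direct endpoint instead. Both should answer identically; a sketch for probing each, assuming the profile's kubeconfig credentials and an apiserver certificate that covers both addresses (which the HAVIP setup is meant to arrange):

    $ kubectl --server=https://172.28.223.254:8443 get nodes   # through the VIP
    $ kubectl --server=https://172.28.217.218:8443 get nodes   # direct to ha-136200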
	I0501 02:58:23.621670    4712 node_ready.go:35] waiting up to 6m0s for node "ha-136200-m03" to be "Ready" ...
	I0501 02:58:23.621910    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:23.621910    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:23.621982    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:23.621982    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:23.637444    4712 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0501 02:58:24.133658    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:24.133658    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:24.133658    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:24.133658    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:24.139465    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:24.622867    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:24.622867    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:24.622867    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:24.622867    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:24.629524    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:58:25.129429    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:25.129429    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:25.129510    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:25.129510    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:25.135754    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:58:25.633954    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:25.633954    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:25.633954    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:25.633954    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:25.638650    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:58:25.639656    4712 node_ready.go:53] node "ha-136200-m03" has status "Ready":"False"
	I0501 02:58:26.123894    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:26.123894    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:26.123894    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:26.123894    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:26.129103    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:26.629161    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:26.629161    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:26.629161    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:26.629161    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:26.648167    4712 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0501 02:58:27.136028    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:27.136028    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:27.136028    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:27.136028    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:27.326021    4712 round_trippers.go:574] Response Status: 200 OK in 189 milliseconds
	I0501 02:58:27.623480    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:27.623600    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:27.623600    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:27.623600    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:27.629035    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:28.136433    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:28.136433    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:28.136626    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:28.136626    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:28.203923    4712 round_trippers.go:574] Response Status: 200 OK in 67 milliseconds
	I0501 02:58:28.205553    4712 node_ready.go:53] node "ha-136200-m03" has status "Ready":"False"
	I0501 02:58:28.636021    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:28.636185    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:28.636185    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:28.636185    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:28.646735    4712 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0501 02:58:29.122451    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:29.122515    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:29.122515    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:29.122515    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:29.140826    4712 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0501 02:58:29.629756    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:29.629756    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:29.629756    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:29.629756    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:29.637588    4712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:58:30.132174    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:30.132269    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:30.132269    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:30.132269    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:30.136921    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:58:30.632939    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:30.633022    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:30.633022    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:30.633022    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:30.638815    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:30.640044    4712 node_ready.go:53] node "ha-136200-m03" has status "Ready":"False"
	I0501 02:58:31.133378    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:31.133378    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:31.133378    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:31.133378    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:31.138754    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:31.633444    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:31.633511    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:31.633511    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:31.633511    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:31.639686    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:58:32.131317    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:32.131317    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:32.131317    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:32.131317    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:32.136414    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:32.629649    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:32.629649    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:32.629649    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:32.629649    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:32.634980    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:33.129878    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:33.129878    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:33.129878    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:33.129878    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:33.155125    4712 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I0501 02:58:33.156557    4712 node_ready.go:53] node "ha-136200-m03" has status "Ready":"False"
	I0501 02:58:33.629865    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:33.630060    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:33.630060    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:33.630060    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:33.636368    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:58:34.128412    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:34.128412    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:34.128412    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:34.128412    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:34.133022    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:58:34.629333    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:34.629333    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:34.629333    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:34.629333    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:34.635358    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:58:35.129272    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:35.129376    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.129376    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.129376    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.136662    4712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:58:35.137446    4712 node_ready.go:49] node "ha-136200-m03" has status "Ready":"True"
	I0501 02:58:35.137492    4712 node_ready.go:38] duration metric: took 11.5157372s for node "ha-136200-m03" to be "Ready" ...
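The loop above is a plain poll: GET the node object roughly every 500ms and read its Ready condition until it flips to True. Outside the test harness the same gate is a single kubectl command, with the timeout matched to the 6m budget:

    $ kubectl wait --for=condition=Ready node/ha-136200-m03 --timeout=6m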
	I0501 02:58:35.137492    4712 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 02:58:35.137635    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:58:35.137635    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.137635    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.137635    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.149133    4712 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0501 02:58:35.158917    4712 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2j8mj" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.159445    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2j8mj
	I0501 02:58:35.159565    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.159565    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.159651    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.170650    4712 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0501 02:58:35.172026    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:35.172026    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.172026    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.172026    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.180770    4712 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0501 02:58:35.180770    4712 pod_ready.go:92] pod "coredns-7db6d8ff4d-2j8mj" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:35.180770    4712 pod_ready.go:81] duration metric: took 21.3241ms for pod "coredns-7db6d8ff4d-2j8mj" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.180770    4712 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rm4gm" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.180770    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rm4gm
	I0501 02:58:35.180770    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.180770    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.180770    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.185805    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:35.187056    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:35.187056    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.187056    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.187056    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.191361    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:58:35.192405    4712 pod_ready.go:92] pod "coredns-7db6d8ff4d-rm4gm" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:35.192405    4712 pod_ready.go:81] duration metric: took 11.6358ms for pod "coredns-7db6d8ff4d-rm4gm" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.192405    4712 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.192405    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200
	I0501 02:58:35.192405    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.192405    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.192405    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.196117    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:58:35.197312    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:35.197312    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.197389    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.197389    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.201195    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:58:35.201924    4712 pod_ready.go:92] pod "etcd-ha-136200" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:35.201924    4712 pod_ready.go:81] duration metric: took 9.5188ms for pod "etcd-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.201924    4712 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.202054    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:58:35.202195    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.202195    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.202195    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.208450    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:58:35.209323    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:58:35.209323    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.209323    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.209323    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.212935    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:58:35.214190    4712 pod_ready.go:92] pod "etcd-ha-136200-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:35.214190    4712 pod_ready.go:81] duration metric: took 12.2652ms for pod "etcd-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.214190    4712 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-136200-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.330301    4712 request.go:629] Waited for 115.8713ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m03
	I0501 02:58:35.330574    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m03
	I0501 02:58:35.330574    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.330574    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.330574    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.338021    4712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:58:35.534070    4712 request.go:629] Waited for 194.5208ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:35.534353    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:35.534353    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.534353    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.534353    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.540932    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:58:35.541927    4712 pod_ready.go:92] pod "etcd-ha-136200-m03" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:35.541927    4712 pod_ready.go:81] duration metric: took 327.673ms for pod "etcd-ha-136200-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.541927    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.737879    4712 request.go:629] Waited for 195.951ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-136200
	I0501 02:58:35.738683    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-136200
	I0501 02:58:35.738683    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.738683    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.738683    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.743861    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:58:35.940254    4712 request.go:629] Waited for 195.0246ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:35.940254    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:35.940254    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.940254    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.940254    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.943091    4712 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:58:35.949355    4712 pod_ready.go:92] pod "kube-apiserver-ha-136200" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:35.949355    4712 pod_ready.go:81] duration metric: took 407.425ms for pod "kube-apiserver-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.949355    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:36.143537    4712 request.go:629] Waited for 193.9374ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-136200-m02
	I0501 02:58:36.143801    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-136200-m02
	I0501 02:58:36.143835    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:36.143835    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:36.143835    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:36.149992    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:36.331653    4712 request.go:629] Waited for 180.2785ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:58:36.331653    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:58:36.331653    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:36.331653    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:36.331653    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:36.337290    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:36.338458    4712 pod_ready.go:92] pod "kube-apiserver-ha-136200-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:36.338521    4712 pod_ready.go:81] duration metric: took 389.1629ms for pod "kube-apiserver-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:36.338521    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-136200-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:36.533514    4712 request.go:629] Waited for 194.8709ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-136200-m03
	I0501 02:58:36.533967    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-136200-m03
	I0501 02:58:36.534181    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:36.534181    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:36.534181    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:36.548236    4712 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0501 02:58:36.737561    4712 request.go:629] Waited for 188.1304ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:36.737864    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:36.737942    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:36.737942    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:36.738002    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:36.742410    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:58:36.743400    4712 pod_ready.go:92] pod "kube-apiserver-ha-136200-m03" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:36.743400    4712 pod_ready.go:81] duration metric: took 404.8131ms for pod "kube-apiserver-ha-136200-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:36.743400    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:36.942223    4712 request.go:629] Waited for 198.605ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-136200
	I0501 02:58:36.942445    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-136200
	I0501 02:58:36.942445    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:36.942445    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:36.942445    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:36.947749    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:37.131974    4712 request.go:629] Waited for 183.3149ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:37.132232    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:37.132323    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:37.132323    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:37.132323    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:37.137476    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:37.138446    4712 pod_ready.go:92] pod "kube-controller-manager-ha-136200" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:37.138446    4712 pod_ready.go:81] duration metric: took 395.044ms for pod "kube-controller-manager-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:37.138446    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:37.333778    4712 request.go:629] Waited for 195.2258ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-136200-m02
	I0501 02:58:37.334044    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-136200-m02
	I0501 02:58:37.334044    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:37.334044    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:37.334044    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:37.338704    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:58:37.538179    4712 request.go:629] Waited for 197.0874ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:58:37.538437    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:58:37.538500    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:37.538500    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:37.538500    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:37.544773    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:58:37.544773    4712 pod_ready.go:92] pod "kube-controller-manager-ha-136200-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:37.544773    4712 pod_ready.go:81] duration metric: took 406.3235ms for pod "kube-controller-manager-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:37.544773    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-136200-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:37.743876    4712 request.go:629] Waited for 199.1018ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-136200-m03
	I0501 02:58:37.744106    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-136200-m03
	I0501 02:58:37.744106    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:37.744106    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:37.744106    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:37.749628    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:37.931954    4712 request.go:629] Waited for 180.0772ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:37.932054    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:37.932132    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:37.932132    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:37.932132    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:37.937302    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:37.937875    4712 pod_ready.go:92] pod "kube-controller-manager-ha-136200-m03" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:37.937875    4712 pod_ready.go:81] duration metric: took 393.0991ms for pod "kube-controller-manager-ha-136200-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:37.937875    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8f67k" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:38.134928    4712 request.go:629] Waited for 196.7268ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8f67k
	I0501 02:58:38.134928    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8f67k
	I0501 02:58:38.135164    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:38.135164    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:38.135164    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:38.151320    4712 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0501 02:58:38.340422    4712 request.go:629] Waited for 186.7144ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:38.340523    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:38.340523    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:38.340523    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:38.340523    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:38.344835    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:58:38.346933    4712 pod_ready.go:92] pod "kube-proxy-8f67k" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:38.347124    4712 pod_ready.go:81] duration metric: took 409.2461ms for pod "kube-proxy-8f67k" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:38.347124    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9ml9x" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:38.529397    4712 request.go:629] Waited for 182.0139ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9ml9x
	I0501 02:58:38.529683    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9ml9x
	I0501 02:58:38.529776    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:38.529776    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:38.529776    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:38.535530    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:38.733704    4712 request.go:629] Waited for 197.3369ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:38.733854    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:38.733854    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:38.733854    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:38.733854    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:38.739456    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:38.741035    4712 pod_ready.go:92] pod "kube-proxy-9ml9x" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:38.741035    4712 pod_ready.go:81] duration metric: took 393.9082ms for pod "kube-proxy-9ml9x" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:38.741141    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zj5jv" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:38.936294    4712 request.go:629] Waited for 194.9804ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zj5jv
	I0501 02:58:38.936492    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zj5jv
	I0501 02:58:38.936492    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:38.936492    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:38.936492    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:38.941904    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:39.139076    4712 request.go:629] Waited for 195.5675ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:58:39.139516    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:58:39.139516    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:39.139516    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:39.139590    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:39.146156    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:58:39.146839    4712 pod_ready.go:92] pod "kube-proxy-zj5jv" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:39.147389    4712 pod_ready.go:81] duration metric: took 406.2452ms for pod "kube-proxy-zj5jv" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:39.147389    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:39.331771    4712 request.go:629] Waited for 183.3466ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200
	I0501 02:58:39.331839    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200
	I0501 02:58:39.331839    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:39.331839    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:39.331839    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:39.338962    4712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:58:39.529638    4712 request.go:629] Waited for 189.8551ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:39.529880    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:39.529880    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:39.529880    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:39.529880    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:39.535423    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:39.536281    4712 pod_ready.go:92] pod "kube-scheduler-ha-136200" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:39.536496    4712 pod_ready.go:81] duration metric: took 389.1041ms for pod "kube-scheduler-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:39.536496    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:39.733532    4712 request.go:629] Waited for 196.8225ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200-m02
	I0501 02:58:39.733532    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200-m02
	I0501 02:58:39.733755    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:39.733755    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:39.733755    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:39.738768    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:39.936556    4712 request.go:629] Waited for 196.8464ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:58:39.936957    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:58:39.936957    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:39.936957    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:39.937066    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:39.942275    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:39.942447    4712 pod_ready.go:92] pod "kube-scheduler-ha-136200-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:39.943009    4712 pod_ready.go:81] duration metric: took 406.5101ms for pod "kube-scheduler-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:39.943009    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-136200-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:40.137743    4712 request.go:629] Waited for 194.2926ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200-m03
	I0501 02:58:40.137974    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200-m03
	I0501 02:58:40.137974    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:40.138045    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:40.138045    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:40.143795    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:40.340161    4712 request.go:629] Waited for 194.6485ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:40.340307    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:40.340307    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:40.340368    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:40.340368    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:40.346127    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:40.347243    4712 pod_ready.go:92] pod "kube-scheduler-ha-136200-m03" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:40.347243    4712 pod_ready.go:81] duration metric: took 404.2307ms for pod "kube-scheduler-ha-136200-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:40.347243    4712 pod_ready.go:38] duration metric: took 5.2097122s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
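The per-pod waits above walk every system-critical component one request at a time; kubectl can express the same readiness gate per label selector. A sketch for two of the selectors listed (repeat for the remaining component labels):

    $ kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
    $ kubectl -n kube-system wait --for=condition=Ready pod -l component=etcd --timeout=6m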
	I0501 02:58:40.347243    4712 api_server.go:52] waiting for apiserver process to appear ...
	I0501 02:58:40.361809    4712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:58:40.399669    4712 api_server.go:72] duration metric: took 17.4115847s to wait for apiserver process to appear ...
	I0501 02:58:40.399766    4712 api_server.go:88] waiting for apiserver healthz status ...
	I0501 02:58:40.399822    4712 api_server.go:253] Checking apiserver healthz at https://172.28.217.218:8443/healthz ...
	I0501 02:58:40.410080    4712 api_server.go:279] https://172.28.217.218:8443/healthz returned 200:
	ok
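The healthz probe is just an authenticated GET against the apiserver; the simplest manual equivalent reuses the kubeconfig credentials through kubectl's raw mode:

    $ kubectl get --raw='/healthz'    # prints: ok
    $ kubectl get --raw='/version'    # same data as the /version request that follows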
	I0501 02:58:40.410375    4712 round_trippers.go:463] GET https://172.28.217.218:8443/version
	I0501 02:58:40.410503    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:40.410503    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:40.410503    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:40.412638    4712 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:58:40.413725    4712 api_server.go:141] control plane version: v1.30.0
	I0501 02:58:40.413725    4712 api_server.go:131] duration metric: took 13.9591ms to wait for apiserver health ...
	I0501 02:58:40.413725    4712 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 02:58:40.543767    4712 request.go:629] Waited for 129.9651ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:58:40.543975    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:58:40.543975    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:40.543975    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:40.543975    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:40.554206    4712 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0501 02:58:40.565423    4712 system_pods.go:59] 24 kube-system pods found
	I0501 02:58:40.565423    4712 system_pods.go:61] "coredns-7db6d8ff4d-2j8mj" [f945c979-ae51-4c8e-acf9-105adc3c83bc] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "coredns-7db6d8ff4d-rm4gm" [87b284b3-e8e1-452a-8c72-41a8bec62505] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "etcd-ha-136200" [509a726d-e9a1-4922-8e7e-f3d91ddef75f] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "etcd-ha-136200-m02" [8122eb28-1fdf-4ddf-ab30-c29e8bcb83c0] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "etcd-ha-136200-m03" [5f77fdbc-d14d-4d42-9880-fc7e5b2c58b8] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kindnet-kb2x4" [6e660648-3dce-469f-a2c2-c99f445ceb20] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kindnet-rlfkk" [ae08f4b9-98a8-4faf-ab4a-f04e900375bf] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kindnet-sj2rc" [c0e605a0-1182-4977-a8ba-fabe9617bd3c] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kube-apiserver-ha-136200" [53ea7d41-7132-4c89-9dbd-bedb2267b55f] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kube-apiserver-ha-136200-m02" [fc4036e1-5cc9-4f27-8299-97ee4a29e8b4] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kube-apiserver-ha-136200-m03" [cf2822d7-29da-4727-b4c1-19b593abbce8] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kube-controller-manager-ha-136200" [4c988ab2-e056-4a0e-88c9-b62839c84d9f] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kube-controller-manager-ha-136200-m02" [7a617a7e-7413-4f42-bfe2-763b7ace71ca] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kube-controller-manager-ha-136200-m03" [f72989a2-322b-4b6d-884f-8888b7fb6e36] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kube-proxy-8f67k" [9dedea03-3066-4852-98e2-10190699b2c5] Running
	I0501 02:58:40.566039    4712 system_pods.go:61] "kube-proxy-9ml9x" [c36f2b4f-ad90-4070-adf1-1ac165f86fdd] Running
	I0501 02:58:40.566039    4712 system_pods.go:61] "kube-proxy-zj5jv" [1802b341-6ac6-46b0-99a3-db02ae5d8e46] Running
	I0501 02:58:40.566039    4712 system_pods.go:61] "kube-scheduler-ha-136200" [6be37365-544a-4367-9852-6eaa5b60e6ad] Running
	I0501 02:58:40.566039    4712 system_pods.go:61] "kube-scheduler-ha-136200-m02" [b2ae6bb2-989b-4598-99e3-f8494b006f3e] Running
	I0501 02:58:40.566039    4712 system_pods.go:61] "kube-scheduler-ha-136200-m03" [79e48699-dd30-47da-8e29-685b9fb437b8] Running
	I0501 02:58:40.566039    4712 system_pods.go:61] "kube-vip-ha-136200" [f6f631ac-0ba9-413a-8810-8a80e4be81b8] Running
	I0501 02:58:40.566039    4712 system_pods.go:61] "kube-vip-ha-136200-m02" [598e76fa-0703-40eb-a62c-f3947f06d0e0] Running
	I0501 02:58:40.566039    4712 system_pods.go:61] "kube-vip-ha-136200-m03" [a1bd8449-1900-4366-86a5-49e758a44ebd] Running
	I0501 02:58:40.566039    4712 system_pods.go:61] "storage-provisioner" [ca2bdb41-e7f0-4aa4-b343-0e3a85a4c04e] Running
	I0501 02:58:40.566039    4712 system_pods.go:74] duration metric: took 152.3128ms to wait for pod list to return data ...
	I0501 02:58:40.566039    4712 default_sa.go:34] waiting for default service account to be created ...
	I0501 02:58:40.731110    4712 request.go:629] Waited for 164.8435ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/default/serviceaccounts
	I0501 02:58:40.731110    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/default/serviceaccounts
	I0501 02:58:40.731110    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:40.731110    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:40.731110    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:40.736937    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:40.737529    4712 default_sa.go:45] found service account: "default"
	I0501 02:58:40.737568    4712 default_sa.go:55] duration metric: took 171.5277ms for default service account to be created ...
	I0501 02:58:40.737568    4712 system_pods.go:116] waiting for k8s-apps to be running ...
	I0501 02:58:40.936328    4712 request.go:629] Waited for 198.4062ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:58:40.936390    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:58:40.936390    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:40.936390    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:40.936390    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:40.946796    4712 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0501 02:58:40.961809    4712 system_pods.go:86] 24 kube-system pods found
	I0501 02:58:40.961809    4712 system_pods.go:89] "coredns-7db6d8ff4d-2j8mj" [f945c979-ae51-4c8e-acf9-105adc3c83bc] Running
	I0501 02:58:40.961809    4712 system_pods.go:89] "coredns-7db6d8ff4d-rm4gm" [87b284b3-e8e1-452a-8c72-41a8bec62505] Running
	I0501 02:58:40.961809    4712 system_pods.go:89] "etcd-ha-136200" [509a726d-e9a1-4922-8e7e-f3d91ddef75f] Running
	I0501 02:58:40.961809    4712 system_pods.go:89] "etcd-ha-136200-m02" [8122eb28-1fdf-4ddf-ab30-c29e8bcb83c0] Running
	I0501 02:58:40.961809    4712 system_pods.go:89] "etcd-ha-136200-m03" [5f77fdbc-d14d-4d42-9880-fc7e5b2c58b8] Running
	I0501 02:58:40.961809    4712 system_pods.go:89] "kindnet-kb2x4" [6e660648-3dce-469f-a2c2-c99f445ceb20] Running
	I0501 02:58:40.961809    4712 system_pods.go:89] "kindnet-rlfkk" [ae08f4b9-98a8-4faf-ab4a-f04e900375bf] Running
	I0501 02:58:40.961809    4712 system_pods.go:89] "kindnet-sj2rc" [c0e605a0-1182-4977-a8ba-fabe9617bd3c] Running
	I0501 02:58:40.961809    4712 system_pods.go:89] "kube-apiserver-ha-136200" [53ea7d41-7132-4c89-9dbd-bedb2267b55f] Running
	I0501 02:58:40.961809    4712 system_pods.go:89] "kube-apiserver-ha-136200-m02" [fc4036e1-5cc9-4f27-8299-97ee4a29e8b4] Running
	I0501 02:58:40.962364    4712 system_pods.go:89] "kube-apiserver-ha-136200-m03" [cf2822d7-29da-4727-b4c1-19b593abbce8] Running
	I0501 02:58:40.962364    4712 system_pods.go:89] "kube-controller-manager-ha-136200" [4c988ab2-e056-4a0e-88c9-b62839c84d9f] Running
	I0501 02:58:40.962364    4712 system_pods.go:89] "kube-controller-manager-ha-136200-m02" [7a617a7e-7413-4f42-bfe2-763b7ace71ca] Running
	I0501 02:58:40.962364    4712 system_pods.go:89] "kube-controller-manager-ha-136200-m03" [f72989a2-322b-4b6d-884f-8888b7fb6e36] Running
	I0501 02:58:40.962364    4712 system_pods.go:89] "kube-proxy-8f67k" [9dedea03-3066-4852-98e2-10190699b2c5] Running
	I0501 02:58:40.962364    4712 system_pods.go:89] "kube-proxy-9ml9x" [c36f2b4f-ad90-4070-adf1-1ac165f86fdd] Running
	I0501 02:58:40.962434    4712 system_pods.go:89] "kube-proxy-zj5jv" [1802b341-6ac6-46b0-99a3-db02ae5d8e46] Running
	I0501 02:58:40.962434    4712 system_pods.go:89] "kube-scheduler-ha-136200" [6be37365-544a-4367-9852-6eaa5b60e6ad] Running
	I0501 02:58:40.962434    4712 system_pods.go:89] "kube-scheduler-ha-136200-m02" [b2ae6bb2-989b-4598-99e3-f8494b006f3e] Running
	I0501 02:58:40.962434    4712 system_pods.go:89] "kube-scheduler-ha-136200-m03" [79e48699-dd30-47da-8e29-685b9fb437b8] Running
	I0501 02:58:40.962434    4712 system_pods.go:89] "kube-vip-ha-136200" [f6f631ac-0ba9-413a-8810-8a80e4be81b8] Running
	I0501 02:58:40.962434    4712 system_pods.go:89] "kube-vip-ha-136200-m02" [598e76fa-0703-40eb-a62c-f3947f06d0e0] Running
	I0501 02:58:40.962434    4712 system_pods.go:89] "kube-vip-ha-136200-m03" [a1bd8449-1900-4366-86a5-49e758a44ebd] Running
	I0501 02:58:40.962497    4712 system_pods.go:89] "storage-provisioner" [ca2bdb41-e7f0-4aa4-b343-0e3a85a4c04e] Running
	I0501 02:58:40.962521    4712 system_pods.go:126] duration metric: took 224.9515ms to wait for k8s-apps to be running ...
	I0501 02:58:40.962521    4712 system_svc.go:44] waiting for kubelet service to be running ....
	I0501 02:58:40.975583    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:58:41.007354    4712 system_svc.go:56] duration metric: took 44.7618ms WaitForService to wait for kubelet
	I0501 02:58:41.007354    4712 kubeadm.go:576] duration metric: took 18.0193266s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 02:58:41.007354    4712 node_conditions.go:102] verifying NodePressure condition ...
	I0501 02:58:41.140806    4712 request.go:629] Waited for 133.382ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes
	I0501 02:58:41.140922    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes
	I0501 02:58:41.140964    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:41.140964    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:41.141046    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:41.151428    4712 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0501 02:58:41.153995    4712 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 02:58:41.154053    4712 node_conditions.go:123] node cpu capacity is 2
	I0501 02:58:41.154053    4712 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 02:58:41.154113    4712 node_conditions.go:123] node cpu capacity is 2
	I0501 02:58:41.154113    4712 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 02:58:41.154113    4712 node_conditions.go:123] node cpu capacity is 2
	I0501 02:58:41.154113    4712 node_conditions.go:105] duration metric: took 146.7575ms to run NodePressure ...
	I0501 02:58:41.154113    4712 start.go:240] waiting for startup goroutines ...
	I0501 02:58:41.154113    4712 start.go:254] writing updated cluster config ...
	I0501 02:58:41.168562    4712 ssh_runner.go:195] Run: rm -f paused
	I0501 02:58:41.321592    4712 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0501 02:58:41.326673    4712 out.go:177] * Done! kubectl is now configured to use "ha-136200" cluster and "default" namespace by default
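The wait sequence above concludes the HA start: every kube-system pod is Running, the default service account exists, the kubelet unit is active, and each of the three nodes reports the same allocatable capacity (2 CPUs, 17734596Ki ephemeral storage). A minimal client-go sketch of the same kube-system readiness check, assuming a kubeconfig in the standard location (illustrative only, not minikube's own system_pods.go):

    package main

    import (
    	"context"
    	"fmt"
    	"path/filepath"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    	"k8s.io/client-go/util/homedir"
    )

    func main() {
    	// Load the kubeconfig that "Done!" says kubectl is now using.
    	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Mirror the k8s-apps wait: list kube-system pods, require Running.
    	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, p := range pods.Items {
    		if p.Status.Phase != corev1.PodRunning {
    			fmt.Printf("not running: %s (%s)\n", p.Name, p.Status.Phase)
    		}
    	}
    	fmt.Printf("%d kube-system pods checked\n", len(pods.Items))
    }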
	
	
	==> Docker <==
	May 01 02:59:19 ha-136200 dockerd[1335]: time="2024-05-01T02:59:19.812581962Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 02:59:19 ha-136200 dockerd[1335]: time="2024-05-01T02:59:19.812601063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 02:59:19 ha-136200 dockerd[1335]: time="2024-05-01T02:59:19.813284867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 02:59:20 ha-136200 cri-dockerd[1232]: time="2024-05-01T02:59:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c61d49828a30cad795117fa540b839a76d74dc6aaa64f0cc1a3a17e5ca07eff2/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	May 01 02:59:21 ha-136200 cri-dockerd[1232]: time="2024-05-01T02:59:21Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	May 01 02:59:21 ha-136200 dockerd[1335]: time="2024-05-01T02:59:21.649291489Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 02:59:21 ha-136200 dockerd[1335]: time="2024-05-01T02:59:21.649563690Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 02:59:21 ha-136200 dockerd[1335]: time="2024-05-01T02:59:21.649688091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 02:59:21 ha-136200 dockerd[1335]: time="2024-05-01T02:59:21.649852992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 03:00:25 ha-136200 dockerd[1329]: 2024/05/01 03:00:25 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:00:25 ha-136200 dockerd[1329]: 2024/05/01 03:00:25 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:00:25 ha-136200 dockerd[1329]: 2024/05/01 03:00:25 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:00:25 ha-136200 dockerd[1329]: 2024/05/01 03:00:25 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:00:25 ha-136200 dockerd[1329]: 2024/05/01 03:00:25 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:00:25 ha-136200 dockerd[1329]: 2024/05/01 03:00:25 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:00:25 ha-136200 dockerd[1329]: 2024/05/01 03:00:25 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:00:25 ha-136200 dockerd[1329]: 2024/05/01 03:00:25 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:04:47 ha-136200 dockerd[1329]: 2024/05/01 03:04:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:04:47 ha-136200 dockerd[1329]: 2024/05/01 03:04:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:04:47 ha-136200 dockerd[1329]: 2024/05/01 03:04:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:04:47 ha-136200 dockerd[1329]: 2024/05/01 03:04:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:04:47 ha-136200 dockerd[1329]: 2024/05/01 03:04:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:04:48 ha-136200 dockerd[1329]: 2024/05/01 03:04:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:04:48 ha-136200 dockerd[1329]: 2024/05/01 03:04:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:04:48 ha-136200 dockerd[1329]: 2024/05/01 03:04:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bb23816e7b6b8       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   7 minutes ago       Running             busybox                   0                   c61d49828a30c       busybox-fc5497c4f-6mlkh
	229343dc7dba5       cbb01a7bd410d                                                                                         15 minutes ago      Running             coredns                   0                   54bbf0662d422       coredns-7db6d8ff4d-rm4gm
	247f815bf0531       6e38f40d628db                                                                                         15 minutes ago      Running             storage-provisioner       0                   aaa3d1f50041e       storage-provisioner
	822aaf8c270e3       cbb01a7bd410d                                                                                         15 minutes ago      Running             coredns                   0                   cadf8314e6ab7       coredns-7db6d8ff4d-2j8mj
	c09511b7df643       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              15 minutes ago      Running             kindnet-cni               0                   bdd01e6cca1ed       kindnet-sj2rc
	562cd55ab9702       a0bf559e280cf                                                                                         15 minutes ago      Running             kube-proxy                0                   579e0dba427c2       kube-proxy-8f67k
	1c063bfe224cd       ghcr.io/kube-vip/kube-vip@sha256:82698885b3b5f926cd940b7000549f3d43850cb6565a708162900c1475a83016     16 minutes ago      Running             kube-vip                  0                   7f28f99b3c8a8       kube-vip-ha-136200
	b6454ceb34cad       259c8277fcbbc                                                                                         16 minutes ago      Running             kube-scheduler            0                   e6cf1f3e651b3       kube-scheduler-ha-136200
	8ff4bf0570939       c42f13656d0b2                                                                                         16 minutes ago      Running             kube-apiserver            0                   2455e947d4906       kube-apiserver-ha-136200
	8fa3aa565b366       c7aad43836fa5                                                                                         16 minutes ago      Running             kube-controller-manager   0                   c7e42fd34e247       kube-controller-manager-ha-136200
	8b0d01885db55       3861cfcd7c04c                                                                                         16 minutes ago      Running             etcd                      0                   da46759fd8e15       etcd-ha-136200
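In the table above, ATTEMPT tracks the restart count for each container and POD ID is the sandbox the container runs in. A hedged client-go sketch that reads the same restart counts from the API server rather than from the container runtime:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)
    	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, p := range pods.Items {
    		for _, cs := range p.Status.ContainerStatuses {
    			// RestartCount corresponds to the ATTEMPT column above.
    			fmt.Printf("%-45s %s restarts=%d\n", p.Name, cs.Name, cs.RestartCount)
    		}
    	}
    }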
	
	
	==> coredns [229343dc7dba] <==
	[INFO] 10.244.1.2:38893 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.138771945s
	[INFO] 10.244.1.2:42460 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000276501s
	[INFO] 10.244.1.2:46275 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.0000672s
	[INFO] 10.244.2.2:34687 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.040099987s
	[INFO] 10.244.2.2:56378 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000284202s
	[INFO] 10.244.2.2:56092 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000345802s
	[INFO] 10.244.2.2:52745 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000349302s
	[INFO] 10.244.2.2:34736 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000095201s
	[INFO] 10.244.0.4:51567 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000267102s
	[INFO] 10.244.0.4:33148 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000178701s
	[INFO] 10.244.1.2:43398 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000089301s
	[INFO] 10.244.1.2:52211 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001122s
	[INFO] 10.244.1.2:35470 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.013228661s
	[INFO] 10.244.1.2:40781 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000174701s
	[INFO] 10.244.1.2:45257 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000274201s
	[INFO] 10.244.1.2:36114 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000165601s
	[INFO] 10.244.2.2:56600 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000371102s
	[INFO] 10.244.2.2:39742 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000250502s
	[INFO] 10.244.0.4:45933 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116901s
	[INFO] 10.244.0.4:53681 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000082001s
	[INFO] 10.244.2.2:38830 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000232701s
	[INFO] 10.244.0.4:51196 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.001489507s
	[INFO] 10.244.0.4:58773 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000264301s
	[INFO] 10.244.0.4:43314 - 5 "PTR IN 1.208.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.013461063s
	[INFO] 10.244.1.2:41778 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000092301s
	
	
	==> coredns [822aaf8c270e] <==
	[INFO] 10.244.2.2:41813 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000217501s
	[INFO] 10.244.2.2:54888 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.032885853s
	[INFO] 10.244.0.4:49712 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126101s
	[INFO] 10.244.0.4:55974 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012564658s
	[INFO] 10.244.0.4:45253 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000139901s
	[INFO] 10.244.0.4:60045 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001515s
	[INFO] 10.244.0.4:39879 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000175501s
	[INFO] 10.244.0.4:42089 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000310501s
	[INFO] 10.244.1.2:53821 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111101s
	[INFO] 10.244.1.2:42651 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000116201s
	[INFO] 10.244.2.2:34505 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078s
	[INFO] 10.244.2.2:54873 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001606s
	[INFO] 10.244.0.4:60573 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001105s
	[INFO] 10.244.0.4:37086 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000727s
	[INFO] 10.244.1.2:52370 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123901s
	[INFO] 10.244.1.2:35190 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000277501s
	[INFO] 10.244.1.2:42611 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000158301s
	[INFO] 10.244.1.2:36993 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000280201s
	[INFO] 10.244.2.2:52181 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000206701s
	[INFO] 10.244.2.2:37229 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000092101s
	[INFO] 10.244.2.2:56027 - 5 "PTR IN 1.208.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001251s
	[INFO] 10.244.0.4:55246 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000211601s
	[INFO] 10.244.1.2:57784 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000270801s
	[INFO] 10.244.1.2:39482 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001183s
	[INFO] 10.244.1.2:53277 - 5 "PTR IN 1.208.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000078801s
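The query mix above is ordinary resolv.conf search-path expansion: the pod resolv.conf written earlier (search default.svc.cluster.local svc.cluster.local cluster.local, options ndots:5) makes short names like kubernetes.default get tried against each search suffix before the bare name, which is where the NXDOMAIN lookups for kubernetes.default.default.svc.cluster.local come from. A small illustrative sketch of that candidate expansion:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // candidates mimics the ndots rule: a name with fewer than ndots dots
    // is tried against every search domain before being tried as-is.
    func candidates(name string, search []string, ndots int) []string {
    	var out []string
    	if strings.Count(name, ".") < ndots {
    		for _, s := range search {
    			out = append(out, name+"."+s)
    		}
    	}
    	return append(out, name)
    }

    func main() {
    	search := []string{"default.svc.cluster.local", "svc.cluster.local", "cluster.local"}
    	for _, q := range candidates("kubernetes.default", search, 5) {
    		fmt.Println(q) // first candidate matches the NXDOMAIN line in the log
    	}
    }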
	
	
	==> describe nodes <==
	Name:               ha-136200
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-136200
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	                    minikube.k8s.io/name=ha-136200
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_01T02_50_30_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 May 2024 02:50:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-136200
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 May 2024 03:06:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 May 2024 03:04:38 +0000   Wed, 01 May 2024 02:50:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 May 2024 03:04:38 +0000   Wed, 01 May 2024 02:50:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 May 2024 03:04:38 +0000   Wed, 01 May 2024 02:50:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 May 2024 03:04:38 +0000   Wed, 01 May 2024 02:50:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.217.218
	  Hostname:    ha-136200
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 bd5a02b3729c454c81fac1ddb77470ea
	  System UUID:                feb48805-7018-ee45-9dd1-70d50cb8dabe
	  Boot ID:                    f931e3ee-8c2d-4859-8d97-8671a4247530
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-6mlkh              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m23s
	  kube-system                 coredns-7db6d8ff4d-2j8mj             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7db6d8ff4d-rm4gm             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-ha-136200                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-sj2rc                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-136200             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-136200    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-8f67k                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-136200             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-136200                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node ha-136200 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node ha-136200 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node ha-136200 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m                kubelet          Node ha-136200 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m                kubelet          Node ha-136200 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m                kubelet          Node ha-136200 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           16m                node-controller  Node ha-136200 event: Registered Node ha-136200 in Controller
	  Normal  NodeReady                15m                kubelet          Node ha-136200 status is now: NodeReady
	  Normal  RegisteredNode           11m                node-controller  Node ha-136200 event: Registered Node ha-136200 in Controller
	  Normal  RegisteredNode           8m4s               node-controller  Node ha-136200 event: Registered Node ha-136200 in Controller
	
	
	Name:               ha-136200-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-136200-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	                    minikube.k8s.io/name=ha-136200
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_01T02_54_28_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 May 2024 02:54:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-136200-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 May 2024 03:06:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 May 2024 03:04:35 +0000   Wed, 01 May 2024 02:54:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 May 2024 03:04:35 +0000   Wed, 01 May 2024 02:54:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 May 2024 03:04:35 +0000   Wed, 01 May 2024 02:54:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 May 2024 03:04:35 +0000   Wed, 01 May 2024 02:54:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.213.142
	  Hostname:    ha-136200-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 b20b8a63378b4be990a267d65bc5017b
	  System UUID:                f54ef658-ded9-8245-9d86-fec94474eff5
	  Boot ID:                    b6a9b4fd-1abd-4ef4-a3a8-bc0c39ab4624
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-pc6wt                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m23s
	  kube-system                 etcd-ha-136200-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-kb2x4                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-136200-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-136200-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-zj5jv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-136200-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-136200-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  RegisteredNode           12m                node-controller  Node ha-136200-m02 event: Registered Node ha-136200-m02 in Controller
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node ha-136200-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node ha-136200-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node ha-136200-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           11m                node-controller  Node ha-136200-m02 event: Registered Node ha-136200-m02 in Controller
	  Normal  RegisteredNode           8m4s               node-controller  Node ha-136200-m02 event: Registered Node ha-136200-m02 in Controller
	
	
	Name:               ha-136200-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-136200-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	                    minikube.k8s.io/name=ha-136200
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_01T02_58_22_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 May 2024 02:58:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-136200-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 May 2024 03:06:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 May 2024 03:04:51 +0000   Wed, 01 May 2024 02:58:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 May 2024 03:04:51 +0000   Wed, 01 May 2024 02:58:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 May 2024 03:04:51 +0000   Wed, 01 May 2024 02:58:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 May 2024 03:04:51 +0000   Wed, 01 May 2024 02:58:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.216.62
	  Hostname:    ha-136200-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 352997c1e27d48bb8dff5ae5f17e228a
	  System UUID:                0e4a669f-6d5f-be47-a143-5d2db1558741
	  Boot ID:                    8ce378d2-4a7e-40de-aab0-8bc599c3d157
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-2gr4g                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m23s
	  kube-system                 etcd-ha-136200-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m25s
	  kube-system                 kindnet-rlfkk                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m27s
	  kube-system                 kube-apiserver-ha-136200-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m23s
	  kube-system                 kube-controller-manager-ha-136200-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m23s
	  kube-system                 kube-proxy-9ml9x                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m27s
	  kube-system                 kube-scheduler-ha-136200-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m23s
	  kube-system                 kube-vip-ha-136200-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m21s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  8m27s (x8 over 8m27s)  kubelet          Node ha-136200-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m27s (x8 over 8m27s)  kubelet          Node ha-136200-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m27s (x7 over 8m27s)  kubelet          Node ha-136200-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8m25s                  node-controller  Node ha-136200-m03 event: Registered Node ha-136200-m03 in Controller
	  Normal  RegisteredNode           8m22s                  node-controller  Node ha-136200-m03 event: Registered Node ha-136200-m03 in Controller
	  Normal  RegisteredNode           8m4s                   node-controller  Node ha-136200-m03 event: Registered Node ha-136200-m03 in Controller
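The percentages in each Allocated resources block above are pod requests and limits over the node's allocatable values (2 CPUs, 2164264Ki memory), truncated to whole percent. The arithmetic for ha-136200, assuming kubectl-style integer truncation:

    package main

    import "fmt"

    func main() {
    	// Allocatable on ha-136200: 2 CPUs = 2000m, memory = 2164264Ki.
    	const cpuAlloc, memAllocKi = 2000, 2164264
    	fmt.Printf("cpu requests:    %d%%\n", 950*100/cpuAlloc)        // 950m  -> 47%
    	fmt.Printf("cpu limits:      %d%%\n", 100*100/cpuAlloc)        // 100m  -> 5%
    	fmt.Printf("memory requests: %d%%\n", 290*1024*100/memAllocKi) // 290Mi -> 13%
    	fmt.Printf("memory limits:   %d%%\n", 390*1024*100/memAllocKi) // 390Mi -> 18%
    }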
	
	
	==> dmesg <==
	[  +7.445343] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[May 1 02:49] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.218573] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[ +31.318095] systemd-fstab-generator[950]: Ignoring "noauto" option for root device
	[  +0.121878] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.646066] systemd-fstab-generator[989]: Ignoring "noauto" option for root device
	[  +0.241331] systemd-fstab-generator[1001]: Ignoring "noauto" option for root device
	[  +0.276456] systemd-fstab-generator[1015]: Ignoring "noauto" option for root device
	[  +2.872310] systemd-fstab-generator[1184]: Ignoring "noauto" option for root device
	[  +0.245693] systemd-fstab-generator[1196]: Ignoring "noauto" option for root device
	[  +0.234055] systemd-fstab-generator[1209]: Ignoring "noauto" option for root device
	[  +0.318386] systemd-fstab-generator[1224]: Ignoring "noauto" option for root device
	[May 1 02:50] systemd-fstab-generator[1321]: Ignoring "noauto" option for root device
	[  +0.117675] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.894847] systemd-fstab-generator[1526]: Ignoring "noauto" option for root device
	[  +6.744854] systemd-fstab-generator[1728]: Ignoring "noauto" option for root device
	[  +0.118239] kauditd_printk_skb: 73 callbacks suppressed
	[  +6.246999] kauditd_printk_skb: 67 callbacks suppressed
	[  +4.464074] systemd-fstab-generator[2223]: Ignoring "noauto" option for root device
	[ +14.473066] kauditd_printk_skb: 17 callbacks suppressed
	[  +7.151247] kauditd_printk_skb: 29 callbacks suppressed
	[May 1 02:54] kauditd_printk_skb: 26 callbacks suppressed
	[May 1 03:02] hrtimer: interrupt took 2691714 ns
	
	
	==> etcd [8b0d01885db5] <==
	{"level":"warn","ts":"2024-05-01T02:58:27.32276Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"e80b4c0e2412e141","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"53.82673ms"}
	{"level":"warn","ts":"2024-05-01T02:58:27.322905Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"477eb305d8136a0f","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"53.975031ms"}
	{"level":"info","ts":"2024-05-01T02:58:27.32416Z","caller":"traceutil/trace.go:171","msg":"trace[1054755025] linearizableReadLoop","detail":"{readStateIndex:1749; appliedIndex:1750; }","duration":"179.427394ms","start":"2024-05-01T02:58:27.144718Z","end":"2024-05-01T02:58:27.324146Z","steps":["trace[1054755025] 'read index received'  (duration: 179.423494ms)","trace[1054755025] 'applied index is now lower than readState.Index'  (duration: 2.9µs)"],"step_count":2}
	{"level":"warn","ts":"2024-05-01T02:58:27.324463Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"179.798696ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-136200-m03\" ","response":"range_response_count:1 size:4442"}
	{"level":"info","ts":"2024-05-01T02:58:27.325782Z","caller":"traceutil/trace.go:171","msg":"trace[1458868258] range","detail":"{range_begin:/registry/minions/ha-136200-m03; range_end:; response_count:1; response_revision:1575; }","duration":"181.205807ms","start":"2024-05-01T02:58:27.144565Z","end":"2024-05-01T02:58:27.325771Z","steps":["trace[1458868258] 'agreement among raft nodes before linearized reading'  (duration: 179.804097ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T02:58:27.325805Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.295259ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-01T02:58:27.327416Z","caller":"traceutil/trace.go:171","msg":"trace[1620131110] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1575; }","duration":"106.638269ms","start":"2024-05-01T02:58:27.220472Z","end":"2024-05-01T02:58:27.32711Z","steps":["trace[1620131110] 'agreement among raft nodes before linearized reading'  (duration: 105.303859ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T02:58:28.207615Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"227.283539ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/172.28.217.218\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-05-01T02:58:28.20815Z","caller":"traceutil/trace.go:171","msg":"trace[526707853] range","detail":"{range_begin:/registry/masterleases/172.28.217.218; range_end:; response_count:1; response_revision:1578; }","duration":"227.827942ms","start":"2024-05-01T02:58:27.980307Z","end":"2024-05-01T02:58:28.208135Z","steps":["trace[526707853] 'range keys from in-memory index tree'  (duration: 226.16143ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-01T02:58:33.155687Z","caller":"traceutil/trace.go:171","msg":"trace[822609576] linearizableReadLoop","detail":"{readStateIndex:1773; appliedIndex:1773; }","duration":"127.106614ms","start":"2024-05-01T02:58:33.028561Z","end":"2024-05-01T02:58:33.155667Z","steps":["trace[822609576] 'read index received'  (duration: 127.096113ms)","trace[822609576] 'applied index is now lower than readState.Index'  (duration: 3.201µs)"],"step_count":2}
	{"level":"info","ts":"2024-05-01T02:58:33.156309Z","caller":"traceutil/trace.go:171","msg":"trace[2144601308] transaction","detail":"{read_only:false; response_revision:1595; number_of_response:1; }","duration":"161.212759ms","start":"2024-05-01T02:58:32.995083Z","end":"2024-05-01T02:58:33.156296Z","steps":["trace[2144601308] 'process raft request'  (duration: 161.011858ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T02:58:33.156653Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.070121ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true ","response":"range_response_count:0 size:8"}
	{"level":"info","ts":"2024-05-01T02:58:33.156711Z","caller":"traceutil/trace.go:171","msg":"trace[302833371] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:0; response_revision:1595; }","duration":"128.172822ms","start":"2024-05-01T02:58:33.02853Z","end":"2024-05-01T02:58:33.156702Z","steps":["trace[302833371] 'agreement among raft nodes before linearized reading'  (duration: 127.786619ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T02:58:33.264542Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.338328ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-ha-136200-m03\" ","response":"range_response_count:1 size:4512"}
	{"level":"info","ts":"2024-05-01T02:58:33.264603Z","caller":"traceutil/trace.go:171","msg":"trace[1479493783] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-ha-136200-m03; range_end:; response_count:1; response_revision:1595; }","duration":"101.45723ms","start":"2024-05-01T02:58:33.163133Z","end":"2024-05-01T02:58:33.26459Z","steps":["trace[1479493783] 'agreement among raft nodes before linearized reading'  (duration: 89.079641ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-01T03:00:22.770623Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1078}
	{"level":"info","ts":"2024-05-01T03:00:22.882389Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1078,"took":"110.812232ms","hash":3849218282,"current-db-size-bytes":3649536,"current-db-size":"3.6 MB","current-db-size-in-use-bytes":2129920,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-05-01T03:00:22.882504Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3849218282,"revision":1078,"compact-revision":-1}
	{"level":"info","ts":"2024-05-01T03:01:04.916293Z","caller":"traceutil/trace.go:171","msg":"trace[1983744639] transaction","detail":"{read_only:false; response_revision:2081; number_of_response:1; }","duration":"115.484567ms","start":"2024-05-01T03:01:04.80079Z","end":"2024-05-01T03:01:04.916275Z","steps":["trace[1983744639] 'process raft request'  (duration: 115.357067ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-01T03:02:03.35618Z","caller":"traceutil/trace.go:171","msg":"trace[1139546375] linearizableReadLoop","detail":"{readStateIndex:2579; appliedIndex:2579; }","duration":"135.951986ms","start":"2024-05-01T03:02:03.220209Z","end":"2024-05-01T03:02:03.356161Z","steps":["trace[1139546375] 'read index received'  (duration: 135.946186ms)","trace[1139546375] 'applied index is now lower than readState.Index'  (duration: 4.2µs)"],"step_count":2}
	{"level":"warn","ts":"2024-05-01T03:02:03.356787Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.278387ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-01T03:02:03.356854Z","caller":"traceutil/trace.go:171","msg":"trace[254823889] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2219; }","duration":"136.661889ms","start":"2024-05-01T03:02:03.220181Z","end":"2024-05-01T03:02:03.356843Z","steps":["trace[254823889] 'agreement among raft nodes before linearized reading'  (duration: 136.253587ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-01T03:05:22.799248Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1982}
	{"level":"info","ts":"2024-05-01T03:05:22.850517Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1982,"took":"50.469373ms","hash":172517741,"current-db-size-bytes":3649536,"current-db-size":"3.6 MB","current-db-size-in-use-bytes":2031616,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2024-05-01T03:05:22.850635Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":172517741,"revision":1982,"compact-revision":1078}
	
	
	==> kernel <==
	 03:06:41 up 18 min,  0 users,  load average: 0.14, 0.27, 0.26
	Linux ha-136200 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [c09511b7df64] <==
	I0501 03:05:52.791173       1 main.go:250] Node ha-136200-m03 has CIDR [10.244.2.0/24] 
	I0501 03:06:02.808090       1 main.go:223] Handling node with IPs: map[172.28.217.218:{}]
	I0501 03:06:02.808120       1 main.go:227] handling current node
	I0501 03:06:02.808133       1 main.go:223] Handling node with IPs: map[172.28.213.142:{}]
	I0501 03:06:02.808141       1 main.go:250] Node ha-136200-m02 has CIDR [10.244.1.0/24] 
	I0501 03:06:02.813888       1 main.go:223] Handling node with IPs: map[172.28.216.62:{}]
	I0501 03:06:02.813982       1 main.go:250] Node ha-136200-m03 has CIDR [10.244.2.0/24] 
	I0501 03:06:12.822122       1 main.go:223] Handling node with IPs: map[172.28.217.218:{}]
	I0501 03:06:12.822168       1 main.go:227] handling current node
	I0501 03:06:12.822181       1 main.go:223] Handling node with IPs: map[172.28.213.142:{}]
	I0501 03:06:12.822189       1 main.go:250] Node ha-136200-m02 has CIDR [10.244.1.0/24] 
	I0501 03:06:12.822572       1 main.go:223] Handling node with IPs: map[172.28.216.62:{}]
	I0501 03:06:12.822593       1 main.go:250] Node ha-136200-m03 has CIDR [10.244.2.0/24] 
	I0501 03:06:22.835550       1 main.go:223] Handling node with IPs: map[172.28.217.218:{}]
	I0501 03:06:22.835667       1 main.go:227] handling current node
	I0501 03:06:22.835687       1 main.go:223] Handling node with IPs: map[172.28.213.142:{}]
	I0501 03:06:22.836445       1 main.go:250] Node ha-136200-m02 has CIDR [10.244.1.0/24] 
	I0501 03:06:22.837191       1 main.go:223] Handling node with IPs: map[172.28.216.62:{}]
	I0501 03:06:22.837224       1 main.go:250] Node ha-136200-m03 has CIDR [10.244.2.0/24] 
	I0501 03:06:32.846853       1 main.go:223] Handling node with IPs: map[172.28.217.218:{}]
	I0501 03:06:32.846900       1 main.go:227] handling current node
	I0501 03:06:32.846913       1 main.go:223] Handling node with IPs: map[172.28.213.142:{}]
	I0501 03:06:32.846921       1 main.go:250] Node ha-136200-m02 has CIDR [10.244.1.0/24] 
	I0501 03:06:32.847466       1 main.go:223] Handling node with IPs: map[172.28.216.62:{}]
	I0501 03:06:32.847572       1 main.go:250] Node ha-136200-m03 has CIDR [10.244.2.0/24] 
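Each ten-second pass above walks the node list and, for every remote node, keeps a route to that node's PodCIDR via its InternalIP. A hypothetical rendering of the routes implied by this cluster's log (values taken from the lines above):

    package main

    import "fmt"

    func main() {
    	// Remote PodCIDR -> node InternalIP, per the kindnet log.
    	routes := map[string]string{
    		"10.244.1.0/24": "172.28.213.142", // ha-136200-m02
    		"10.244.2.0/24": "172.28.216.62",  // ha-136200-m03
    	}
    	for cidr, via := range routes {
    		fmt.Printf("ip route replace %s via %s\n", cidr, via)
    	}
    }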
	
	
	==> kube-apiserver [8ff4bf057093] <==
	Trace[670363995]: [511.709143ms] [511.709143ms] END
	I0501 02:54:22.977601       1 trace.go:236] Trace[1452834138]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:f62db0d2-4e8e-4640-9a4d-0aa19a03aa34,client:172.28.213.142,api-group:storage.k8s.io,api-version:v1,name:,subresource:,namespace:,protocol:HTTP/2.0,resource:csinodes,scope:resource,url:/apis/storage.k8s.io/v1/csinodes,user-agent:kubelet/v1.30.0 (linux/amd64) kubernetes/7c48c2b,verb:POST (01-May-2024 02:54:22.472) (total time: 504ms):
	Trace[1452834138]: ["Create etcd3" audit-id:f62db0d2-4e8e-4640-9a4d-0aa19a03aa34,key:/csinodes/ha-136200-m02,type:*storage.CSINode,resource:csinodes.storage.k8s.io 504ms (02:54:22.473)
	Trace[1452834138]:  ---"Txn call succeeded" 503ms (02:54:22.977)]
	Trace[1452834138]: [504.731076ms] [504.731076ms] END
	E0501 02:58:15.730056       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0501 02:58:15.730169       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0501 02:58:15.730071       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 11.2µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0501 02:58:15.731583       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0501 02:58:15.732500       1 timeout.go:142] post-timeout activity - time-elapsed: 2.647619ms, PATCH "/api/v1/namespaces/default/events/ha-136200-m03.17cb3e09c56bb983" result: <nil>
	E0501 02:59:25.456065       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61414: use of closed network connection
	E0501 02:59:26.016855       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61416: use of closed network connection
	E0501 02:59:26.743048       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61418: use of closed network connection
	E0501 02:59:27.423392       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61421: use of closed network connection
	E0501 02:59:28.036056       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61423: use of closed network connection
	E0501 02:59:28.618704       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61425: use of closed network connection
	E0501 02:59:29.166283       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61427: use of closed network connection
	E0501 02:59:29.771114       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61429: use of closed network connection
	E0501 02:59:30.328866       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61431: use of closed network connection
	E0501 02:59:31.360058       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61434: use of closed network connection
	E0501 02:59:41.926438       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61436: use of closed network connection
	E0501 02:59:42.497809       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61439: use of closed network connection
	E0501 02:59:53.089743       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61441: use of closed network connection
	E0501 02:59:53.660135       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61443: use of closed network connection
	E0501 03:00:04.225188       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61445: use of closed network connection
	
	
	==> kube-controller-manager [8fa3aa565b36] <==
	I0501 02:50:56.182254       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="74.9µs"
	I0501 02:50:56.871742       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0501 02:50:58.734842       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="91.702µs"
	I0501 02:50:58.815553       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="27.110569ms"
	I0501 02:50:58.817069       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="234.005µs"
	I0501 02:50:58.859853       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="29.315916ms"
	I0501 02:50:58.862248       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="191.304µs"
	I0501 02:54:21.439127       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-136200-m02\" does not exist"
	I0501 02:54:21.501101       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-136200-m02" podCIDRs=["10.244.1.0/24"]
	I0501 02:54:21.914883       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-136200-m02"
	I0501 02:58:14.901209       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-136200-m03\" does not exist"
	I0501 02:58:14.933592       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-136200-m03" podCIDRs=["10.244.2.0/24"]
	I0501 02:58:16.990389       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-136200-m03"
	I0501 02:59:18.914466       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="150.158562ms"
	I0501 02:59:19.095324       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="180.785078ms"
	I0501 02:59:19.461767       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="365.331283ms"
	I0501 02:59:19.490263       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.541695ms"
	I0501 02:59:19.490899       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="65.9µs"
	I0501 02:59:21.446166       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.9µs"
	I0501 02:59:21.996495       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.097772ms"
	I0501 02:59:21.997082       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="185.301µs"
	I0501 02:59:22.122170       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="78.415164ms"
	I0501 02:59:22.122332       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.3µs"
	I0501 02:59:22.485058       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="83.861489ms"
	I0501 02:59:22.485150       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.8µs"
	
	
	==> kube-proxy [562cd55ab970] <==
	I0501 02:50:44.069527       1 server_linux.go:69] "Using iptables proxy"
	I0501 02:50:44.111745       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.28.217.218"]
	I0501 02:50:44.171562       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0501 02:50:44.171703       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0501 02:50:44.171730       1 server_linux.go:165] "Using iptables Proxier"
	I0501 02:50:44.178320       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0501 02:50:44.180232       1 server.go:872] "Version info" version="v1.30.0"
	I0501 02:50:44.180271       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 02:50:44.184544       1 config.go:192] "Starting service config controller"
	I0501 02:50:44.185913       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0501 02:50:44.186319       1 config.go:101] "Starting endpoint slice config controller"
	I0501 02:50:44.186555       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0501 02:50:44.189915       1 config.go:319] "Starting node config controller"
	I0501 02:50:44.190110       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0501 02:50:44.287624       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0501 02:50:44.287761       1 shared_informer.go:320] Caches are synced for service config
	I0501 02:50:44.290292       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [b6454ceb34ca] <==
	W0501 02:50:26.797411       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0501 02:50:26.797624       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0501 02:50:26.830216       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0501 02:50:26.830267       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0501 02:50:26.925545       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0501 02:50:26.925605       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0501 02:50:26.948130       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0501 02:50:26.948245       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0501 02:50:27.027771       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0501 02:50:27.028119       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0501 02:50:27.045542       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0501 02:50:27.045577       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0501 02:50:27.049002       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0501 02:50:27.049031       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0501 02:50:30.148462       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0501 02:59:18.858485       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-pc6wt\": pod busybox-fc5497c4f-pc6wt is already assigned to node \"ha-136200-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-pc6wt" node="ha-136200-m03"
	E0501 02:59:18.859227       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-pc6wt\": pod busybox-fc5497c4f-pc6wt is already assigned to node \"ha-136200-m02\"" pod="default/busybox-fc5497c4f-pc6wt"
	E0501 02:59:18.932248       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-6mlkh\": pod busybox-fc5497c4f-6mlkh is already assigned to node \"ha-136200\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-6mlkh" node="ha-136200"
	E0501 02:59:18.932355       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 10f52d20-5605-40b5-8875-ceb0cb5c2e53(default/busybox-fc5497c4f-6mlkh) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-6mlkh"
	E0501 02:59:18.932383       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-6mlkh\": pod busybox-fc5497c4f-6mlkh is already assigned to node \"ha-136200\"" pod="default/busybox-fc5497c4f-6mlkh"
	I0501 02:59:18.932412       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-6mlkh" node="ha-136200"
	E0501 02:59:18.934021       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-2gr4g\": pod busybox-fc5497c4f-2gr4g is already assigned to node \"ha-136200-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-2gr4g" node="ha-136200-m03"
	E0501 02:59:18.934194       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod b6febdff-c378-4d33-94ae-8b321071f921(default/busybox-fc5497c4f-2gr4g) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-2gr4g"
	E0501 02:59:18.934386       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-2gr4g\": pod busybox-fc5497c4f-2gr4g is already assigned to node \"ha-136200-m03\"" pod="default/busybox-fc5497c4f-2gr4g"
	I0501 02:59:18.937753       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-2gr4g" node="ha-136200-m03"
	
	
	==> kubelet <==
	May 01 03:02:29 ha-136200 kubelet[2230]: E0501 03:02:29.306486    2230 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 03:02:29 ha-136200 kubelet[2230]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 03:02:29 ha-136200 kubelet[2230]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 03:02:29 ha-136200 kubelet[2230]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 03:02:29 ha-136200 kubelet[2230]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 03:03:29 ha-136200 kubelet[2230]: E0501 03:03:29.307664    2230 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 03:03:29 ha-136200 kubelet[2230]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 03:03:29 ha-136200 kubelet[2230]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 03:03:29 ha-136200 kubelet[2230]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 03:03:29 ha-136200 kubelet[2230]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 03:04:29 ha-136200 kubelet[2230]: E0501 03:04:29.306136    2230 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 03:04:29 ha-136200 kubelet[2230]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 03:04:29 ha-136200 kubelet[2230]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 03:04:29 ha-136200 kubelet[2230]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 03:04:29 ha-136200 kubelet[2230]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 03:05:29 ha-136200 kubelet[2230]: E0501 03:05:29.306156    2230 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 03:05:29 ha-136200 kubelet[2230]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 03:05:29 ha-136200 kubelet[2230]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 03:05:29 ha-136200 kubelet[2230]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 03:05:29 ha-136200 kubelet[2230]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 03:06:29 ha-136200 kubelet[2230]: E0501 03:06:29.306327    2230 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 03:06:29 ha-136200 kubelet[2230]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 03:06:29 ha-136200 kubelet[2230]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 03:06:29 ha-136200 kubelet[2230]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 03:06:29 ha-136200 kubelet[2230]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0501 03:06:33.543590    9964 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-136200 -n ha-136200
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-136200 -n ha-136200: (12.5940759s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-136200 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/CopyFile FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/CopyFile (84.93s)
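Note on the recurring stderr warning: every minikube invocation above logs "Unable to resolve the current Docker CLI context \"default\"" because the context's metadata file is missing on the Jenkins host. The path in the warning follows the Docker CLI's on-disk layout, where each context's metadata lives under .docker\contexts\meta\<sha256 of the context name>\meta.json (the digest 37a8eec1... in the warning is sha256 of the string "default"). A minimal Go sketch of how that path is derived, using only the standard library — the helper name is hypothetical, not minikube or docker/cli code:

	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"path/filepath"
	)

	// dockerContextMetaPath is a hypothetical helper reproducing the Docker CLI
	// convention of storing context metadata under
	// <home>/.docker/contexts/meta/<hex(sha256(contextName))>/meta.json.
	func dockerContextMetaPath(home, contextName string) string {
		sum := sha256.Sum256([]byte(contextName))
		return filepath.Join(home, ".docker", "contexts", "meta",
			hex.EncodeToString(sum[:]), "meta.json")
	}

	func main() {
		// For "default" this yields a path ending in
		// 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json,
		// the same file the warning reports as missing.
		fmt.Println(dockerContextMetaPath(`C:\Users\jenkins.minikube6`, "default"))
	}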

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (111.02s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-136200 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-windows-amd64.exe -p ha-136200 node stop m02 -v=7 --alsologtostderr: (35.7133196s)
ha_test.go:369: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-136200 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-136200 status -v=7 --alsologtostderr: exit status 7 (38.732505s)

                                                
                                                
-- stdout --
	ha-136200
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-136200-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-136200-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-136200-m04
	type: Worker
	host: Running
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0501 03:07:32.570889    3812 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0501 03:07:32.663736    3812 out.go:291] Setting OutFile to fd 1004 ...
	I0501 03:07:32.664729    3812 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:07:32.664729    3812 out.go:304] Setting ErrFile to fd 772...
	I0501 03:07:32.664729    3812 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:07:32.681103    3812 out.go:298] Setting JSON to false
	I0501 03:07:32.681103    3812 mustload.go:65] Loading cluster: ha-136200
	I0501 03:07:32.681103    3812 notify.go:220] Checking for updates...
	I0501 03:07:32.682079    3812 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 03:07:32.682079    3812 status.go:255] checking status of ha-136200 ...
	I0501 03:07:32.683594    3812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 03:07:34.919138    3812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:07:34.919138    3812 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:07:34.919366    3812 status.go:330] ha-136200 host status = "Running" (err=<nil>)
	I0501 03:07:34.919418    3812 host.go:66] Checking if "ha-136200" exists ...
	I0501 03:07:34.920236    3812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 03:07:37.164918    3812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:07:37.164918    3812 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:07:37.164918    3812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 03:07:39.894119    3812 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 03:07:39.894655    3812 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:07:39.894655    3812 host.go:66] Checking if "ha-136200" exists ...
	I0501 03:07:39.912719    3812 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 03:07:39.912719    3812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 03:07:42.093051    3812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:07:42.093289    3812 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:07:42.093366    3812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 03:07:44.736905    3812 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 03:07:44.736905    3812 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:07:44.736905    3812 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 03:07:44.850248    3812 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.9374918s)
	I0501 03:07:44.867784    3812 ssh_runner.go:195] Run: systemctl --version
	I0501 03:07:44.893091    3812 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:07:44.922609    3812 kubeconfig.go:125] found "ha-136200" server: "https://172.28.223.254:8443"
	I0501 03:07:44.922697    3812 api_server.go:166] Checking apiserver status ...
	I0501 03:07:44.936034    3812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:07:44.986848    3812 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2105/cgroup
	W0501 03:07:45.006485    3812 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2105/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0501 03:07:45.020669    3812 ssh_runner.go:195] Run: ls
	I0501 03:07:45.029312    3812 api_server.go:253] Checking apiserver healthz at https://172.28.223.254:8443/healthz ...
	I0501 03:07:45.036703    3812 api_server.go:279] https://172.28.223.254:8443/healthz returned 200:
	ok
	I0501 03:07:45.036703    3812 status.go:422] ha-136200 apiserver status = Running (err=<nil>)
	I0501 03:07:45.037000    3812 status.go:257] ha-136200 status: &{Name:ha-136200 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0501 03:07:45.037000    3812 status.go:255] checking status of ha-136200-m02 ...
	I0501 03:07:45.037824    3812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 03:07:47.197947    3812 main.go:141] libmachine: [stdout =====>] : Off
	
	I0501 03:07:47.197947    3812 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:07:47.197947    3812 status.go:330] ha-136200-m02 host status = "Stopped" (err=<nil>)
	I0501 03:07:47.197947    3812 status.go:343] host is not running, skipping remaining checks
	I0501 03:07:47.197947    3812 status.go:257] ha-136200-m02 status: &{Name:ha-136200-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0501 03:07:47.197947    3812 status.go:255] checking status of ha-136200-m03 ...
	I0501 03:07:47.199257    3812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 03:07:49.446685    3812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:07:49.446685    3812 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:07:49.446685    3812 status.go:330] ha-136200-m03 host status = "Running" (err=<nil>)
	I0501 03:07:49.446901    3812 host.go:66] Checking if "ha-136200-m03" exists ...
	I0501 03:07:49.447650    3812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 03:07:51.640709    3812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:07:51.641394    3812 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:07:51.641394    3812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 03:07:54.259028    3812 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 03:07:54.259028    3812 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:07:54.259028    3812 host.go:66] Checking if "ha-136200-m03" exists ...
	I0501 03:07:54.276800    3812 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 03:07:54.277366    3812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 03:07:56.409959    3812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:07:56.409959    3812 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:07:56.409959    3812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 03:07:59.001671    3812 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 03:07:59.001671    3812 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:07:59.001822    3812 sshutil.go:53] new ssh client: &{IP:172.28.216.62 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\id_rsa Username:docker}
	I0501 03:07:59.108070    3812 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.8312341s)
	I0501 03:07:59.121213    3812 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:07:59.149723    3812 kubeconfig.go:125] found "ha-136200" server: "https://172.28.223.254:8443"
	I0501 03:07:59.149723    3812 api_server.go:166] Checking apiserver status ...
	I0501 03:07:59.164244    3812 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:07:59.207556    3812 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2199/cgroup
	W0501 03:07:59.231420    3812 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2199/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0501 03:07:59.246051    3812 ssh_runner.go:195] Run: ls
	I0501 03:07:59.253709    3812 api_server.go:253] Checking apiserver healthz at https://172.28.223.254:8443/healthz ...
	I0501 03:07:59.262198    3812 api_server.go:279] https://172.28.223.254:8443/healthz returned 200:
	ok
	I0501 03:07:59.262198    3812 status.go:422] ha-136200-m03 apiserver status = Running (err=<nil>)
	I0501 03:07:59.262198    3812 status.go:257] ha-136200-m03 status: &{Name:ha-136200-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0501 03:07:59.262198    3812 status.go:255] checking status of ha-136200-m04 ...
	I0501 03:07:59.263133    3812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m04 ).state
	I0501 03:08:01.414506    3812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:08:01.415218    3812 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:08:01.415218    3812 status.go:330] ha-136200-m04 host status = "Running" (err=<nil>)
	I0501 03:08:01.415331    3812 host.go:66] Checking if "ha-136200-m04" exists ...
	I0501 03:08:01.416105    3812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m04 ).state
	I0501 03:08:03.596587    3812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:08:03.597163    3812 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:08:03.597163    3812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m04 ).networkadapters[0]).ipaddresses[0]
	I0501 03:08:06.204468    3812 main.go:141] libmachine: [stdout =====>] : 172.28.217.174
	
	I0501 03:08:06.204468    3812 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:08:06.204586    3812 host.go:66] Checking if "ha-136200-m04" exists ...
	I0501 03:08:06.220361    3812 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 03:08:06.220361    3812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m04 ).state
	I0501 03:08:08.369870    3812 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:08:08.369870    3812 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:08:08.369955    3812 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m04 ).networkadapters[0]).ipaddresses[0]
	I0501 03:08:11.002919    3812 main.go:141] libmachine: [stdout =====>] : 172.28.217.174
	
	I0501 03:08:11.002919    3812 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:08:11.003596    3812 sshutil.go:53] new ssh client: &{IP:172.28.217.174 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m04\id_rsa Username:docker}
	I0501 03:08:11.110175    3812 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.8897775s)
	I0501 03:08:11.124341    3812 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:08:11.149772    3812 status.go:257] ha-136200-m04 status: &{Name:ha-136200-m04 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:381: status says not three kubelets are running: args "out/minikube-windows-amd64.exe -p ha-136200 status -v=7 --alsologtostderr": ha-136200
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-136200-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-136200-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-136200-m04
type: Worker
host: Running
kubelet: Stopped
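
The assertion at ha_test.go:381 is a simple count: after deliberately stopping m02, the test expects the three remaining nodes (ha-136200, m03, m04) to report a running kubelet, but the m04 worker's kubelet is also down, so only two stanzas read "kubelet: Running". A minimal sketch of that kind of check over the status output, assuming only the standard library (the helper below is illustrative, not the literal ha_test.go code):

	package main

	import (
		"fmt"
		"strings"
	)

	// countRunningKubelets is a hypothetical helper mirroring the check behind
	// the failure message: it counts how many node stanzas in `minikube status`
	// output report "kubelet: Running".
	func countRunningKubelets(statusOut string) int {
		n := 0
		for _, line := range strings.Split(statusOut, "\n") {
			if strings.TrimSpace(line) == "kubelet: Running" {
				n++
			}
		}
		return n
	}

	func main() {
		status := "ha-136200\nkubelet: Running\n\nha-136200-m02\nkubelet: Stopped\n\n" +
			"ha-136200-m03\nkubelet: Running\n\nha-136200-m04\nkubelet: Stopped\n"
		if got := countRunningKubelets(status); got != 3 {
			// With m04's kubelet stopped this prints got 2, matching the test failure.
			fmt.Printf("status says not three kubelets are running: got %d\n", got)
		}
	}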

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-136200 -n ha-136200
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-136200 -n ha-136200: (12.6639354s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-136200 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-136200 logs -n 25: (9.1986708s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| image   | functional-869300 image ls           | functional-869300 | minikube6\jenkins | v1.33.0 | 01 May 24 02:42 UTC | 01 May 24 02:42 UTC |
	| delete  | -p functional-869300                 | functional-869300 | minikube6\jenkins | v1.33.0 | 01 May 24 02:46 UTC | 01 May 24 02:47 UTC |
	| start   | -p ha-136200 --wait=true             | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:47 UTC | 01 May 24 02:58 UTC |
	|         | --memory=2200 --ha                   |                   |                   |         |                     |                     |
	|         | -v=7 --alsologtostderr               |                   |                   |         |                     |                     |
	|         | --driver=hyperv                      |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- apply -f             | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | ./testdata/ha/ha-pod-dns-test.yaml   |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- rollout status       | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | deployment/busybox                   |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- get pods -o          | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- get pods -o          | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-2gr4g --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-6mlkh --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-pc6wt --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-2gr4g --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-6mlkh --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-pc6wt --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-2gr4g -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-6mlkh -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-pc6wt -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- get pods -o          | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-2gr4g              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC |                     |
	|         | busybox-fc5497c4f-2gr4g -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.28.208.1            |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-6mlkh              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC |                     |
	|         | busybox-fc5497c4f-6mlkh -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.28.208.1            |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-pc6wt              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC |                     |
	|         | busybox-fc5497c4f-pc6wt -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.28.208.1            |                   |                   |         |                     |                     |
	| node    | add -p ha-136200 -v=7                | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 03:00 UTC |                     |
	|         | --alsologtostderr                    |                   |                   |         |                     |                     |
	| node    | ha-136200 node stop m02 -v=7         | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 03:06 UTC | 01 May 24 03:07 UTC |
	|         | --alsologtostderr                    |                   |                   |         |                     |                     |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/01 02:47:19
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0501 02:47:19.308853    4712 out.go:291] Setting OutFile to fd 968 ...
	I0501 02:47:19.308853    4712 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:47:19.308853    4712 out.go:304] Setting ErrFile to fd 940...
	I0501 02:47:19.308853    4712 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:47:19.335053    4712 out.go:298] Setting JSON to false
	I0501 02:47:19.338050    4712 start.go:129] hostinfo: {"hostname":"minikube6","uptime":104693,"bootTime":1714426945,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0501 02:47:19.338050    4712 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0501 02:47:19.343676    4712 out.go:177] * [ha-136200] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0501 02:47:19.347056    4712 notify.go:220] Checking for updates...
	I0501 02:47:19.349570    4712 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0501 02:47:19.352627    4712 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0501 02:47:19.356010    4712 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0501 02:47:19.359527    4712 out.go:177]   - MINIKUBE_LOCATION=18779
	I0501 02:47:19.364982    4712 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0501 02:47:19.368342    4712 driver.go:392] Setting default libvirt URI to qemu:///system
	I0501 02:47:24.771909    4712 out.go:177] * Using the hyperv driver based on user configuration
	I0501 02:47:24.777503    4712 start.go:297] selected driver: hyperv
	I0501 02:47:24.777503    4712 start.go:901] validating driver "hyperv" against <nil>
	I0501 02:47:24.777503    4712 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0501 02:47:24.830749    4712 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0501 02:47:24.832155    4712 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 02:47:24.832679    4712 cni.go:84] Creating CNI manager for ""
	I0501 02:47:24.832679    4712 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0501 02:47:24.832679    4712 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0501 02:47:24.832944    4712 start.go:340] cluster config:
	{Name:ha-136200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:47:24.832944    4712 iso.go:125] acquiring lock: {Name:mkc5178610d1c169635b8b232f2713c359020679 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 02:47:24.837439    4712 out.go:177] * Starting "ha-136200" primary control-plane node in "ha-136200" cluster
	I0501 02:47:24.839631    4712 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0501 02:47:24.839631    4712 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0501 02:47:24.839631    4712 cache.go:56] Caching tarball of preloaded images
	I0501 02:47:24.840411    4712 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0501 02:47:24.840411    4712 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0501 02:47:24.841147    4712 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json ...
	I0501 02:47:24.841147    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json: {Name:mk622c10e63d8ff69d285ce96c3e57bfbed6a54d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:47:24.842583    4712 start.go:360] acquireMachinesLock for ha-136200: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 02:47:24.842583    4712 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-136200"
	I0501 02:47:24.843334    4712 start.go:93] Provisioning new machine with config: &{Name:ha-136200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0501 02:47:24.843334    4712 start.go:125] createHost starting for "" (driver="hyperv")
	I0501 02:47:24.845982    4712 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0501 02:47:24.845982    4712 start.go:159] libmachine.API.Create for "ha-136200" (driver="hyperv")
	I0501 02:47:24.845982    4712 client.go:168] LocalClient.Create starting
	I0501 02:47:24.847217    4712 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0501 02:47:24.847395    4712 main.go:141] libmachine: Decoding PEM data...
	I0501 02:47:24.847395    4712 main.go:141] libmachine: Parsing certificate...
	I0501 02:47:24.847705    4712 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0501 02:47:24.848012    4712 main.go:141] libmachine: Decoding PEM data...
	I0501 02:47:24.848048    4712 main.go:141] libmachine: Parsing certificate...
	I0501 02:47:24.848190    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0501 02:47:27.058462    4712 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0501 02:47:27.058678    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:27.058786    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0501 02:47:28.892262    4712 main.go:141] libmachine: [stdout =====>] : False
	
	I0501 02:47:28.892262    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:28.892262    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0501 02:47:30.440921    4712 main.go:141] libmachine: [stdout =====>] : True
	
	I0501 02:47:30.440921    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:30.441172    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0501 02:47:34.074968    4712 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0501 02:47:34.075096    4712 main.go:141] libmachine: [stderr =====>] : 
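The Get-VMSwitch probe above shows the driver's recurring pattern: shell out to powershell.exe with -NoProfile -NonInteractive, force an array with @(), and parse the JSON reply. A minimal Go sketch of that pattern (not minikube's actual driver code; the struct simply mirrors the Select clause in the command):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// vmSwitch mirrors the fields selected in the pipeline above.
	type vmSwitch struct {
		Id         string
		Name       string
		SwitchType int
	}

	func main() {
		// Same invocation shape as the log: non-interactive, JSON out.
		out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive",
			`ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType)`).Output()
		if err != nil {
			panic(err)
		}
		var switches []vmSwitch
		if err := json.Unmarshal(out, &switches); err != nil {
			panic(err)
		}
		for _, s := range switches {
			fmt.Printf("%s (%s) type=%d\n", s.Name, s.Id, s.SwitchType)
		}
	}
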
	I0501 02:47:34.077782    4712 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso...
	I0501 02:47:34.612276    4712 main.go:141] libmachine: Creating SSH key...
	I0501 02:47:34.775454    4712 main.go:141] libmachine: Creating VM...
	I0501 02:47:34.775454    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0501 02:47:37.663991    4712 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0501 02:47:37.664390    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:37.664515    4712 main.go:141] libmachine: Using switch "Default Switch"
	I0501 02:47:37.664599    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0501 02:47:39.498071    4712 main.go:141] libmachine: [stdout =====>] : True
	
	I0501 02:47:39.498234    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:39.498234    4712 main.go:141] libmachine: Creating VHD
	I0501 02:47:39.498234    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\fixed.vhd' -SizeBytes 10MB -Fixed
	I0501 02:47:43.230384    4712 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 2B9E163F-083E-4714-9C44-9A52BE438E53
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0501 02:47:43.231369    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:43.231468    4712 main.go:141] libmachine: Writing magic tar header
	I0501 02:47:43.231550    4712 main.go:141] libmachine: Writing SSH key tar header
	I0501 02:47:43.241482    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\disk.vhd' -VHDType Dynamic -DeleteSource
	I0501 02:47:46.427724    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:47:46.427724    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:46.427724    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\disk.vhd' -SizeBytes 20000MB
	I0501 02:47:48.971690    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:47:48.971690    4712 main.go:141] libmachine: [stderr =====>] : 
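The disk build just logged is a three-step PowerShell sequence: create a small fixed VHD (its flat layout is what lets the "magic tar header" carrying the SSH key be written straight into the file), convert it to a dynamic disk, then grow it to the requested 20000MB. A hedged sketch of driving that sequence from Go, where runPS is a hypothetical helper wrapping powershell.exe:

	package hyperv

	// buildDisk sketches the New-VHD / Convert-VHD / Resize-VHD sequence above.
	// runPS is a hypothetical helper that executes one PowerShell command.
	func buildDisk(runPS func(cmd string) error, dir string) error {
		fixed := dir + `\fixed.vhd`
		disk := dir + `\disk.vhd`
		steps := []string{
			// Fixed layout first so raw bytes (the SSH-key tar header) can be
			// written directly into the image before conversion.
			`Hyper-V\New-VHD -Path '` + fixed + `' -SizeBytes 10MB -Fixed`,
			`Hyper-V\Convert-VHD -Path '` + fixed + `' -DestinationPath '` + disk + `' -VHDType Dynamic -DeleteSource`,
			`Hyper-V\Resize-VHD -Path '` + disk + `' -SizeBytes 20000MB`,
		}
		for _, step := range steps {
			if err := runPS(step); err != nil {
				return err
			}
		}
		return nil
	}
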
	I0501 02:47:48.971981    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-136200 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0501 02:47:52.766292    4712 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-136200 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0501 02:47:52.766504    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:52.766592    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-136200 -DynamicMemoryEnabled $false
	I0501 02:47:54.972628    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:47:54.972799    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:54.972799    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-136200 -Count 2
	I0501 02:47:57.167635    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:47:57.168510    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:57.168510    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-136200 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\boot2docker.iso'
	I0501 02:47:59.728585    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:47:59.729288    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:59.729288    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-136200 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\disk.vhd'
	I0501 02:48:02.387014    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:48:02.387925    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:02.387925    4712 main.go:141] libmachine: Starting VM...
	I0501 02:48:02.387925    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-136200
	I0501 02:48:05.442902    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:48:05.442902    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:05.442902    4712 main.go:141] libmachine: Waiting for host to start...
	I0501 02:48:05.442902    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:07.690543    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:07.691267    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:07.691267    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:10.234874    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:48:10.234874    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:11.244005    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:13.447426    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:13.447426    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:13.447532    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:16.003794    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:48:16.003794    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:17.014251    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:19.230596    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:19.230596    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:19.231015    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:21.786798    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:48:21.786798    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:22.791035    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:24.970362    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:24.970583    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:24.970826    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:27.538082    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:48:27.539108    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:28.540044    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:30.691694    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:30.691694    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:30.692065    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:33.315166    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:48:33.315166    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:33.315400    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:35.453800    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:35.453800    4712 main.go:141] libmachine: [stderr =====>] : 
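The "Waiting for host to start..." phase above is a poll: confirm the VM state, ask for the first NIC's first address, and sleep roughly a second between rounds until DHCP hands one out (172.28.217.218 arrives after about 28 seconds here). A sketch of that loop, assuming two getState/getIP helpers:

	package hyperv

	import (
		"fmt"
		"time"
	)

	// waitForIP polls until the guest reports an address, mirroring the
	// one-second cadence visible in the log timestamps above.
	func waitForIP(getState, getIP func() (string, error), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			st, err := getState()
			if err != nil || st != "Running" {
				return "", fmt.Errorf("vm not running (state=%q, err=%v)", st, err)
			}
			if ip, err := getIP(); err == nil && ip != "" {
				return ip, nil
			}
			time.Sleep(time.Second)
		}
		return "", fmt.Errorf("no IP within %s", timeout)
	}
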
	I0501 02:48:35.454723    4712 machine.go:94] provisionDockerMachine start ...
	I0501 02:48:35.454940    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:37.590850    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:37.591294    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:37.591378    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:40.152942    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:48:40.153017    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:40.158939    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:48:40.170076    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.217.218 22 <nil> <nil>}
	I0501 02:48:40.170076    4712 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 02:48:40.311850    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0501 02:48:40.311938    4712 buildroot.go:166] provisioning hostname "ha-136200"
	I0501 02:48:40.312011    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:42.387259    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:42.387259    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:42.388241    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:44.941487    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:48:44.942306    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:44.948681    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:48:44.949642    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.217.218 22 <nil> <nil>}
	I0501 02:48:44.949718    4712 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-136200 && echo "ha-136200" | sudo tee /etc/hostname
	I0501 02:48:45.123416    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-136200
	
	I0501 02:48:45.123490    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:47.247911    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:47.247911    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:47.248892    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:49.912733    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:48:49.912733    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:49.920164    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:48:49.920164    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.217.218 22 <nil> <nil>}
	I0501 02:48:49.920749    4712 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-136200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-136200/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-136200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 02:48:50.089597    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: 
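Each provisioning step from here on (hostname, the /etc/hosts edit above, cert copies) is a single command run over SSH with the key generated earlier. A minimal sketch with golang.org/x/crypto/ssh; host-key verification is skipped purely for brevity and should not be skipped in real code:

	package provision

	import (
		"os"

		"golang.org/x/crypto/ssh"
	)

	// runSSH dials the guest and runs one command, the way each provisioning
	// step above is executed.
	func runSSH(addr, keyPath, cmd string) (string, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return "", err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return "", err
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // demo only; verify in real use
		}
		client, err := ssh.Dial("tcp", addr+":22", cfg)
		if err != nil {
			return "", err
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer sess.Close()
		out, err := sess.CombinedOutput(cmd)
		return string(out), err
	}
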
	I0501 02:48:50.089597    4712 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0501 02:48:50.089597    4712 buildroot.go:174] setting up certificates
	I0501 02:48:50.090153    4712 provision.go:84] configureAuth start
	I0501 02:48:50.090240    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:52.251893    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:52.251893    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:52.251893    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:54.810990    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:48:54.810990    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:54.811881    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:56.907196    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:56.907196    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:56.907196    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:59.487351    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:48:59.487402    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:59.487402    4712 provision.go:143] copyHostCerts
	I0501 02:48:59.487402    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0501 02:48:59.487402    4712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0501 02:48:59.487402    4712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0501 02:48:59.488365    4712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0501 02:48:59.489448    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0501 02:48:59.489632    4712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0501 02:48:59.489632    4712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0501 02:48:59.489632    4712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0501 02:48:59.490981    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0501 02:48:59.491187    4712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0501 02:48:59.491187    4712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0501 02:48:59.491187    4712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0501 02:48:59.492726    4712 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-136200 san=[127.0.0.1 172.28.217.218 ha-136200 localhost minikube]
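configureAuth generates a server certificate whose SANs cover exactly the list logged above (127.0.0.1, the guest IP, ha-136200, localhost, minikube). A trimmed, self-signed crypto/x509 sketch; the real flow signs with ca.pem/ca-key.pem rather than self-signing:

	package provision

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	// selfSignedServerCert issues a cert with the SAN list seen in the log.
	func selfSignedServerCert(ip string, names []string) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-136200"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     names,
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP(ip)},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			return nil, nil, err
		}
		return der, key, nil
	}
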
	I0501 02:48:59.577887    4712 provision.go:177] copyRemoteCerts
	I0501 02:48:59.596375    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 02:48:59.597286    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:01.699383    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:01.699383    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:01.699540    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:04.258891    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:04.258891    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:04.259427    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:49:04.371852    4712 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7744315s)
	I0501 02:49:04.371852    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0501 02:49:04.371852    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 02:49:04.422302    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0501 02:49:04.422602    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0501 02:49:04.478176    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0501 02:49:04.478176    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0501 02:49:04.532091    4712 provision.go:87] duration metric: took 14.4416362s to configureAuth
	I0501 02:49:04.532091    4712 buildroot.go:189] setting minikube options for container-runtime
	I0501 02:49:04.532690    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:49:04.532690    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:06.623956    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:06.623956    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:06.624197    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:09.238280    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:09.238979    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:09.245381    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:49:09.246060    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.217.218 22 <nil> <nil>}
	I0501 02:49:09.246060    4712 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0501 02:49:09.397759    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0501 02:49:09.397835    4712 buildroot.go:70] root file system type: tmpfs
	I0501 02:49:09.398290    4712 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0501 02:49:09.398464    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:11.514026    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:11.514026    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:11.514582    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:14.050483    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:14.050483    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:14.057033    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:49:14.057033    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.217.218 22 <nil> <nil>}
	I0501 02:49:14.057589    4712 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0501 02:49:14.242724    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0501 02:49:14.242724    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:16.392645    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:16.392645    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:16.392749    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:18.993701    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:18.994302    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:19.000048    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:49:19.000537    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.217.218 22 <nil> <nil>}
	I0501 02:49:19.000616    4712 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0501 02:49:21.256124    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
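Note why the replace branch ran: on a fresh VM, diff cannot stat the old unit (the error above), exits non-zero, and the || block installs the new file, reloads systemd, and enables/restarts docker; on a re-run with an unchanged unit, diff exits zero and everything short-circuits. The same idempotency idea in Go, as a sketch:

	package provision

	import (
		"bytes"
		"os"
	)

	// writeIfChanged applies the same idea as the diff||mv shell idiom above:
	// touch the target only when the rendered content differs, and report
	// whether a daemon-reload/restart is needed.
	func writeIfChanged(path string, content []byte) (changed bool, err error) {
		old, err := os.ReadFile(path)
		if err == nil && bytes.Equal(old, content) {
			return false, nil // unit unchanged; skip restart
		}
		if err := os.WriteFile(path, content, 0o644); err != nil {
			return false, err
		}
		return true, nil
	}
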
	I0501 02:49:21.256675    4712 machine.go:97] duration metric: took 45.8016127s to provisionDockerMachine
	I0501 02:49:21.256675    4712 client.go:171] duration metric: took 1m56.4098314s to LocalClient.Create
	I0501 02:49:21.256737    4712 start.go:167] duration metric: took 1m56.4098939s to libmachine.API.Create "ha-136200"
	I0501 02:49:21.256791    4712 start.go:293] postStartSetup for "ha-136200" (driver="hyperv")
	I0501 02:49:21.256828    4712 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 02:49:21.271031    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 02:49:21.271031    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:23.374454    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:23.374634    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:23.374716    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:25.918831    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:25.918831    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:25.919441    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:49:26.030251    4712 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.759185s)
	I0501 02:49:26.044496    4712 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 02:49:26.053026    4712 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 02:49:26.053160    4712 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0501 02:49:26.053600    4712 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0501 02:49:26.054397    4712 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> 142882.pem in /etc/ssl/certs
	I0501 02:49:26.054397    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /etc/ssl/certs/142882.pem
	I0501 02:49:26.070942    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 02:49:26.091568    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /etc/ssl/certs/142882.pem (1708 bytes)
	I0501 02:49:26.143252    4712 start.go:296] duration metric: took 4.8863885s for postStartSetup
	I0501 02:49:26.147080    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:28.257985    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:28.257985    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:28.257985    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:30.792456    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:30.792456    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:30.792456    4712 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json ...
	I0501 02:49:30.796310    4712 start.go:128] duration metric: took 2m5.952044s to createHost
	I0501 02:49:30.796483    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:32.879711    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:32.879711    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:32.880619    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:35.462032    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:35.462032    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:35.468747    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:49:35.469470    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.217.218 22 <nil> <nil>}
	I0501 02:49:35.469470    4712 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0501 02:49:35.611947    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714531775.614259884
	
	I0501 02:49:35.611947    4712 fix.go:216] guest clock: 1714531775.614259884
	I0501 02:49:35.611947    4712 fix.go:229] Guest: 2024-05-01 02:49:35.614259884 +0000 UTC Remote: 2024-05-01 02:49:30.7963907 +0000 UTC m=+131.677772001 (delta=4.817869184s)
	I0501 02:49:35.611947    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:37.726021    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:37.726021    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:37.726021    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:40.253738    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:40.254896    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:40.261655    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:49:40.262498    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.217.218 22 <nil> <nil>}
	I0501 02:49:40.262498    4712 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714531775
	I0501 02:49:40.415406    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed May  1 02:49:35 UTC 2024
	
	I0501 02:49:40.415406    4712 fix.go:236] clock set: Wed May  1 02:49:35 UTC 2024
	 (err=<nil>)
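The clock fix above parses the guest's `date +%s.%N` output, compares it with the local clock (a 4.8s delta here), and corrects with `date -s @<seconds>`. A sketch of the delta computation, assuming the same output format:

	package provision

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// clockDelta parses "1714531775.614259884"-style output from `date +%s.%N`
	// and returns how far the guest clock is from the local one.
	func clockDelta(guestOut string) (time.Duration, error) {
		parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return 0, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return 0, err
			}
		}
		return time.Until(time.Unix(sec, nsec)), nil
	}

	// fixCmd is the command the log runs once the delta is worth correcting.
	func fixCmd(guestSec int64) string {
		return fmt.Sprintf("sudo date -s @%d", guestSec)
	}
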
	I0501 02:49:40.415406    4712 start.go:83] releasing machines lock for "ha-136200", held for 2m15.5712031s
	I0501 02:49:40.416105    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:42.459145    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:42.459226    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:42.459226    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:45.033478    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:45.034063    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:45.038366    4712 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 02:49:45.038515    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:45.050350    4712 ssh_runner.go:195] Run: cat /version.json
	I0501 02:49:45.050350    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:47.229701    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:47.229701    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:47.230427    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:47.254252    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:47.254469    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:47.254558    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:49.922691    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:49.922867    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:49.923261    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:49:49.950446    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:49.950446    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:49.951021    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:49:50.022867    4712 ssh_runner.go:235] Completed: cat /version.json: (4.9724804s)
	I0501 02:49:50.037446    4712 ssh_runner.go:195] Run: systemctl --version
	I0501 02:49:50.123463    4712 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0850592s)
	I0501 02:49:50.137756    4712 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 02:49:50.147834    4712 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 02:49:50.164262    4712 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 02:49:50.197825    4712 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0501 02:49:50.197877    4712 start.go:494] detecting cgroup driver to use...
	I0501 02:49:50.197877    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 02:49:50.246918    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0501 02:49:50.281929    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0501 02:49:50.303725    4712 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0501 02:49:50.317480    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0501 02:49:50.354607    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 02:49:50.392927    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0501 02:49:50.426684    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 02:49:50.464924    4712 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 02:49:50.501540    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0501 02:49:50.541276    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0501 02:49:50.576278    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0501 02:49:50.614209    4712 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 02:49:50.653144    4712 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 02:49:50.688395    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:49:50.921067    4712 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0501 02:49:50.960389    4712 start.go:494] detecting cgroup driver to use...
	I0501 02:49:50.974435    4712 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0501 02:49:51.020319    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 02:49:51.063731    4712 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 02:49:51.113242    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 02:49:51.154151    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 02:49:51.196182    4712 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0501 02:49:51.267621    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 02:49:51.297018    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 02:49:51.359019    4712 ssh_runner.go:195] Run: which cri-dockerd
	I0501 02:49:51.382845    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0501 02:49:51.408532    4712 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0501 02:49:51.459482    4712 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0501 02:49:51.703156    4712 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0501 02:49:51.928842    4712 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0501 02:49:51.928842    4712 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0501 02:49:51.985157    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:49:52.205484    4712 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0501 02:49:54.768628    4712 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5631253s)
	I0501 02:49:54.782717    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0501 02:49:54.821909    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0501 02:49:54.861989    4712 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0501 02:49:55.097455    4712 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0501 02:49:55.325878    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:49:55.547674    4712 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0501 02:49:55.604800    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0501 02:49:55.648909    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:49:55.873886    4712 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0501 02:49:55.987252    4712 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0501 02:49:56.000254    4712 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0501 02:49:56.009412    4712 start.go:562] Will wait 60s for crictl version
	I0501 02:49:56.021925    4712 ssh_runner.go:195] Run: which crictl
	I0501 02:49:56.041055    4712 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 02:49:56.111426    4712 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0501 02:49:56.124879    4712 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0501 02:49:56.172644    4712 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0501 02:49:56.210144    4712 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0501 02:49:56.210144    4712 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0501 02:49:56.214663    4712 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0501 02:49:56.214663    4712 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0501 02:49:56.214663    4712 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0501 02:49:56.214663    4712 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:88:d7:f1 Flags:up|broadcast|multicast|running}
	I0501 02:49:56.218539    4712 ip.go:210] interface addr: fe80::916c:67e8:6e10:a19b/64
	I0501 02:49:56.218539    4712 ip.go:210] interface addr: 172.28.208.1/20
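getIPForInterface above walks the host adapters for the first name matching the "vEthernet (Default Switch)" prefix and takes its IPv4 address (172.28.208.1/20 here) as host.minikube.internal. The equivalent scan with the standard net package:

	package hostip

	import (
		"fmt"
		"net"
		"strings"
	)

	// ipForInterface scans host adapters the way getIPForInterface does:
	// first name-prefix match wins, then its first IPv4 address is used.
	func ipForInterface(prefix string) (net.IP, error) {
		ifaces, err := net.Interfaces()
		if err != nil {
			return nil, err
		}
		for _, ifc := range ifaces {
			if !strings.HasPrefix(ifc.Name, prefix) {
				continue
			}
			addrs, err := ifc.Addrs()
			if err != nil {
				return nil, err
			}
			for _, a := range addrs {
				if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() != nil {
					return ipnet.IP, nil // e.g. 172.28.208.1 above
				}
			}
		}
		return nil, fmt.Errorf("no interface matching %q", prefix)
	}
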
	I0501 02:49:56.231590    4712 ssh_runner.go:195] Run: grep 172.28.208.1	host.minikube.internal$ /etc/hosts
	I0501 02:49:56.237056    4712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 02:49:56.273064    4712 kubeadm.go:877] updating cluster {Name:ha-136200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0
ClusterName:ha-136200 Namespace:default APIServerHAVIP:172.28.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.217.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0501 02:49:56.273064    4712 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0501 02:49:56.283976    4712 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0501 02:49:56.305563    4712 docker.go:685] Got preloaded images: 
	I0501 02:49:56.305585    4712 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0501 02:49:56.319781    4712 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0501 02:49:56.352980    4712 ssh_runner.go:195] Run: which lz4
	I0501 02:49:56.361434    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0501 02:49:56.376111    4712 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0501 02:49:56.383203    4712 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0501 02:49:56.383203    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0501 02:49:58.545920    4712 docker.go:649] duration metric: took 2.1838816s to copy over tarball
	I0501 02:49:58.559153    4712 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0501 02:50:07.024882    4712 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.4656661s)
	I0501 02:50:07.024882    4712 ssh_runner.go:146] rm: /preloaded.tar.lz4
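The preload flow above is check-copy-extract-delete: a failing statx means the ~360MB tarball must be scp'd in, after which it is unpacked with tar -I lz4 (keeping security.capability xattrs on the image layers) and removed. A sketch under the assumption of ssh_runner-style run/scp helpers:

	package preload

	// ensurePreload mirrors the sequence above: a failing stat means the
	// tarball must be copied before extraction; a passing one skips the copy.
	// run and scp are hypothetical helpers executing over SSH.
	func ensurePreload(run func(cmd string) error, scp func(src, dst string) error, src string) error {
		const remote = "/preloaded.tar.lz4"
		if err := run(`stat -c "%s %y" ` + remote); err != nil {
			if err := scp(src, remote); err != nil {
				return err
			}
		}
		// --xattrs keeps security.capability bits on the extracted layers.
		if err := run(`sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf ` + remote); err != nil {
			return err
		}
		return run("sudo rm -f " + remote)
	}
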
	I0501 02:50:07.091273    4712 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0501 02:50:07.117701    4712 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0501 02:50:07.169927    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:50:07.413870    4712 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0501 02:50:10.777827    4712 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.363932s)
	I0501 02:50:10.787955    4712 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0501 02:50:10.813130    4712 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0501 02:50:10.813237    4712 cache_images.go:84] Images are preloaded, skipping loading
	I0501 02:50:10.813237    4712 kubeadm.go:928] updating node { 172.28.217.218 8443 v1.30.0 docker true true} ...
	I0501 02:50:10.813471    4712 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-136200 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.217.218
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP:172.28.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 02:50:10.824528    4712 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0501 02:50:10.865306    4712 cni.go:84] Creating CNI manager for ""
	I0501 02:50:10.865306    4712 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0501 02:50:10.865306    4712 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0501 02:50:10.865306    4712 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.28.217.218 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-136200 NodeName:ha-136200 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.28.217.218"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.28.217.218 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0501 02:50:10.866013    4712 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.28.217.218
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-136200"
	  kubeletExtraArgs:
	    node-ip: 172.28.217.218
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.28.217.218"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
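For context, the kubeadm config dumped above is rendered from a Go template before being copied to /var/tmp/minikube/kubeadm.yaml.new (the scp at 02:50:11 below). A minimal sketch of that render step; the template text and struct fields here are illustrative assumptions, not minikube's actual source:

package main

import (
	"os"
	"text/template"
)

// clusterParams holds the values substituted into the kubeadm config.
// Field names are illustrative, not minikube's real struct.
type clusterParams struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	PodSubnet        string
	ServiceSubnet    string
	K8sVersion       string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	p := clusterParams{
		AdvertiseAddress: "172.28.217.218",
		BindPort:         8443,
		NodeName:         "ha-136200",
		PodSubnet:        "10.244.0.0/16",
		ServiceSubnet:    "10.96.0.0/12",
		K8sVersion:       "v1.30.0",
	}
	// Render to stdout; the real flow scps the rendered bytes into the guest.
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}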
	
	I0501 02:50:10.866164    4712 kube-vip.go:111] generating kube-vip config ...
	I0501 02:50:10.879856    4712 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0501 02:50:10.916330    4712 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0501 02:50:10.916590    4712 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.28.223.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
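The "auto-enabling control-plane load-balancing" message above is gated on the modprobe that precedes it: kube-vip's in-cluster load balancer relies on the IPVS kernel modules, so lb_enable/lb_port are only added to the manifest when those modules load. A sketch of that gate, assuming a local modprobe in place of the SSH runner:

package main

import (
	"fmt"
	"os/exec"
)

// ipvsAvailable reports whether the IPVS kernel modules needed for
// kube-vip control-plane load-balancing can be loaded.
func ipvsAvailable() bool {
	cmd := exec.Command("sudo", "sh", "-c",
		"modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack")
	return cmd.Run() == nil
}

func main() {
	env := map[string]string{
		"address": "172.28.223.254", // the API server HA VIP
		"port":    "8443",
	}
	if ipvsAvailable() {
		// Only enable the in-cluster load balancer when IPVS is present.
		env["lb_enable"] = "true"
		env["lb_port"] = "8443"
	}
	fmt.Println(env)
}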
	I0501 02:50:10.930144    4712 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 02:50:10.946847    4712 binaries.go:44] Found k8s binaries, skipping transfer
	I0501 02:50:10.960617    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0501 02:50:10.980126    4712 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0501 02:50:11.015010    4712 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 02:50:11.046356    4712 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0501 02:50:11.090122    4712 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0501 02:50:11.151082    4712 ssh_runner.go:195] Run: grep 172.28.223.254	control-plane.minikube.internal$ /etc/hosts
	I0501 02:50:11.158193    4712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.223.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
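The bash one-liner above keeps the control-plane entry in /etc/hosts idempotent: any existing line ending in the hostname is stripped before the fresh IP mapping is appended. The same logic in Go, with a hypothetical local path standing in for the guest's /etc/hosts:

package main

import (
	"os"
	"strings"
)

// ensureHostsEntry rewrites hostsPath so it contains exactly one
// "ip<TAB>name" line, dropping any stale mapping for name first.
func ensureHostsEntry(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // remove the old entry, mirroring the grep -v
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name) // append the fresh mapping
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Hypothetical path; the log edits the guest's /etc/hosts via sudo cp.
	if err := ensureHostsEntry("/tmp/hosts", "172.28.223.254", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}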
	I0501 02:50:11.198290    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:50:11.421704    4712 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 02:50:11.457294    4712 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200 for IP: 172.28.217.218
	I0501 02:50:11.457383    4712 certs.go:194] generating shared ca certs ...
	I0501 02:50:11.457383    4712 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:50:11.458373    4712 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0501 02:50:11.458865    4712 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0501 02:50:11.459136    4712 certs.go:256] generating profile certs ...
	I0501 02:50:11.459821    4712 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\client.key
	I0501 02:50:11.459950    4712 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\client.crt with IP's: []
	I0501 02:50:11.600094    4712 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\client.crt ...
	I0501 02:50:11.600094    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\client.crt: {Name:mkd5e4d205a603f84158daca3df4537a47f4507f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:50:11.601407    4712 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\client.key ...
	I0501 02:50:11.601407    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\client.key: {Name:mk0f41aeab078751e43122e83e5a087fadc50acf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:50:11.602800    4712 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.b080b0c6
	I0501 02:50:11.602800    4712 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.b080b0c6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.217.218 172.28.223.254]
	I0501 02:50:11.735634    4712 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.b080b0c6 ...
	I0501 02:50:11.735634    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.b080b0c6: {Name:mk25daf93f931731761fc26133f1d14b1615ea6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:50:11.736662    4712 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.b080b0c6 ...
	I0501 02:50:11.736662    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.b080b0c6: {Name:mk2e8ec633a20ca6bf6f004cdd1aa2dc02923e7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:50:11.738036    4712 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.b080b0c6 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt
	I0501 02:50:11.750002    4712 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.b080b0c6 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key
	I0501 02:50:11.751999    4712 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key
	I0501 02:50:11.751999    4712 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.crt with IP's: []
	I0501 02:50:11.858892    4712 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.crt ...
	I0501 02:50:11.858892    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.crt: {Name:mk545c7bac57fe0475c68dabf35cf7726f7ba6e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:50:11.860058    4712 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key ...
	I0501 02:50:11.860058    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key: {Name:mk197c02f3ddea53477a395060c41fac8b486d54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:50:11.861502    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0501 02:50:11.862042    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0501 02:50:11.862321    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0501 02:50:11.862467    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0501 02:50:11.862467    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0501 02:50:11.862467    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0501 02:50:11.862467    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0501 02:50:11.872340    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0501 02:50:11.872340    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem (1338 bytes)
	W0501 02:50:11.873220    4712 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288_empty.pem, impossibly tiny 0 bytes
	I0501 02:50:11.873220    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0501 02:50:11.873220    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0501 02:50:11.873220    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0501 02:50:11.873220    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0501 02:50:11.874220    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem (1708 bytes)
	I0501 02:50:11.874220    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem -> /usr/share/ca-certificates/14288.pem
	I0501 02:50:11.874220    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /usr/share/ca-certificates/142882.pem
	I0501 02:50:11.875212    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:50:11.877013    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 02:50:11.928037    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 02:50:11.975033    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 02:50:12.024768    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0501 02:50:12.069813    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0501 02:50:12.117563    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0501 02:50:12.166940    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 02:50:12.214744    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 02:50:12.264780    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem --> /usr/share/ca-certificates/14288.pem (1338 bytes)
	I0501 02:50:12.314494    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /usr/share/ca-certificates/142882.pem (1708 bytes)
	I0501 02:50:12.357210    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 02:50:12.407402    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0501 02:50:12.460345    4712 ssh_runner.go:195] Run: openssl version
	I0501 02:50:12.486641    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14288.pem && ln -fs /usr/share/ca-certificates/14288.pem /etc/ssl/certs/14288.pem"
	I0501 02:50:12.524534    4712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14288.pem
	I0501 02:50:12.531940    4712 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:27 /usr/share/ca-certificates/14288.pem
	I0501 02:50:12.545887    4712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14288.pem
	I0501 02:50:12.569538    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14288.pem /etc/ssl/certs/51391683.0"
	I0501 02:50:12.603111    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142882.pem && ln -fs /usr/share/ca-certificates/142882.pem /etc/ssl/certs/142882.pem"
	I0501 02:50:12.640545    4712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142882.pem
	I0501 02:50:12.648489    4712 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:27 /usr/share/ca-certificates/142882.pem
	I0501 02:50:12.664745    4712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142882.pem
	I0501 02:50:12.689236    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142882.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 02:50:12.722220    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 02:50:12.763152    4712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:50:12.771274    4712 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:12 /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:50:12.785811    4712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:50:12.809601    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
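The openssl/ln pairs above follow OpenSSL's CA lookup convention: each trusted certificate is linked under /etc/ssl/certs as <subject-hash>.0 (51391683.0, 3ec20f2e.0, and b5213941.0 in this run). A sketch of deriving the link name, assuming the openssl binary is on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHash returns the OpenSSL subject hash used to name the
// /etc/ssl/certs/<hash>.0 symlink for a trusted certificate.
func subjectHash(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // illustrative path
	hash, err := subjectHash(cert)
	if err != nil {
		panic(err)
	}
	// e.g. "b5213941", matching the b5213941.0 link created in the log.
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	fmt.Printf("ln -fs %s %s\n", cert, link)
}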
	I0501 02:50:12.843815    4712 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 02:50:12.851182    4712 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
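The failed stat above is the first-start heuristic: if kubeadm has never generated the apiserver-kubelet-client certificate, the cluster is treated as brand new rather than as a restart. Sketched locally:

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func main() {
	// A missing kubelet client cert means kubeadm never ran here,
	// so treat this as a first start instead of a restart.
	_, err := os.Stat("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if errors.Is(err, fs.ErrNotExist) {
		fmt.Println("likely first start: no apiserver-kubelet-client cert")
	} else if err == nil {
		fmt.Println("existing cluster: cert present")
	}
}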
	I0501 02:50:12.851596    4712 kubeadm.go:391] StartCluster: {Name:ha-136200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP:172.28.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.217.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:50:12.861439    4712 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0501 02:50:12.897822    4712 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0501 02:50:12.930863    4712 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 02:50:12.967142    4712 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 02:50:12.989079    4712 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 02:50:12.989174    4712 kubeadm.go:156] found existing configuration files:
	
	I0501 02:50:13.002144    4712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 02:50:13.022983    4712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 02:50:13.037263    4712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 02:50:13.070061    4712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 02:50:13.088170    4712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 02:50:13.104788    4712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 02:50:13.142331    4712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 02:50:13.161295    4712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 02:50:13.176372    4712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 02:50:13.217242    4712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 02:50:13.236623    4712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 02:50:13.250242    4712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
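The grep/rm pairs above implement the stale-config cleanup: each kubeconfig under /etc/kubernetes survives only if it already points at https://control-plane.minikube.internal:8443; anything else is deleted so kubeadm init rewrites it. The loop in Go, with local file access standing in for the SSH runner:

package main

import (
	"bytes"
	"fmt"
	"os"
)

const endpoint = "https://control-plane.minikube.internal:8443"

func main() {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		// Missing files and files without the expected endpoint are both
		// removed; kubeadm init will write fresh copies.
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			os.Remove(f)
			fmt.Println("removed stale config:", f)
		}
	}
}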
	I0501 02:50:13.273719    4712 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0501 02:50:13.796086    4712 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0501 02:50:29.771938    4712 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0501 02:50:29.771938    4712 kubeadm.go:309] [preflight] Running pre-flight checks
	I0501 02:50:29.771938    4712 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0501 02:50:29.772562    4712 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0501 02:50:29.772731    4712 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0501 02:50:29.772731    4712 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0501 02:50:29.775841    4712 out.go:204]   - Generating certificates and keys ...
	I0501 02:50:29.775841    4712 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0501 02:50:29.776550    4712 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0501 02:50:29.776704    4712 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0501 02:50:29.776918    4712 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0501 02:50:29.777081    4712 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0501 02:50:29.777278    4712 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0501 02:50:29.777278    4712 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0501 02:50:29.777278    4712 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-136200 localhost] and IPs [172.28.217.218 127.0.0.1 ::1]
	I0501 02:50:29.777278    4712 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0501 02:50:29.777841    4712 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-136200 localhost] and IPs [172.28.217.218 127.0.0.1 ::1]
	I0501 02:50:29.778067    4712 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0501 02:50:29.778150    4712 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0501 02:50:29.778250    4712 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0501 02:50:29.778341    4712 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0501 02:50:29.778421    4712 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0501 02:50:29.778724    4712 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0501 02:50:29.778804    4712 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0501 02:50:29.778987    4712 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0501 02:50:29.779083    4712 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0501 02:50:29.779174    4712 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0501 02:50:29.779418    4712 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0501 02:50:29.781433    4712 out.go:204]   - Booting up control plane ...
	I0501 02:50:29.781433    4712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0501 02:50:29.781986    4712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0501 02:50:29.782154    4712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0501 02:50:29.782509    4712 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0501 02:50:29.782778    4712 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0501 02:50:29.782833    4712 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0501 02:50:29.783188    4712 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0501 02:50:29.783366    4712 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0501 02:50:29.783611    4712 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.012148578s
	I0501 02:50:29.783792    4712 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0501 02:50:29.783792    4712 kubeadm.go:309] [api-check] The API server is healthy after 9.161500426s
	I0501 02:50:29.783792    4712 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0501 02:50:29.784343    4712 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0501 02:50:29.784449    4712 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0501 02:50:29.784907    4712 kubeadm.go:309] [mark-control-plane] Marking the node ha-136200 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0501 02:50:29.785014    4712 kubeadm.go:309] [bootstrap-token] Using token: bebbcj.jj3pub0bsd9tcu95
	I0501 02:50:29.789897    4712 out.go:204]   - Configuring RBAC rules ...
	I0501 02:50:29.789897    4712 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0501 02:50:29.790579    4712 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0501 02:50:29.790579    4712 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0501 02:50:29.791324    4712 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0501 02:50:29.791589    4712 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0501 02:50:29.791711    4712 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0501 02:50:29.791958    4712 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0501 02:50:29.791958    4712 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0501 02:50:29.791958    4712 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0501 02:50:29.791958    4712 kubeadm.go:309] 
	I0501 02:50:29.791958    4712 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0501 02:50:29.791958    4712 kubeadm.go:309] 
	I0501 02:50:29.792580    4712 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0501 02:50:29.792580    4712 kubeadm.go:309] 
	I0501 02:50:29.792580    4712 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0501 02:50:29.792580    4712 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0501 02:50:29.792580    4712 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0501 02:50:29.792580    4712 kubeadm.go:309] 
	I0501 02:50:29.792580    4712 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0501 02:50:29.793244    4712 kubeadm.go:309] 
	I0501 02:50:29.793244    4712 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0501 02:50:29.793244    4712 kubeadm.go:309] 
	I0501 02:50:29.793244    4712 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0501 02:50:29.793244    4712 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0501 02:50:29.793244    4712 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0501 02:50:29.793868    4712 kubeadm.go:309] 
	I0501 02:50:29.794174    4712 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0501 02:50:29.794395    4712 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0501 02:50:29.794428    4712 kubeadm.go:309] 
	I0501 02:50:29.794531    4712 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token bebbcj.jj3pub0bsd9tcu95 \
	I0501 02:50:29.794720    4712 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:4798dcaffc6298c78af5ad06745de18c1231853c65c9db4cd09ab1be96f5e875 \
	I0501 02:50:29.794720    4712 kubeadm.go:309] 	--control-plane 
	I0501 02:50:29.794720    4712 kubeadm.go:309] 
	I0501 02:50:29.794720    4712 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0501 02:50:29.794720    4712 kubeadm.go:309] 
	I0501 02:50:29.794720    4712 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token bebbcj.jj3pub0bsd9tcu95 \
	I0501 02:50:29.795522    4712 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:4798dcaffc6298c78af5ad06745de18c1231853c65c9db4cd09ab1be96f5e875 
	I0501 02:50:29.795582    4712 cni.go:84] Creating CNI manager for ""
	I0501 02:50:29.795642    4712 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0501 02:50:29.798321    4712 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0501 02:50:29.815739    4712 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0501 02:50:29.823882    4712 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0501 02:50:29.823882    4712 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0501 02:50:29.880076    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0501 02:50:30.703674    4712 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0501 02:50:30.720641    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-136200 minikube.k8s.io/updated_at=2024_05_01T02_50_30_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e minikube.k8s.io/name=ha-136200 minikube.k8s.io/primary=true
	I0501 02:50:30.720641    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:30.736553    4712 ops.go:34] apiserver oom_adj: -16
	I0501 02:50:30.914646    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:31.422356    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:31.924569    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:32.422489    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:32.916374    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:33.419951    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:33.922300    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:34.426730    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:34.915815    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:35.415601    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:35.917473    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:36.419572    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:36.923752    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:37.424859    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:37.926096    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:38.415957    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:38.915894    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:39.417286    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:39.917110    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:40.418538    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:40.919363    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:41.420336    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:41.914423    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:42.068730    4712 kubeadm.go:1107] duration metric: took 11.364941s to wait for elevateKubeSystemPrivileges
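The repeated `kubectl get sa default` calls above are a readiness poll: the default service account only appears once the controller-manager's token controller is up, and the cluster-admin binding for kube-system:default only sticks after that. A sketch of the ~500ms poll, shelling out to kubectl as the log does:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls until the "default" service account exists,
// signalling that the token controller is running.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default",
			"--kubeconfig="+kubeconfig)
		if cmd.Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	return fmt.Errorf("default service account not created within %s", timeout)
}

func main() {
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("default service account ready")
}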
	W0501 02:50:42.068870    4712 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0501 02:50:42.068934    4712 kubeadm.go:393] duration metric: took 29.2171223s to StartCluster
	I0501 02:50:42.069035    4712 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:50:42.069065    4712 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0501 02:50:42.070934    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:50:42.072021    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0501 02:50:42.072021    4712 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.28.217.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0501 02:50:42.072021    4712 start.go:240] waiting for startup goroutines ...
	I0501 02:50:42.072021    4712 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0501 02:50:42.072021    4712 addons.go:69] Setting storage-provisioner=true in profile "ha-136200"
	I0501 02:50:42.072578    4712 addons.go:234] Setting addon storage-provisioner=true in "ha-136200"
	I0501 02:50:42.072715    4712 host.go:66] Checking if "ha-136200" exists ...
	I0501 02:50:42.072765    4712 addons.go:69] Setting default-storageclass=true in profile "ha-136200"
	I0501 02:50:42.072820    4712 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-136200"
	I0501 02:50:42.073003    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:50:42.073773    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:50:42.074332    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:50:42.237653    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.28.208.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0501 02:50:42.682536    4712 start.go:946] {"host.minikube.internal": 172.28.208.1} host record injected into CoreDNS's ConfigMap
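The pipeline at 02:50:42 edits the live CoreDNS Corefile: it fetches the coredns ConfigMap, inserts a hosts{} block mapping host.minikube.internal to the host gateway just before the forward plugin, and replaces the ConfigMap. A sketch of that insertion on a Corefile string (the sample Corefile contents here are an assumption):

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a CoreDNS hosts{} block just before the
// "forward . /etc/resolv.conf" line, mirroring the sed expression in
// the log.
func injectHostRecord(corefile, ip, name string) string {
	block := fmt.Sprintf("        hosts {\n           %s %s\n           fallthrough\n        }\n", ip, name)
	var b strings.Builder
	for _, line := range strings.Split(strings.TrimRight(corefile, "\n"), "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			b.WriteString(block)
		}
		b.WriteString(line + "\n")
	}
	return b.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}\n"
	fmt.Print(injectHostRecord(corefile, "172.28.208.1", "host.minikube.internal"))
}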
	I0501 02:50:44.322881    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:50:44.322881    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:44.325924    4712 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 02:50:44.323327    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:50:44.325924    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:44.328653    4712 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 02:50:44.328653    4712 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0501 02:50:44.328653    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:50:44.329300    4712 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0501 02:50:44.330211    4712 kapi.go:59] client config for ha-136200: &rest.Config{Host:"https://172.28.223.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-136200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-136200\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1b95ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0501 02:50:44.331266    4712 cert_rotation.go:137] Starting client certificate rotation controller
	I0501 02:50:44.331692    4712 addons.go:234] Setting addon default-storageclass=true in "ha-136200"
	I0501 02:50:44.331692    4712 host.go:66] Checking if "ha-136200" exists ...
	I0501 02:50:44.332839    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:50:46.572964    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:50:46.572964    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:46.573962    4712 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0501 02:50:46.573962    4712 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0501 02:50:46.573962    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:50:46.693061    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:50:46.693131    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:46.693256    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:50:48.834494    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:50:48.834494    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:48.834701    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:50:49.380882    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:50:49.380882    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:49.381777    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:50:49.540602    4712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 02:50:51.474264    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:50:51.474264    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:51.475208    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:50:51.629340    4712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0501 02:50:51.811276    4712 round_trippers.go:463] GET https://172.28.223.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0501 02:50:51.811902    4712 round_trippers.go:469] Request Headers:
	I0501 02:50:51.811902    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:50:51.811902    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:50:51.826458    4712 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0501 02:50:51.827458    4712 round_trippers.go:463] PUT https://172.28.223.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0501 02:50:51.827458    4712 round_trippers.go:469] Request Headers:
	I0501 02:50:51.827458    4712 round_trippers.go:473]     Content-Type: application/json
	I0501 02:50:51.827458    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:50:51.827458    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:50:51.831221    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:50:51.834740    4712 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0501 02:50:51.838052    4712 addons.go:505] duration metric: took 9.7659586s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0501 02:50:51.838052    4712 start.go:245] waiting for cluster config update ...
	I0501 02:50:51.838052    4712 start.go:254] writing updated cluster config ...
	I0501 02:50:51.841165    4712 out.go:177] 
	I0501 02:50:51.854479    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:50:51.854479    4712 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json ...
	I0501 02:50:51.861940    4712 out.go:177] * Starting "ha-136200-m02" control-plane node in "ha-136200" cluster
	I0501 02:50:51.865640    4712 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0501 02:50:51.865640    4712 cache.go:56] Caching tarball of preloaded images
	I0501 02:50:51.865640    4712 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0501 02:50:51.866174    4712 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0501 02:50:51.866462    4712 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json ...
	I0501 02:50:51.868358    4712 start.go:360] acquireMachinesLock for ha-136200-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 02:50:51.868358    4712 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-136200-m02"
	I0501 02:50:51.869005    4712 start.go:93] Provisioning new machine with config: &{Name:ha-136200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP:172.28.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.217.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0501 02:50:51.869070    4712 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0501 02:50:51.871919    4712 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0501 02:50:51.872184    4712 start.go:159] libmachine.API.Create for "ha-136200" (driver="hyperv")
	I0501 02:50:51.872184    4712 client.go:168] LocalClient.Create starting
	I0501 02:50:51.872730    4712 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0501 02:50:51.872991    4712 main.go:141] libmachine: Decoding PEM data...
	I0501 02:50:51.872991    4712 main.go:141] libmachine: Parsing certificate...
	I0501 02:50:51.872991    4712 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0501 02:50:51.872991    4712 main.go:141] libmachine: Decoding PEM data...
	I0501 02:50:51.872991    4712 main.go:141] libmachine: Parsing certificate...
	I0501 02:50:51.872991    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0501 02:50:53.846039    4712 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0501 02:50:53.846039    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:53.846893    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0501 02:50:55.665592    4712 main.go:141] libmachine: [stdout =====>] : False
	
	I0501 02:50:55.665592    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:55.665592    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0501 02:50:57.208535    4712 main.go:141] libmachine: [stdout =====>] : True
	
	I0501 02:50:57.208535    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:57.208630    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0501 02:51:00.945176    4712 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0501 02:51:00.945176    4712 main.go:141] libmachine: [stderr =====>] : 
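
The switch probe above is a single PowerShell pipeline whose JSON output is decoded back on the Go side. A minimal, hypothetical sketch of the same query follows (not minikube's actual implementation; the struct fields simply mirror the Select clause and the JSON printed above):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// vmSwitch mirrors the fields selected by the pipeline above.
type vmSwitch struct {
	Id         string
	Name       string
	SwitchType int // 0 = Private, 1 = Internal, 2 = External
}

func listSwitches() ([]vmSwitch, error) {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive",
		`[Console]::OutputEncoding = [Text.Encoding]::UTF8; `+
			`ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType)`).Output()
	if err != nil {
		return nil, err
	}
	var switches []vmSwitch
	return switches, json.Unmarshal(out, &switches)
}

func main() {
	switches, err := listSwitches()
	fmt.Println(switches, err) // "Default Switch" reports SwitchType 1 (Internal)
}
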
	I0501 02:51:00.949038    4712 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso...
	I0501 02:51:01.496342    4712 main.go:141] libmachine: Creating SSH key...
	I0501 02:51:02.272582    4712 main.go:141] libmachine: Creating VM...
	I0501 02:51:02.272582    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0501 02:51:05.181880    4712 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0501 02:51:05.181880    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:05.182621    4712 main.go:141] libmachine: Using switch "Default Switch"
	I0501 02:51:05.182621    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0501 02:51:07.021151    4712 main.go:141] libmachine: [stdout =====>] : True
	
	I0501 02:51:07.022208    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:07.022208    4712 main.go:141] libmachine: Creating VHD
	I0501 02:51:07.022261    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0501 02:51:10.800515    4712 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : F5C7D5B1-6A19-4B92-8073-0E024A878A77
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0501 02:51:10.800843    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:10.800925    4712 main.go:141] libmachine: Writing magic tar header
	I0501 02:51:10.800925    4712 main.go:141] libmachine: Writing SSH key tar header
	I0501 02:51:10.813657    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0501 02:51:14.013099    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:14.013099    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:14.013713    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\disk.vhd' -SizeBytes 20000MB
	I0501 02:51:16.613734    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:16.613973    4712 main.go:141] libmachine: [stderr =====>] : 
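
The disk preparation above is a three-step trick: create a tiny 10MB fixed VHD (whose flat layout lets the provisioner splice in the tar holding the SSH key, the "magic tar header" written between the steps), convert it to a dynamic VHD, then grow it to the requested 20000MB. A hedged Go sketch of the same sequence, with illustrative paths and helper names rather than the real code:

package main

import (
	"fmt"
	"os/exec"
)

// ps runs one PowerShell command the way the provisioner above does.
func ps(command string) error {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", command).CombinedOutput()
	fmt.Printf("%s", out)
	return err
}

// prepareDisk mirrors the New-VHD -Fixed / Convert-VHD / Resize-VHD sequence.
func prepareDisk(machineDir string, sizeMB int) error {
	fixed := machineDir + `\fixed.vhd`
	disk := machineDir + `\disk.vhd`
	steps := []string{
		fmt.Sprintf(`Hyper-V\New-VHD -Path '%s' -SizeBytes 10MB -Fixed`, fixed),
		// (the boot payload would be spliced into fixed.vhd here, before conversion)
		fmt.Sprintf(`Hyper-V\Convert-VHD -Path '%s' -DestinationPath '%s' -VHDType Dynamic -DeleteSource`, fixed, disk),
		fmt.Sprintf(`Hyper-V\Resize-VHD -Path '%s' -SizeBytes %dMB`, disk, sizeMB),
	}
	for _, step := range steps {
		if err := ps(step); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	fmt.Println(prepareDisk(`C:\tmp\demo-vm`, 20000))
}
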
	I0501 02:51:16.614122    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-136200-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0501 02:51:20.349642    4712 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-136200-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0501 02:51:20.349642    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:20.349642    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-136200-m02 -DynamicMemoryEnabled $false
	I0501 02:51:22.595804    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:22.595804    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:22.596839    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-136200-m02 -Count 2
	I0501 02:51:24.783891    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:24.783891    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:24.783891    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-136200-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\boot2docker.iso'
	I0501 02:51:27.309419    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:27.309419    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:27.310044    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-136200-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\disk.vhd'
	I0501 02:51:29.998833    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:29.998833    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:29.998833    4712 main.go:141] libmachine: Starting VM...
	I0501 02:51:29.998833    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-136200-m02
	I0501 02:51:33.080959    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:33.080959    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:33.080959    4712 main.go:141] libmachine: Waiting for host to start...
	I0501 02:51:33.080959    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:51:35.347158    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:51:35.348049    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:35.348049    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:51:37.880551    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:37.881580    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:38.889792    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:51:41.091102    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:51:41.091102    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:41.091533    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:51:43.621201    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:43.621813    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:44.622350    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:51:46.859140    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:51:46.859140    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:46.859140    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:51:49.413174    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:49.413174    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:50.423751    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:51:52.633336    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:51:52.633336    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:52.634051    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:51:55.225142    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:55.225142    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:56.229253    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:51:58.424704    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:51:58.424704    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:58.425395    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:01.088984    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:01.088984    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:01.089224    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:03.247035    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:03.247253    4712 main.go:141] libmachine: [stderr =====>] : 
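
The repeated state/ipaddresses probes above are a plain polling loop: Hyper-V reports the VM as Running well before the guest adapter has an address, so the provisioner keeps asking until an IP comes back. A hypothetical sketch of that loop (interval and timeout are illustrative, not taken from the source):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// psOut runs a PowerShell expression and returns its trimmed stdout.
func psOut(command string) (string, error) {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", command).Output()
	return strings.TrimSpace(string(out)), err
}

// waitForIP polls VM state and the first adapter's first address, as above.
func waitForIP(vm string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		state, _ := psOut(fmt.Sprintf(`( Hyper-V\Get-VM %s ).state`, vm))
		if state == "Running" {
			ip, _ := psOut(fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm))
			if ip != "" {
				return ip, nil
			}
		}
		time.Sleep(time.Second) // the log shows roughly one attempt per ~6s round trip
	}
	return "", fmt.Errorf("timed out waiting for %s to report an IP", vm)
}

func main() {
	ip, err := waitForIP("ha-136200-m02", 5*time.Minute)
	fmt.Println(ip, err)
}
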
	I0501 02:52:03.247291    4712 machine.go:94] provisionDockerMachine start ...
	I0501 02:52:03.247449    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:05.493082    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:05.493179    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:05.493179    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:08.078374    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:08.078374    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:08.085777    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:52:08.101463    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.142 22 <nil> <nil>}
	I0501 02:52:08.101463    4712 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 02:52:08.244557    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0501 02:52:08.244557    4712 buildroot.go:166] provisioning hostname "ha-136200-m02"
	I0501 02:52:08.244557    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:10.395193    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:10.395193    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:10.396068    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:12.968300    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:12.968300    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:12.975111    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:52:12.975111    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.142 22 <nil> <nil>}
	I0501 02:52:12.975111    4712 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-136200-m02 && echo "ha-136200-m02" | sudo tee /etc/hostname
	I0501 02:52:13.142328    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-136200-m02
	
	I0501 02:52:13.142479    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:15.318537    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:15.318537    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:15.318537    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:17.993085    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:17.993267    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:18.000242    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:52:18.000687    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.142 22 <nil> <nil>}
	I0501 02:52:18.000687    4712 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-136200-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-136200-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-136200-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 02:52:18.164116    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: 
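
The script above pins the new hostname idempotently: skip if /etc/hosts already lists it, replace an existing 127.0.1.1 entry if present, append one otherwise. A hypothetical helper that renders that script for any hostname (the real provisioner assembles it elsewhere):

package main

import "fmt"

// hostsFixup renders the idempotent /etc/hosts script seen above.
func hostsFixup(hostname string) string {
	return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, hostname)
}

func main() {
	fmt.Println(hostsFixup("ha-136200-m02"))
}
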
	I0501 02:52:18.164116    4712 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0501 02:52:18.164235    4712 buildroot.go:174] setting up certificates
	I0501 02:52:18.164235    4712 provision.go:84] configureAuth start
	I0501 02:52:18.164235    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:20.323803    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:20.324816    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:20.324954    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:22.884982    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:22.884982    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:22.884982    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:25.037258    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:25.038231    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:25.038262    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:27.637529    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:27.638462    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:27.638462    4712 provision.go:143] copyHostCerts
	I0501 02:52:27.638661    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0501 02:52:27.638979    4712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0501 02:52:27.639093    4712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0501 02:52:27.639613    4712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0501 02:52:27.640827    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0501 02:52:27.641053    4712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0501 02:52:27.641053    4712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0501 02:52:27.641053    4712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0501 02:52:27.642372    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0501 02:52:27.642643    4712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0501 02:52:27.642762    4712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0501 02:52:27.643264    4712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0501 02:52:27.644242    4712 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-136200-m02 san=[127.0.0.1 172.28.213.142 ha-136200-m02 localhost minikube]
	I0501 02:52:27.843189    4712 provision.go:177] copyRemoteCerts
	I0501 02:52:27.855361    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 02:52:27.855361    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:29.952775    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:29.952775    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:29.953607    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:32.549323    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:32.549356    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:32.549913    4712 sshutil.go:53] new ssh client: &{IP:172.28.213.142 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\id_rsa Username:docker}
	I0501 02:52:32.667202    4712 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8118058s)
	I0501 02:52:32.667353    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0501 02:52:32.667882    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0501 02:52:32.721631    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0501 02:52:32.721631    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 02:52:32.771533    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0501 02:52:32.772177    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0501 02:52:32.825532    4712 provision.go:87] duration metric: took 14.6610374s to configureAuth
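
configureAuth boils down to issuing a server certificate whose SANs cover every name and address the node answers to (the san=[...] list above). A simplified, hypothetical sketch with crypto/x509; the CA here is generated inline for illustration and PEM encoding is left out entirely:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// issueServerCert signs a server cert with the SANs seen in the log above.
func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-136200-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.28.213.142")},
		DNSNames:     []string{"ha-136200-m02", "localhost", "minikube"},
	}
	return x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
}

func main() {
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	der, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	ca, _ := x509.ParseCertificate(der)
	cert, err := issueServerCert(ca, caKey)
	fmt.Println(len(cert), err)
}
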
	I0501 02:52:32.825532    4712 buildroot.go:189] setting minikube options for container-runtime
	I0501 02:52:32.826094    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:52:32.826229    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:34.944371    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:34.945326    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:34.945326    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:37.500533    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:37.500590    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:37.506891    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:52:37.507395    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.142 22 <nil> <nil>}
	I0501 02:52:37.507476    4712 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0501 02:52:37.655757    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0501 02:52:37.655757    4712 buildroot.go:70] root file system type: tmpfs
	I0501 02:52:37.655757    4712 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0501 02:52:37.656297    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:39.802845    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:39.802845    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:39.803012    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:42.365445    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:42.366335    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:42.372086    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:52:42.372086    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.142 22 <nil> <nil>}
	I0501 02:52:42.372086    4712 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.28.217.218"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0501 02:52:42.560633    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.28.217.218
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0501 02:52:42.560698    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:44.723552    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:44.723552    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:44.724351    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:47.350624    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:47.350694    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:47.356560    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:52:47.356887    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.142 22 <nil> <nil>}
	I0501 02:52:47.356887    4712 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0501 02:52:49.658910    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
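
The unit text above is rendered on the host and piped through sudo tee, then swapped in only when diff reports a change, which keeps the restart idempotent; the empty ExecStart= line clears the inherited command exactly as the embedded comments explain. A minimal, hypothetical sketch of the rendering step with text/template, showing only a fragment of the real unit:

package main

import (
	"os"
	"text/template"
)

// unit is an abbreviated stand-in for the docker.service payload above.
const unit = `[Service]
Environment="NO_PROXY={{.NoProxy}}"
# Clear the inherited ExecStart before setting our own.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:{{.Port}} --tlsverify --label provider={{.Provider}}
`

func main() {
	t := template.Must(template.New("docker").Parse(unit))
	t.Execute(os.Stdout, struct {
		NoProxy  string
		Port     int
		Provider string
	}{"172.28.217.218", 2376, "hyperv"})
}
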
	
	I0501 02:52:49.658910    4712 machine.go:97] duration metric: took 46.4112065s to provisionDockerMachine
	I0501 02:52:49.659442    4712 client.go:171] duration metric: took 1m57.7858628s to LocalClient.Create
	I0501 02:52:49.659442    4712 start.go:167] duration metric: took 1m57.786395s to libmachine.API.Create "ha-136200"
	I0501 02:52:49.659503    4712 start.go:293] postStartSetup for "ha-136200-m02" (driver="hyperv")
	I0501 02:52:49.659537    4712 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 02:52:49.675636    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 02:52:49.675636    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:51.837386    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:51.837492    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:51.837492    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:54.474409    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:54.475041    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:54.475353    4712 sshutil.go:53] new ssh client: &{IP:172.28.213.142 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\id_rsa Username:docker}
	I0501 02:52:54.588525    4712 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9128536s)
	I0501 02:52:54.605879    4712 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 02:52:54.614578    4712 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 02:52:54.614578    4712 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0501 02:52:54.615019    4712 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0501 02:52:54.615983    4712 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> 142882.pem in /etc/ssl/certs
	I0501 02:52:54.616061    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /etc/ssl/certs/142882.pem
	I0501 02:52:54.630716    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 02:52:54.652380    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /etc/ssl/certs/142882.pem (1708 bytes)
	I0501 02:52:54.707179    4712 start.go:296] duration metric: took 5.0475618s for postStartSetup
	I0501 02:52:54.709671    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:56.857631    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:56.857631    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:56.858662    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:59.468337    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:59.468783    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:59.468965    4712 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json ...
	I0501 02:52:59.470910    4712 start.go:128] duration metric: took 2m7.6009059s to createHost
	I0501 02:52:59.471488    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:53:01.642267    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:53:01.642267    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:01.642528    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:53:04.217977    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:53:04.217977    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:04.224906    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:53:04.225471    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.142 22 <nil> <nil>}
	I0501 02:53:04.225635    4712 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0501 02:53:04.373720    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714531984.377348123
	
	I0501 02:53:04.373720    4712 fix.go:216] guest clock: 1714531984.377348123
	I0501 02:53:04.373720    4712 fix.go:229] Guest: 2024-05-01 02:53:04.377348123 +0000 UTC Remote: 2024-05-01 02:52:59.4709109 +0000 UTC m=+340.350757801 (delta=4.906437223s)
	I0501 02:53:04.373851    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:53:06.539924    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:53:06.539924    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:06.540324    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:53:09.204905    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:53:09.204905    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:09.211685    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:53:09.212163    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.142 22 <nil> <nil>}
	I0501 02:53:09.212163    4712 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714531984
	I0501 02:53:09.386381    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed May  1 02:53:04 UTC 2024
	
	I0501 02:53:09.386381    4712 fix.go:236] clock set: Wed May  1 02:53:04 UTC 2024
	 (err=<nil>)
	I0501 02:53:09.386381    4712 start.go:83] releasing machines lock for "ha-136200-m02", held for 2m17.5170158s
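
The guest clock above drifted ~4.9s from the host, past tolerance, so the provisioner reset it with sudo date -s. A hypothetical sketch of the delta check; the parsing, threshold, and sample values are illustrative:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and compares it
// against a host-side reference timestamp.
func clockDelta(guestOutput string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	// Sample values taken from the log lines above.
	d, _ := clockDelta("1714531984.377348123", time.Unix(1714531979, 470910900))
	fmt.Println("delta:", d)
	if d > 2*time.Second || d < -2*time.Second { // threshold is illustrative
		fmt.Println("would run: sudo date -s @<guest-corrected-epoch>")
	}
}
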
	I0501 02:53:09.386381    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:53:11.545475    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:53:11.545475    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:11.545938    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:53:14.171918    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:53:14.171918    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:14.175393    4712 out.go:177] * Found network options:
	I0501 02:53:14.178428    4712 out.go:177]   - NO_PROXY=172.28.217.218
	W0501 02:53:14.181305    4712 proxy.go:119] fail to check proxy env: Error ip not in block
	I0501 02:53:14.183961    4712 out.go:177]   - NO_PROXY=172.28.217.218
	W0501 02:53:14.186016    4712 proxy.go:119] fail to check proxy env: Error ip not in block
	W0501 02:53:14.186987    4712 proxy.go:119] fail to check proxy env: Error ip not in block
	I0501 02:53:14.190185    4712 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 02:53:14.190185    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:53:14.201210    4712 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0501 02:53:14.201210    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:53:16.402596    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:53:16.402596    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:16.402596    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:53:16.404382    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:53:16.404922    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:16.404922    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:53:19.202467    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:53:19.202936    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:19.203019    4712 sshutil.go:53] new ssh client: &{IP:172.28.213.142 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\id_rsa Username:docker}
	I0501 02:53:19.238045    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:53:19.238494    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:19.238494    4712 sshutil.go:53] new ssh client: &{IP:172.28.213.142 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\id_rsa Username:docker}
	I0501 02:53:19.303673    4712 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.1023631s)
	W0501 02:53:19.303730    4712 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 02:53:19.322303    4712 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 02:53:19.425813    4712 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.234512s)
	I0501 02:53:19.425813    4712 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0501 02:53:19.425869    4712 start.go:494] detecting cgroup driver to use...
	I0501 02:53:19.426179    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 02:53:19.480110    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0501 02:53:19.516304    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0501 02:53:19.540429    4712 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0501 02:53:19.554725    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0501 02:53:19.592793    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 02:53:19.638122    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0501 02:53:19.676636    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 02:53:19.716798    4712 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 02:53:19.755079    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0501 02:53:19.792962    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0501 02:53:19.828507    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0501 02:53:19.864630    4712 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 02:53:19.900003    4712 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 02:53:19.933687    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:53:20.164043    4712 ssh_runner.go:195] Run: sudo systemctl restart containerd
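
Before settling on Docker, the provisioner rewrites /etc/containerd/config.toml with a series of sed rules (cgroupfs driver, runc v2 shim, CNI conf dir) and restarts the daemon. A hedged sketch of that rewrite loop; the rules are copied from the log, but running them locally via sh is purely for illustration:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	rules := []string{
		`sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml`,
		`sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml`,
		`sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml`,
	}
	for _, rule := range rules {
		if out, err := exec.Command("sh", "-c", rule).CombinedOutput(); err != nil {
			fmt.Printf("%s: %v\n%s", rule, err, out)
		}
	}
	// Apply the edits, mirroring the daemon-reload/restart pair above.
	_ = exec.Command("sudo", "systemctl", "daemon-reload").Run()
	_ = exec.Command("sudo", "systemctl", "restart", "containerd").Run()
}
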
	I0501 02:53:20.200981    4712 start.go:494] detecting cgroup driver to use...
	I0501 02:53:20.214486    4712 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0501 02:53:20.252522    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 02:53:20.291404    4712 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 02:53:20.342446    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 02:53:20.384719    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 02:53:20.433485    4712 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0501 02:53:20.493558    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 02:53:20.521863    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 02:53:20.572266    4712 ssh_runner.go:195] Run: which cri-dockerd
	I0501 02:53:20.592650    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0501 02:53:20.612894    4712 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0501 02:53:20.662972    4712 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0501 02:53:20.893661    4712 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0501 02:53:21.103995    4712 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0501 02:53:21.104092    4712 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0501 02:53:21.153897    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:53:21.367769    4712 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0501 02:53:23.926290    4712 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5584356s)
	I0501 02:53:23.942886    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0501 02:53:23.985733    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0501 02:53:24.029327    4712 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0501 02:53:24.262777    4712 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0501 02:53:24.474412    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:53:24.701708    4712 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0501 02:53:24.747995    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0501 02:53:24.789968    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:53:25.013627    4712 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0501 02:53:25.132301    4712 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0501 02:53:25.147412    4712 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0501 02:53:25.161719    4712 start.go:562] Will wait 60s for crictl version
	I0501 02:53:25.177972    4712 ssh_runner.go:195] Run: which crictl
	I0501 02:53:25.198441    4712 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 02:53:25.257309    4712 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0501 02:53:25.270183    4712 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0501 02:53:25.317675    4712 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0501 02:53:25.366446    4712 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0501 02:53:25.369267    4712 out.go:177]   - env NO_PROXY=172.28.217.218
	I0501 02:53:25.371205    4712 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0501 02:53:25.375182    4712 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0501 02:53:25.375182    4712 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0501 02:53:25.375182    4712 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0501 02:53:25.375182    4712 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:88:d7:f1 Flags:up|broadcast|multicast|running}
	I0501 02:53:25.380319    4712 ip.go:210] interface addr: fe80::916c:67e8:6e10:a19b/64
	I0501 02:53:25.380407    4712 ip.go:210] interface addr: 172.28.208.1/20
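
getIPForInterface scans the host's adapters for one whose name starts with the vEthernet switch prefix and takes its first IPv4 address, which is then written into the guest's /etc/hosts as host.minikube.internal. A self-contained, hypothetical reimplementation with the standard net package:

package main

import (
	"fmt"
	"net"
	"strings"
)

// ipForInterface returns the first IPv4 address of the first interface
// whose name matches the given prefix, as in the search logged above.
func ipForInterface(prefix string) (net.IP, error) {
	ifaces, err := net.Interfaces()
	if err != nil {
		return nil, err
	}
	for _, ifc := range ifaces {
		if !strings.HasPrefix(ifc.Name, prefix) {
			continue // e.g. "Ethernet 2" does not match, as in the log
		}
		addrs, err := ifc.Addrs()
		if err != nil {
			return nil, err
		}
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() != nil {
				return ipnet.IP, nil // e.g. 172.28.208.1/20 above
			}
		}
	}
	return nil, fmt.Errorf("no interface matching %q", prefix)
}

func main() {
	ip, err := ipForInterface("vEthernet (Default Switch)")
	fmt.Println(ip, err)
}
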
	I0501 02:53:25.393209    4712 ssh_runner.go:195] Run: grep 172.28.208.1	host.minikube.internal$ /etc/hosts
	I0501 02:53:25.400057    4712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 02:53:25.423648    4712 mustload.go:65] Loading cluster: ha-136200
	I0501 02:53:25.424611    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:53:25.425544    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:53:27.528822    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:53:27.528822    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:27.528822    4712 host.go:66] Checking if "ha-136200" exists ...
	I0501 02:53:27.530295    4712 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200 for IP: 172.28.213.142
	I0501 02:53:27.530371    4712 certs.go:194] generating shared ca certs ...
	I0501 02:53:27.530371    4712 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:53:27.531276    4712 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0501 02:53:27.531739    4712 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0501 02:53:27.531846    4712 certs.go:256] generating profile certs ...
	I0501 02:53:27.532594    4712 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\client.key
	I0501 02:53:27.532748    4712 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.e4130e12
	I0501 02:53:27.532985    4712 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.e4130e12 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.217.218 172.28.213.142 172.28.223.254]
	I0501 02:53:27.709722    4712 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.e4130e12 ...
	I0501 02:53:27.709722    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.e4130e12: {Name:mk2a82749362965014fb3e2d8d662f7a4e7e9cdd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:53:27.711740    4712 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.e4130e12 ...
	I0501 02:53:27.711740    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.e4130e12: {Name:mkb73c4ed44f1dd1b8f82d46a1302578ac6f367b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:53:27.712120    4712 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.e4130e12 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt
	I0501 02:53:27.726267    4712 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.e4130e12 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key
	I0501 02:53:27.727349    4712 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key
	I0501 02:53:27.727349    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0501 02:53:27.727349    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0501 02:53:27.728383    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0501 02:53:27.728582    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0501 02:53:27.728825    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0501 02:53:27.729015    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0501 02:53:27.729253    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0501 02:53:27.729653    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0501 02:53:27.729899    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem (1338 bytes)
	W0501 02:53:27.730538    4712 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288_empty.pem, impossibly tiny 0 bytes
	I0501 02:53:27.730538    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0501 02:53:27.730927    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0501 02:53:27.731437    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0501 02:53:27.731866    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0501 02:53:27.732310    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem (1708 bytes)
	I0501 02:53:27.732905    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:53:27.733131    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem -> /usr/share/ca-certificates/14288.pem
	I0501 02:53:27.733384    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /usr/share/ca-certificates/142882.pem
	I0501 02:53:27.733671    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:53:29.906327    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:53:29.906327    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:29.906678    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:53:32.469869    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:53:32.469869    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:32.470407    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:53:32.580880    4712 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0501 02:53:32.588963    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0501 02:53:32.624993    4712 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0501 02:53:32.635801    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0501 02:53:32.670832    4712 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0501 02:53:32.678812    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0501 02:53:32.713791    4712 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0501 02:53:32.721308    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0501 02:53:32.760244    4712 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0501 02:53:32.767565    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0501 02:53:32.804387    4712 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0501 02:53:32.811905    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
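
The stat/scp pairs above are minikube's existence checks: stat -c %s prints only the byte size (a non-zero exit means the path is absent), and each cluster-shared secret (sa.pub/sa.key, the front-proxy CA, the etcd CA) is read off the primary control plane into memory so it can be replayed onto the joining node below. A rough shell equivalent of one fetch, assuming key-based SSH to the primary at 172.28.217.218 from this log (the loop itself is illustrative, not minikube code):

    for f in sa.pub sa.key front-proxy-ca.crt front-proxy-ca.key; do
      # stat exits non-zero if the file is missing; on success, read it locally
      ssh docker@172.28.217.218 "stat -c %s /var/lib/minikube/certs/$f" >/dev/null &&
        ssh docker@172.28.217.218 "cat /var/lib/minikube/certs/$f" > "$f"
    done
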
	I0501 02:53:32.832394    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 02:53:32.885891    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 02:53:32.936137    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 02:53:32.994824    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0501 02:53:33.054042    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0501 02:53:33.105998    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0501 02:53:33.156026    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 02:53:33.205426    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 02:53:33.264385    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 02:53:33.316776    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem --> /usr/share/ca-certificates/14288.pem (1338 bytes)
	I0501 02:53:33.368293    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /usr/share/ca-certificates/142882.pem (1708 bytes)
	I0501 02:53:33.420965    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0501 02:53:33.458001    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0501 02:53:33.499072    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0501 02:53:33.534603    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0501 02:53:33.570373    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0501 02:53:33.602430    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0501 02:53:33.635495    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0501 02:53:33.684802    4712 ssh_runner.go:195] Run: openssl version
	I0501 02:53:33.709070    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 02:53:33.743711    4712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:53:33.750970    4712 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:12 /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:53:33.765746    4712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:53:33.787709    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 02:53:33.828429    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14288.pem && ln -fs /usr/share/ca-certificates/14288.pem /etc/ssl/certs/14288.pem"
	I0501 02:53:33.866546    4712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14288.pem
	I0501 02:53:33.874255    4712 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:27 /usr/share/ca-certificates/14288.pem
	I0501 02:53:33.888580    4712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14288.pem
	I0501 02:53:33.910501    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14288.pem /etc/ssl/certs/51391683.0"
	I0501 02:53:33.948720    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142882.pem && ln -fs /usr/share/ca-certificates/142882.pem /etc/ssl/certs/142882.pem"
	I0501 02:53:33.993042    4712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142882.pem
	I0501 02:53:34.001989    4712 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:27 /usr/share/ca-certificates/142882.pem
	I0501 02:53:34.015762    4712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142882.pem
	I0501 02:53:34.040058    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142882.pem /etc/ssl/certs/3ec20f2e.0"
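
The three test/ln blocks above install each CA using OpenSSL's hashed-symlink convention: openssl x509 -hash -noout prints the certificate's subject-name hash, and /etc/ssl/certs/<hash>.0 is the name OpenSSL resolves at verification time (b5213941.0 is that hash for minikubeCA here). A minimal sketch of the same installation for one cert, assuming it is already present under /usr/share/ca-certificates:

    # link the CA into the trust dir, then again under its subject hash
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in /etc/ssl/certs/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # b5213941.0 here
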
	I0501 02:53:34.077501    4712 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 02:53:34.086036    4712 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0501 02:53:34.086573    4712 kubeadm.go:928] updating node {m02 172.28.213.142 8443 v1.30.0 docker true true} ...
	I0501 02:53:34.086726    4712 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-136200-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.213.142
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP:172.28.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 02:53:34.086726    4712 kube-vip.go:111] generating kube-vip config ...
	I0501 02:53:34.101653    4712 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0501 02:53:34.130866    4712 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0501 02:53:34.131029    4712 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.28.223.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
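
The manifest above runs kube-vip as a static pod on each control plane: leader election (vip_leaderelection with the plndr-cp-lock lease) ensures exactly one node holds the VIP 172.28.223.254 on eth0, and lb_enable/lb_port add load-balancing of API-server traffic on 8443 across members. Once the pod is up, two quick checks confirm the VIP is live (IPs taken from this log; the commands are illustrative):

    # the current leader should carry the VIP as an extra address on eth0
    ip addr show eth0 | grep -F 172.28.223.254
    # and the API server should answer through the VIP
    curl -k https://172.28.223.254:8443/version
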
	I0501 02:53:34.145238    4712 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 02:53:34.165400    4712 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0501 02:53:34.180369    4712 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0501 02:53:34.204849    4712 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet
	I0501 02:53:34.204849    4712 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm
	I0501 02:53:34.204849    4712 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl
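
Each download above carries a checksum=file: suffix, so the binary is verified against its published .sha256 before being cached. The same verification by hand follows the standard upstream pattern:

    # fetch kubelet v1.30.0 and check it against the published checksum
    curl -LO https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet
    echo "$(curl -sL https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256)  kubelet" | sha256sum --check
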
	I0501 02:53:35.468257    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0501 02:53:35.481254    4712 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0501 02:53:35.488247    4712 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0501 02:53:35.489247    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0501 02:53:35.546630    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0501 02:53:35.559624    4712 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0501 02:53:35.626048    4712 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0501 02:53:35.627145    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0501 02:53:36.028150    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:53:36.077312    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0501 02:53:36.090870    4712 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0501 02:53:36.109939    4712 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0501 02:53:36.111871    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
	I0501 02:53:36.821139    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0501 02:53:36.843821    4712 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0501 02:53:36.878070    4712 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 02:53:36.917707    4712 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0501 02:53:36.971960    4712 ssh_runner.go:195] Run: grep 172.28.223.254	control-plane.minikube.internal$ /etc/hosts
	I0501 02:53:36.979482    4712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.223.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
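
The one-liner above pins control-plane.minikube.internal to the HA VIP: it filters any stale entry out of /etc/hosts, appends the fresh mapping, writes the result to a PID-suffixed temp file, and copies it back with sudo (a plain sudo redirection would not work, since the redirect is opened by the unprivileged shell). Unrolled:

    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      echo $'172.28.223.254\tcontrol-plane.minikube.internal'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
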
	I0501 02:53:37.020702    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:53:37.250249    4712 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 02:53:37.282989    4712 host.go:66] Checking if "ha-136200" exists ...
	I0501 02:53:37.299000    4712 start.go:316] joinCluster: &{Name:ha-136200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP:172.28.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.217.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.213.142 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:53:37.299000    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0501 02:53:37.299000    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:53:39.432833    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:53:39.432833    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:39.433070    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:53:42.011853    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:53:42.011853    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:42.012855    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:53:42.240815    4712 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0": (4.9416996s)
	I0501 02:53:42.240889    4712 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.28.213.142 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0501 02:53:42.240889    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ig07su.dw1rkx9dngecbwrb --discovery-token-ca-cert-hash sha256:4798dcaffc6298c78af5ad06745de18c1231853c65c9db4cd09ab1be96f5e875 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-136200-m02 --control-plane --apiserver-advertise-address=172.28.213.142 --apiserver-bind-port=8443"
	I0501 02:54:27.807891    4712 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ig07su.dw1rkx9dngecbwrb --discovery-token-ca-cert-hash sha256:4798dcaffc6298c78af5ad06745de18c1231853c65c9db4cd09ab1be96f5e875 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-136200-m02 --control-plane --apiserver-advertise-address=172.28.213.142 --apiserver-bind-port=8443": (45.5666728s)
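
Those two commands are the heart of the HA join: the primary mints a non-expiring bootstrap token (--ttl=0) and prints a ready-made join line, which is then replayed on m02 with --control-plane so it joins as an additional API server rather than a worker. A trimmed sketch of the flow, with the token and CA hash as placeholders:

    # on the existing control plane: print a reusable join command
    kubeadm token create --print-join-command --ttl=0
    # on the joining node: run the printed command plus control-plane flags
    sudo kubeadm join control-plane.minikube.internal:8443 \
      --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
      --control-plane --apiserver-advertise-address=172.28.213.142
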
	I0501 02:54:27.808014    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0501 02:54:28.660694    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-136200-m02 minikube.k8s.io/updated_at=2024_05_01T02_54_28_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e minikube.k8s.io/name=ha-136200 minikube.k8s.io/primary=false
	I0501 02:54:28.861404    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-136200-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0501 02:54:29.035785    4712 start.go:318] duration metric: took 51.7364106s to joinCluster
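
The label/taint pair just before the duration metric finishes the join: the node is stamped with minikube metadata, and the trailing '-' on the taint expression removes node-role.kubernetes.io/control-plane:NoSchedule, so this profile's control-plane nodes also accept workloads (Worker:true above). Standalone equivalents:

    kubectl label --overwrite nodes ha-136200-m02 minikube.k8s.io/primary=false
    # the trailing '-' deletes the taint instead of adding it
    kubectl taint nodes ha-136200-m02 node-role.kubernetes.io/control-plane:NoSchedule-
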
	I0501 02:54:29.035979    4712 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.28.213.142 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0501 02:54:29.038999    4712 out.go:177] * Verifying Kubernetes components...
	I0501 02:54:29.036818    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:54:29.055991    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:54:29.482004    4712 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 02:54:29.532870    4712 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0501 02:54:29.534181    4712 kapi.go:59] client config for ha-136200: &rest.Config{Host:"https://172.28.223.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-136200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-136200\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1b95ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0501 02:54:29.534386    4712 kubeadm.go:477] Overriding stale ClientConfig host https://172.28.223.254:8443 with https://172.28.217.218:8443
	I0501 02:54:29.535958    4712 node_ready.go:35] waiting up to 6m0s for node "ha-136200-m02" to be "Ready" ...
	I0501 02:54:29.536236    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:29.536236    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:29.536236    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:29.536353    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:29.609745    4712 round_trippers.go:574] Response Status: 200 OK in 73 milliseconds
	I0501 02:54:30.045557    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:30.045557    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:30.045557    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:30.045557    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:30.051535    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:30.542020    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:30.542083    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:30.542148    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:30.542148    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:30.549047    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:31.050630    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:31.050694    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:31.050694    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:31.050694    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:31.063209    4712 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0501 02:54:31.542025    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:31.542098    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:31.542098    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:31.542098    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:31.548667    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:31.549663    4712 node_ready.go:53] node "ha-136200-m02" has status "Ready":"False"
	I0501 02:54:32.050097    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:32.050097    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:32.050174    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:32.050174    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:32.054568    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:32.542017    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:32.542017    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:32.542017    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:32.542017    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:32.546488    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:33.050866    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:33.050866    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:33.050866    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:33.050866    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:33.056451    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:33.538033    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:33.538033    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:33.538033    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:33.538033    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:33.713541    4712 round_trippers.go:574] Response Status: 200 OK in 175 milliseconds
	I0501 02:54:33.714615    4712 node_ready.go:53] node "ha-136200-m02" has status "Ready":"False"
	I0501 02:54:34.041226    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:34.041226    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:34.041226    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:34.041226    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:34.047903    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:34.547454    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:34.547454    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:34.547757    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:34.547757    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:34.552099    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:35.046877    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:35.046877    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.046877    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.046877    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.052278    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:35.548463    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:35.548463    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.548740    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.548740    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.558660    4712 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0501 02:54:35.560213    4712 node_ready.go:49] node "ha-136200-m02" has status "Ready":"True"
	I0501 02:54:35.560213    4712 node_ready.go:38] duration metric: took 6.0241453s for node "ha-136200-m02" to be "Ready" ...
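
The GET loop above is minikube polling the node object roughly every 500ms until its Ready condition reports True, which took about 6s here. The same wait can be expressed directly with kubectl:

    kubectl wait --for=condition=Ready node/ha-136200-m02 --timeout=6m
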
	I0501 02:54:35.560332    4712 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 02:54:35.560422    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:54:35.560422    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.560422    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.560422    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.572050    4712 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0501 02:54:35.581777    4712 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2j8mj" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:35.581924    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2j8mj
	I0501 02:54:35.581924    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.581924    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.581924    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.585770    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:54:35.587608    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:54:35.587685    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.587685    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.587685    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.591867    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:35.591867    4712 pod_ready.go:92] pod "coredns-7db6d8ff4d-2j8mj" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:35.591867    4712 pod_ready.go:81] duration metric: took 10.0903ms for pod "coredns-7db6d8ff4d-2j8mj" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:35.591867    4712 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rm4gm" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:35.591867    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rm4gm
	I0501 02:54:35.591867    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.591867    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.591867    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.596249    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:35.597880    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:54:35.597964    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.597964    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.597964    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.602327    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:35.603007    4712 pod_ready.go:92] pod "coredns-7db6d8ff4d-rm4gm" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:35.603007    4712 pod_ready.go:81] duration metric: took 11.1397ms for pod "coredns-7db6d8ff4d-rm4gm" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:35.603007    4712 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:35.604166    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200
	I0501 02:54:35.604211    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.604211    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.604211    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.610508    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:35.611831    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:54:35.611831    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.611831    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.611831    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.627921    4712 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0501 02:54:35.629498    4712 pod_ready.go:92] pod "etcd-ha-136200" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:35.629498    4712 pod_ready.go:81] duration metric: took 26.4906ms for pod "etcd-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:35.629498    4712 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:35.629498    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:35.629498    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.629498    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.629498    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.638393    4712 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0501 02:54:35.638911    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:35.638911    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.638911    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.639550    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.643473    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:54:36.140037    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:36.140167    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:36.140167    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:36.140167    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:36.148123    4712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:54:36.149580    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:36.149580    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:36.149659    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:36.149659    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:36.155530    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:36.644340    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:36.644340    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:36.644340    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:36.644340    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:36.651321    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:36.652588    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:36.653128    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:36.653128    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:36.653128    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:36.660377    4712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:54:37.144534    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:37.144656    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:37.144656    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:37.144656    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:37.150598    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:37.152092    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:37.152665    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:37.152665    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:37.152665    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:37.160441    4712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:54:37.644104    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:37.644239    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:37.644239    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:37.644239    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:37.649836    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:37.650604    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:37.650671    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:37.650671    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:37.650671    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:37.654947    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:37.656164    4712 pod_ready.go:102] pod "etcd-ha-136200-m02" in "kube-system" namespace has status "Ready":"False"
	I0501 02:54:38.142505    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:38.142505    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:38.142505    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:38.142505    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:38.149100    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:38.151258    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:38.151347    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:38.151347    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:38.151347    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:38.155677    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:38.643186    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:38.643241    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:38.643241    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:38.643241    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:38.650578    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:38.651873    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:38.651902    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:38.651902    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:38.651902    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:38.655946    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:39.142681    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:39.142681    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:39.142681    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:39.142681    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:39.148315    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:39.149953    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:39.150203    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:39.150203    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:39.150203    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:39.154771    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:39.643364    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:39.643364    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:39.643364    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:39.643364    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:39.649703    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:39.650947    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:39.650947    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:39.651009    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:39.651009    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:39.654949    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:54:39.656190    4712 pod_ready.go:102] pod "etcd-ha-136200-m02" in "kube-system" namespace has status "Ready":"False"
	I0501 02:54:40.142428    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:40.142428    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:40.142676    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:40.142676    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:40.148562    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:40.149792    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:40.149792    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:40.149792    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:40.149792    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:40.154808    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:40.644095    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:40.644095    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:40.644095    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:40.644095    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:40.650441    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:40.651544    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:40.651598    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:40.651598    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:40.651598    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:40.662172    4712 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0501 02:54:41.143094    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:41.143187    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:41.143187    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:41.143187    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:41.148870    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:41.150018    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:41.150018    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:41.150018    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:41.150018    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:41.156810    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:41.640508    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:41.640624    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:41.640624    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:41.640624    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:41.646018    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:41.646730    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:41.647318    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:41.647318    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:41.647318    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:41.652880    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:42.139900    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:42.139985    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:42.139985    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:42.139985    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:42.145577    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:42.146383    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:42.146383    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:42.146448    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:42.146448    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:42.151141    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:54:42.151862    4712 pod_ready.go:102] pod "etcd-ha-136200-m02" in "kube-system" namespace has status "Ready":"False"
	I0501 02:54:42.639271    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:42.639271    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:42.639271    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:42.639271    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:42.642318    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:54:42.646671    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:42.646671    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:42.646671    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:42.646671    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:42.651360    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:43.137151    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:43.137496    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.137496    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.137496    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.141750    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:43.142959    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:43.142959    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.142959    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.142959    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.147560    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:43.641950    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:43.641985    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.641985    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.641985    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.647599    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:43.649299    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:43.649350    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.649350    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.649350    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.657034    4712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:54:43.658043    4712 pod_ready.go:92] pod "etcd-ha-136200-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:43.658043    4712 pod_ready.go:81] duration metric: took 8.0284866s for pod "etcd-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:43.658043    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:43.658043    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-136200
	I0501 02:54:43.658043    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.658043    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.658043    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.664394    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:43.664394    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:54:43.664394    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.664394    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.664394    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.668848    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:43.669848    4712 pod_ready.go:92] pod "kube-apiserver-ha-136200" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:43.669848    4712 pod_ready.go:81] duration metric: took 11.805ms for pod "kube-apiserver-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:43.669848    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:43.669848    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-136200-m02
	I0501 02:54:43.669848    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.669848    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.670830    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.674754    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:54:43.676699    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:43.676699    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.676699    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.676699    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.681632    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:43.683231    4712 pod_ready.go:92] pod "kube-apiserver-ha-136200-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:43.683231    4712 pod_ready.go:81] duration metric: took 13.3825ms for pod "kube-apiserver-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:43.683231    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:43.683412    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-136200
	I0501 02:54:43.683412    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.683412    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.683412    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.688589    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:43.690255    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:54:43.690255    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.690325    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.690325    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.695853    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:43.696818    4712 pod_ready.go:92] pod "kube-controller-manager-ha-136200" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:43.696860    4712 pod_ready.go:81] duration metric: took 13.6296ms for pod "kube-controller-manager-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:43.696912    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:43.696993    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-136200-m02
	I0501 02:54:43.697029    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.697029    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.697029    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.701912    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:43.703032    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:43.703736    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.703736    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.703736    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.706383    4712 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:54:43.707734    4712 pod_ready.go:92] pod "kube-controller-manager-ha-136200-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:43.707824    4712 pod_ready.go:81] duration metric: took 10.9115ms for pod "kube-controller-manager-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:43.707824    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8f67k" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:43.845210    4712 request.go:629] Waited for 137.1807ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8f67k
	I0501 02:54:43.845681    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8f67k
	I0501 02:54:43.845681    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.845681    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.845681    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.851000    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:44.047046    4712 request.go:629] Waited for 194.7517ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:54:44.047200    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:54:44.047200    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:44.047200    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:44.047200    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:44.052548    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:44.053735    4712 pod_ready.go:92] pod "kube-proxy-8f67k" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:44.053735    4712 pod_ready.go:81] duration metric: took 345.9086ms for pod "kube-proxy-8f67k" in "kube-system" namespace to be "Ready" ...
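
	The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines above come from client-go's token-bucket rate limiter, not from the API server: the default rest.Config allows roughly 5 requests/second with a burst of 10, so tight status-poll loops get delayed on the client. A minimal sketch of raising those limits, assuming a kubeconfig path (illustrative helper, not minikube's actual code):

	    package main

	    import (
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    // newClientset builds a clientset with a larger client-side rate budget,
	    // so polling loops like the pod_ready checks above are not artificially delayed.
	    func newClientset(kubeconfig string) (*kubernetes.Clientset, error) {
	        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	        if err != nil {
	            return nil, err
	        }
	        cfg.QPS = 50    // default is 5 requests/second
	        cfg.Burst = 100 // default is 10
	        return kubernetes.NewForConfig(cfg)
	    }
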
	I0501 02:54:44.053735    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zj5jv" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:44.250128    4712 request.go:629] Waited for 196.1147ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zj5jv
	I0501 02:54:44.250128    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zj5jv
	I0501 02:54:44.250128    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:44.250128    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:44.250128    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:44.254761    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:44.456435    4712 request.go:629] Waited for 200.6839ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:44.456435    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:44.456435    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:44.456435    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:44.456435    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:44.461480    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:44.462518    4712 pod_ready.go:92] pod "kube-proxy-zj5jv" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:44.462578    4712 pod_ready.go:81] duration metric: took 408.7057ms for pod "kube-proxy-zj5jv" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:44.462578    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:44.648779    4712 request.go:629] Waited for 185.8104ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200
	I0501 02:54:44.648953    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200
	I0501 02:54:44.648953    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:44.648953    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:44.649128    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:44.654457    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:44.855621    4712 request.go:629] Waited for 199.4812ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:54:44.855706    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:54:44.855706    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:44.855706    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:44.855706    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:44.861147    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:44.861147    4712 pod_ready.go:92] pod "kube-scheduler-ha-136200" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:44.861699    4712 pod_ready.go:81] duration metric: took 399.1179ms for pod "kube-scheduler-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:44.861778    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:45.042766    4712 request.go:629] Waited for 180.9309ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200-m02
	I0501 02:54:45.042766    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200-m02
	I0501 02:54:45.042766    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:45.042766    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:45.042766    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:45.047379    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:45.244553    4712 request.go:629] Waited for 197.0101ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:45.244553    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:45.244553    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:45.244553    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:45.244553    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:45.250870    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:45.252485    4712 pod_ready.go:92] pod "kube-scheduler-ha-136200-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:45.252485    4712 pod_ready.go:81] duration metric: took 390.7033ms for pod "kube-scheduler-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:45.252547    4712 pod_ready.go:38] duration metric: took 9.6921442s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
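
	pod_ready.go gates on the PodReady condition of each control-plane pod before moving on. A sketch of the same check with client-go, assuming a clientset built as above; the 2-second poll interval is an assumption, while the 6-minute ceiling mirrors the "waiting up to 6m0s" lines:

	    import (
	        "context"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/apimachinery/pkg/util/wait"
	        "k8s.io/client-go/kubernetes"
	    )

	    // waitPodReady polls until the pod reports condition Ready=True.
	    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	        return wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
	            func(ctx context.Context) (bool, error) {
	                pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	                if err != nil {
	                    return false, nil // treat lookup errors as "not yet"; keep polling
	                }
	                for _, c := range pod.Status.Conditions {
	                    if c.Type == corev1.PodReady {
	                        return c.Status == corev1.ConditionTrue, nil
	                    }
	                }
	                return false, nil
	            })
	    }
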
	I0501 02:54:45.252619    4712 api_server.go:52] waiting for apiserver process to appear ...
	I0501 02:54:45.266607    4712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:54:45.298538    4712 api_server.go:72] duration metric: took 16.2624407s to wait for apiserver process to appear ...
	I0501 02:54:45.298538    4712 api_server.go:88] waiting for apiserver healthz status ...
	I0501 02:54:45.298642    4712 api_server.go:253] Checking apiserver healthz at https://172.28.217.218:8443/healthz ...
	I0501 02:54:45.308804    4712 api_server.go:279] https://172.28.217.218:8443/healthz returned 200:
	ok
	I0501 02:54:45.308804    4712 round_trippers.go:463] GET https://172.28.217.218:8443/version
	I0501 02:54:45.308804    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:45.308804    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:45.308804    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:45.310764    4712 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0501 02:54:45.311165    4712 api_server.go:141] control plane version: v1.30.0
	I0501 02:54:45.311238    4712 api_server.go:131] duration metric: took 12.7003ms to wait for apiserver health ...
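
	The health gate above is two raw GETs: /healthz must return the literal body "ok", then /version yields the control-plane version (v1.30.0 here). With a clientset the same pair looks roughly like this (a sketch, not minikube's code):

	    import (
	        "context"
	        "fmt"

	        "k8s.io/client-go/kubernetes"
	    )

	    // apiServerHealthy checks /healthz, then reports the server version.
	    func apiServerHealthy(ctx context.Context, cs kubernetes.Interface) error {
	        // /healthz is not part of the typed API, so issue a raw GET.
	        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
	        if err != nil {
	            return err
	        }
	        if string(body) != "ok" {
	            return fmt.Errorf("healthz returned %q", body)
	        }
	        v, err := cs.Discovery().ServerVersion()
	        if err != nil {
	            return err
	        }
	        fmt.Println("control plane version:", v.GitVersion)
	        return nil
	    }
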
	I0501 02:54:45.311238    4712 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 02:54:45.446869    4712 request.go:629] Waited for 135.3903ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:54:45.446869    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:54:45.446869    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:45.446869    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:45.446869    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:45.455463    4712 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0501 02:54:45.466055    4712 system_pods.go:59] 17 kube-system pods found
	I0501 02:54:45.466055    4712 system_pods.go:61] "coredns-7db6d8ff4d-2j8mj" [f945c979-ae51-4c8e-acf9-105adc3c83bc] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "coredns-7db6d8ff4d-rm4gm" [87b284b3-e8e1-452a-8c72-41a8bec62505] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "etcd-ha-136200" [509a726d-e9a1-4922-8e7e-f3d91ddef75f] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "etcd-ha-136200-m02" [8122eb28-1fdf-4ddf-ab30-c29e8bcb83c0] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kindnet-kb2x4" [6e660648-3dce-469f-a2c2-c99f445ceb20] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kindnet-sj2rc" [c0e605a0-1182-4977-a8ba-fabe9617bd3c] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-apiserver-ha-136200" [53ea7d41-7132-4c89-9dbd-bedb2267b55f] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-apiserver-ha-136200-m02" [fc4036e1-5cc9-4f27-8299-97ee4a29e8b4] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-controller-manager-ha-136200" [4c988ab2-e056-4a0e-88c9-b62839c84d9f] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-controller-manager-ha-136200-m02" [7a617a7e-7413-4f42-bfe2-763b7ace71ca] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-proxy-8f67k" [9dedea03-3066-4852-98e2-10190699b2c5] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-proxy-zj5jv" [1802b341-6ac6-46b0-99a3-db02ae5d8e46] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-scheduler-ha-136200" [6be37365-544a-4367-9852-6eaa5b60e6ad] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-scheduler-ha-136200-m02" [b2ae6bb2-989b-4598-99e3-f8494b006f3e] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-vip-ha-136200" [f6f631ac-0ba9-413a-8810-8a80e4be81b8] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-vip-ha-136200-m02" [598e76fa-0703-40eb-a62c-f3947f06d0e0] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "storage-provisioner" [ca2bdb41-e7f0-4aa4-b343-0e3a85a4c04e] Running
	I0501 02:54:45.466055    4712 system_pods.go:74] duration metric: took 154.8157ms to wait for pod list to return data ...
	I0501 02:54:45.466055    4712 default_sa.go:34] waiting for default service account to be created ...
	I0501 02:54:45.650374    4712 request.go:629] Waited for 183.8749ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/default/serviceaccounts
	I0501 02:54:45.650461    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/default/serviceaccounts
	I0501 02:54:45.650461    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:45.650566    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:45.650566    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:45.661544    4712 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0501 02:54:45.662734    4712 default_sa.go:45] found service account: "default"
	I0501 02:54:45.662869    4712 default_sa.go:55] duration metric: took 196.812ms for default service account to be created ...
	I0501 02:54:45.662869    4712 system_pods.go:116] waiting for k8s-apps to be running ...
	I0501 02:54:45.853192    4712 request.go:629] Waited for 189.9269ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:54:45.853192    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:54:45.853192    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:45.853419    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:45.853419    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:45.865601    4712 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0501 02:54:45.872777    4712 system_pods.go:86] 17 kube-system pods found
	I0501 02:54:45.872777    4712 system_pods.go:89] "coredns-7db6d8ff4d-2j8mj" [f945c979-ae51-4c8e-acf9-105adc3c83bc] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "coredns-7db6d8ff4d-rm4gm" [87b284b3-e8e1-452a-8c72-41a8bec62505] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "etcd-ha-136200" [509a726d-e9a1-4922-8e7e-f3d91ddef75f] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "etcd-ha-136200-m02" [8122eb28-1fdf-4ddf-ab30-c29e8bcb83c0] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kindnet-kb2x4" [6e660648-3dce-469f-a2c2-c99f445ceb20] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kindnet-sj2rc" [c0e605a0-1182-4977-a8ba-fabe9617bd3c] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kube-apiserver-ha-136200" [53ea7d41-7132-4c89-9dbd-bedb2267b55f] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kube-apiserver-ha-136200-m02" [fc4036e1-5cc9-4f27-8299-97ee4a29e8b4] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kube-controller-manager-ha-136200" [4c988ab2-e056-4a0e-88c9-b62839c84d9f] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kube-controller-manager-ha-136200-m02" [7a617a7e-7413-4f42-bfe2-763b7ace71ca] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kube-proxy-8f67k" [9dedea03-3066-4852-98e2-10190699b2c5] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kube-proxy-zj5jv" [1802b341-6ac6-46b0-99a3-db02ae5d8e46] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kube-scheduler-ha-136200" [6be37365-544a-4367-9852-6eaa5b60e6ad] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kube-scheduler-ha-136200-m02" [b2ae6bb2-989b-4598-99e3-f8494b006f3e] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kube-vip-ha-136200" [f6f631ac-0ba9-413a-8810-8a80e4be81b8] Running
	I0501 02:54:45.873359    4712 system_pods.go:89] "kube-vip-ha-136200-m02" [598e76fa-0703-40eb-a62c-f3947f06d0e0] Running
	I0501 02:54:45.873359    4712 system_pods.go:89] "storage-provisioner" [ca2bdb41-e7f0-4aa4-b343-0e3a85a4c04e] Running
	I0501 02:54:45.873383    4712 system_pods.go:126] duration metric: took 210.5126ms to wait for k8s-apps to be running ...
	I0501 02:54:45.873383    4712 system_svc.go:44] waiting for kubelet service to be running ...
	I0501 02:54:45.886040    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:54:45.914966    4712 system_svc.go:56] duration metric: took 41.5829ms WaitForService to wait for kubelet
	I0501 02:54:45.915054    4712 kubeadm.go:576] duration metric: took 16.8789526s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 02:54:45.915054    4712 node_conditions.go:102] verifying NodePressure condition ...
	I0501 02:54:46.043164    4712 request.go:629] Waited for 127.8974ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes
	I0501 02:54:46.043164    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes
	I0501 02:54:46.043164    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:46.043164    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:46.043310    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:46.050320    4712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:54:46.051501    4712 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 02:54:46.051501    4712 node_conditions.go:123] node cpu capacity is 2
	I0501 02:54:46.051501    4712 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 02:54:46.051501    4712 node_conditions.go:123] node cpu capacity is 2
	I0501 02:54:46.051501    4712 node_conditions.go:105] duration metric: took 136.4457ms to run NodePressure ...
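
	The NodePressure step boils down to listing nodes and reading status.capacity, which is where the "cpu capacity is 2" and "storage ephemeral capacity is 17734596Ki" figures above come from. A sketch under the same clientset assumption:

	    import (
	        "context"
	        "fmt"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	    )

	    // printNodeCapacity lists nodes and prints the two capacities verified above.
	    func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
	        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	        if err != nil {
	            return err
	        }
	        for _, n := range nodes.Items {
	            cpu := n.Status.Capacity[corev1.ResourceCPU]
	            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
	            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	        }
	        return nil
	    }
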
	I0501 02:54:46.051501    4712 start.go:240] waiting for startup goroutines ...
	I0501 02:54:46.051501    4712 start.go:254] writing updated cluster config ...
	I0501 02:54:46.055981    4712 out.go:177] 
	I0501 02:54:46.073210    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:54:46.073681    4712 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json ...
	I0501 02:54:46.079155    4712 out.go:177] * Starting "ha-136200-m03" control-plane node in "ha-136200" cluster
	I0501 02:54:46.082550    4712 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0501 02:54:46.082550    4712 cache.go:56] Caching tarball of preloaded images
	I0501 02:54:46.083028    4712 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0501 02:54:46.083223    4712 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0501 02:54:46.083384    4712 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json ...
	I0501 02:54:46.091748    4712 start.go:360] acquireMachinesLock for ha-136200-m03: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 02:54:46.091748    4712 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-136200-m03"
	I0501 02:54:46.091748    4712 start.go:93] Provisioning new machine with config: &{Name:ha-136200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP:172.28.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.217.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.213.142 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0501 02:54:46.091748    4712 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0501 02:54:46.099863    4712 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0501 02:54:46.100178    4712 start.go:159] libmachine.API.Create for "ha-136200" (driver="hyperv")
	I0501 02:54:46.100178    4712 client.go:168] LocalClient.Create starting
	I0501 02:54:46.100178    4712 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0501 02:54:46.100824    4712 main.go:141] libmachine: Decoding PEM data...
	I0501 02:54:46.100824    4712 main.go:141] libmachine: Parsing certificate...
	I0501 02:54:46.101128    4712 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0501 02:54:46.101380    4712 main.go:141] libmachine: Decoding PEM data...
	I0501 02:54:46.101380    4712 main.go:141] libmachine: Parsing certificate...
	I0501 02:54:46.101380    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0501 02:54:48.122930    4712 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0501 02:54:48.122930    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:54:48.122930    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0501 02:54:49.970242    4712 main.go:141] libmachine: [stdout =====>] : False
	
	I0501 02:54:49.971128    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:54:49.971128    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0501 02:54:51.553112    4712 main.go:141] libmachine: [stdout =====>] : True
	
	I0501 02:54:51.553112    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:54:51.553966    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0501 02:54:55.355693    4712 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0501 02:54:55.355693    4712 main.go:141] libmachine: [stderr =====>] : 
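
	Each "[executing ==>]" / "[stdout =====>]" pair above is the hyperv driver shelling out to powershell.exe and capturing the two streams separately. A minimal sketch of that pattern (hypothetical helper, not the actual libmachine code):

	    import (
	        "bytes"
	        "os/exec"
	    )

	    // runPowerShell runs one command the way the log lines above show:
	    // powershell.exe -NoProfile -NonInteractive <command>.
	    func runPowerShell(command string) (stdout, stderr string, err error) {
	        cmd := exec.Command(`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
	            "-NoProfile", "-NonInteractive", command)
	        var out, errBuf bytes.Buffer
	        cmd.Stdout = &out
	        cmd.Stderr = &errBuf
	        err = cmd.Run()
	        return out.String(), errBuf.String(), err
	    }

	    // Example: runPowerShell(`( Hyper-V\Get-VM ha-136200-m03 ).state`)
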
	I0501 02:54:55.358013    4712 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso...
	I0501 02:54:55.879042    4712 main.go:141] libmachine: Creating SSH key...
	I0501 02:54:55.991258    4712 main.go:141] libmachine: Creating VM...
	I0501 02:54:55.991258    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0501 02:54:58.933270    4712 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0501 02:54:58.933270    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:54:58.933270    4712 main.go:141] libmachine: Using switch "Default Switch"
	I0501 02:54:58.933728    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0501 02:55:00.789675    4712 main.go:141] libmachine: [stdout =====>] : True
	
	I0501 02:55:00.789938    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:00.789938    4712 main.go:141] libmachine: Creating VHD
	I0501 02:55:00.789938    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0501 02:55:04.583967    4712 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : AAB86B48-3D75-4842-8FF8-3BDEC4AB86C2
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0501 02:55:04.584134    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:04.584192    4712 main.go:141] libmachine: Writing magic tar header
	I0501 02:55:04.584192    4712 main.go:141] libmachine: Writing SSH key tar header
	I0501 02:55:04.594277    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0501 02:55:07.812902    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:07.812902    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:07.812902    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\disk.vhd' -SizeBytes 20000MB
	I0501 02:55:10.391210    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:10.391245    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:10.391352    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-136200-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0501 02:55:14.151278    4712 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-136200-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0501 02:55:14.151278    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:14.151882    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-136200-m03 -DynamicMemoryEnabled $false
	I0501 02:55:16.476957    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:16.476957    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:16.478022    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-136200-m03 -Count 2
	I0501 02:55:18.717259    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:18.717259    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:18.717850    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-136200-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\boot2docker.iso'
	I0501 02:55:21.310252    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:21.310252    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:21.310252    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-136200-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\disk.vhd'
	I0501 02:55:24.025209    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:24.025209    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:24.025533    4712 main.go:141] libmachine: Starting VM...
	I0501 02:55:24.025533    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-136200-m03
	I0501 02:55:27.131510    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:27.131510    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:27.131722    4712 main.go:141] libmachine: Waiting for host to start...
	I0501 02:55:27.131722    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:55:29.452098    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:55:29.453035    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:29.453089    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:55:32.025441    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:32.026234    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:33.036612    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:55:35.273538    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:55:35.273538    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:35.273538    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:55:37.849230    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:37.849355    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:38.854379    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:55:41.083466    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:55:41.083466    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:41.083466    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:55:43.607622    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:43.607622    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:44.621333    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:55:46.858272    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:55:46.858272    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:46.858272    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:55:49.475402    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:49.476316    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:50.480573    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:55:52.723494    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:55:52.723494    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:52.724713    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:55:55.378897    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:55:55.378897    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:55.379189    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:55:57.536029    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:55:57.536029    4712 main.go:141] libmachine: [stderr =====>] : 
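
	The "Waiting for host to start..." stretch above is a retry loop: query the VM state, query the first adapter's first IP address, pause, and repeat until the address appears (172.28.216.62 here, after roughly 30 seconds). A sketch built on the hypothetical runPowerShell helper shown earlier:

	    import (
	        "fmt"
	        "strings"
	        "time"
	    )

	    // waitForIP polls Hyper-V until the VM's first adapter reports an address.
	    func waitForIP(vm string, timeout time.Duration) (string, error) {
	        query := `(( Hyper-V\Get-VM ` + vm + ` ).networkadapters[0]).ipaddresses[0]`
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            out, _, err := runPowerShell(query)
	            if ip := strings.TrimSpace(out); err == nil && ip != "" {
	                return ip, nil
	            }
	            time.Sleep(time.Second) // the log shows ~1s pauses between rounds
	        }
	        return "", fmt.Errorf("timed out waiting for %s to report an IP", vm)
	    }
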
	I0501 02:55:57.536246    4712 machine.go:94] provisionDockerMachine start ...
	I0501 02:55:57.536246    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:55:59.681292    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:55:59.681842    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:59.682021    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:02.296390    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:02.296390    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:02.302435    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:56:02.303031    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.216.62 22 <nil> <nil>}
	I0501 02:56:02.303031    4712 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 02:56:02.440858    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0501 02:56:02.440919    4712 buildroot.go:166] provisioning hostname "ha-136200-m03"
	I0501 02:56:02.440919    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:04.540210    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:04.540210    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:04.541126    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:07.111624    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:07.111624    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:07.118513    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:56:07.119097    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.216.62 22 <nil> <nil>}
	I0501 02:56:07.119097    4712 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-136200-m03 && echo "ha-136200-m03" | sudo tee /etc/hostname
	I0501 02:56:07.274395    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-136200-m03
	
	I0501 02:56:07.274395    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:09.427222    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:09.427413    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:09.427413    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:12.066151    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:12.066558    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:12.072701    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:56:12.073263    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.216.62 22 <nil> <nil>}
	I0501 02:56:12.073263    4712 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-136200-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-136200-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-136200-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 02:56:12.226572    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: 
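
	Each "About to run SSH command" block above uses a native Go SSH client authenticated with the id_rsa key generated earlier. A self-contained sketch with golang.org/x/crypto/ssh; the ignored host key is a test-only shortcut, not something to copy into production code:

	    import (
	        "os"

	        "golang.org/x/crypto/ssh"
	    )

	    // sshRun executes one remote command and returns its combined output.
	    func sshRun(addr, user, keyPath, command string) (string, error) {
	        key, err := os.ReadFile(keyPath)
	        if err != nil {
	            return "", err
	        }
	        signer, err := ssh.ParsePrivateKey(key)
	        if err != nil {
	            return "", err
	        }
	        cfg := &ssh.ClientConfig{
	            User:            user,
	            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
	            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only shortcut
	        }
	        client, err := ssh.Dial("tcp", addr, cfg) // addr like "172.28.216.62:22"
	        if err != nil {
	            return "", err
	        }
	        defer client.Close()
	        sess, err := client.NewSession()
	        if err != nil {
	            return "", err
	        }
	        defer sess.Close()
	        out, err := sess.CombinedOutput(command)
	        return string(out), err
	    }
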
	I0501 02:56:12.226572    4712 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0501 02:56:12.226572    4712 buildroot.go:174] setting up certificates
	I0501 02:56:12.226572    4712 provision.go:84] configureAuth start
	I0501 02:56:12.226572    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:14.383697    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:14.383832    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:14.383916    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:17.017056    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:17.017236    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:17.017236    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:19.246383    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:19.247269    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:19.247269    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:21.887343    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:21.887343    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:21.887343    4712 provision.go:143] copyHostCerts
	I0501 02:56:21.887688    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0501 02:56:21.887688    4712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0501 02:56:21.887688    4712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0501 02:56:21.888470    4712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0501 02:56:21.889606    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0501 02:56:21.890069    4712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0501 02:56:21.890132    4712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0501 02:56:21.890553    4712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0501 02:56:21.891611    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0501 02:56:21.891800    4712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0501 02:56:21.891800    4712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0501 02:56:21.892337    4712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0501 02:56:21.893162    4712 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-136200-m03 san=[127.0.0.1 172.28.216.62 ha-136200-m03 localhost minikube]
	I0501 02:56:21.973101    4712 provision.go:177] copyRemoteCerts
	I0501 02:56:21.993116    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 02:56:21.993116    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:24.169668    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:24.169668    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:24.170031    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:26.830749    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:26.831099    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:26.831162    4712 sshutil.go:53] new ssh client: &{IP:172.28.216.62 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\id_rsa Username:docker}
	I0501 02:56:26.935784    4712 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9426327s)
	I0501 02:56:26.935784    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0501 02:56:26.936266    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 02:56:26.985792    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0501 02:56:26.986191    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0501 02:56:27.035460    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0501 02:56:27.036450    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0501 02:56:27.092775    4712 provision.go:87] duration metric: took 14.8660953s to configureAuth
	I0501 02:56:27.092775    4712 buildroot.go:189] setting minikube options for container-runtime
	I0501 02:56:27.093873    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:56:27.094011    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:29.214442    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:29.214910    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:29.214910    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:31.848020    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:31.848124    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:31.859047    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:56:31.859047    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.216.62 22 <nil> <nil>}
	I0501 02:56:31.859047    4712 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0501 02:56:31.983811    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0501 02:56:31.983936    4712 buildroot.go:70] root file system type: tmpfs
	I0501 02:56:31.984160    4712 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0501 02:56:31.984160    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:34.146679    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:34.146679    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:34.146837    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:36.793925    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:36.794747    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:36.801153    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:56:36.801782    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.216.62 22 <nil> <nil>}
	I0501 02:56:36.801782    4712 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.28.217.218"
	Environment="NO_PROXY=172.28.217.218,172.28.213.142"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0501 02:56:36.960579    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.28.217.218
	Environment=NO_PROXY=172.28.217.218,172.28.213.142
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
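
The unit above is rendered host-side from a template, with one Environment= line emitted per accumulated NO_PROXY value, which is why a one-IP line is followed by a two-IP line. A cut-down text/template sketch of that rendering (the template text here is illustrative, not minikube's actual template):

    package main

    import (
    	"os"
    	"text/template"
    )

    // Render a docker.service unit with one Environment= line per
    // accumulated NO_PROXY value, mirroring the duplicated lines above.
    var unit = template.Must(template.New("docker.service").Parse(`[Unit]
    Description=Docker Application Container Engine
    After=network.target minikube-automount.service docker.socket

    [Service]
    Type=notify
    Restart=on-failure
    {{range .NoProxy}}Environment="NO_PROXY={{.}}"
    {{end}}ExecStart=
    ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock

    [Install]
    WantedBy=multi-user.target
    `))

    func main() {
    	// Each control-plane IP found so far is folded into the list, which
    	// is why the log shows a one-IP line followed by a two-IP line.
    	data := struct{ NoProxy []string }{
    		NoProxy: []string{"172.28.217.218", "172.28.217.218,172.28.213.142"},
    	}
    	if err := unit.Execute(os.Stdout, data); err != nil {
    		panic(err)
    	}
    }
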
	
	I0501 02:56:36.960579    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:39.141157    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:39.141157    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:39.141800    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:41.765565    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:41.766216    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:41.774239    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:56:41.774411    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.216.62 22 <nil> <nil>}
	I0501 02:56:41.774411    4712 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0501 02:56:43.994230    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
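
The one-liner above is a compare-then-swap: the freshly rendered unit replaces the installed one, and docker is restarted, only when the two differ; on first boot the diff fails because no unit exists yet, which is the "can't stat" message. The same guard expressed in Go, as a sketch (must run as root; the helper name is illustrative):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    	"os/exec"
    )

    // updateIfChanged mirrors the shell one-liner above: install the new
    // unit and restart docker only when it differs from what is already on
    // disk, so unchanged hosts skip a needless daemon restart.
    func updateIfChanged(current, next string) error {
    	newer, err := os.ReadFile(next)
    	if err != nil {
    		return err
    	}
    	old, err := os.ReadFile(current)
    	// A read error here covers the first boot, where docker.service
    	// does not exist yet ("diff: can't stat ..." in the log above).
    	if err == nil && bytes.Equal(old, newer) {
    		return nil
    	}
    	if err := os.Rename(next, current); err != nil {
    		return err
    	}
    	for _, args := range [][]string{
    		{"systemctl", "daemon-reload"},
    		{"systemctl", "enable", "docker"},
    		{"systemctl", "restart", "docker"},
    	} {
    		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
    			return fmt.Errorf("%v: %s", err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	err := updateIfChanged("/lib/systemd/system/docker.service",
    		"/lib/systemd/system/docker.service.new")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
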
	
	I0501 02:56:43.994313    4712 machine.go:97] duration metric: took 46.4577313s to provisionDockerMachine
	I0501 02:56:43.994313    4712 client.go:171] duration metric: took 1m57.8932783s to LocalClient.Create
	I0501 02:56:43.994313    4712 start.go:167] duration metric: took 1m57.8932783s to libmachine.API.Create "ha-136200"
	I0501 02:56:43.994428    4712 start.go:293] postStartSetup for "ha-136200-m03" (driver="hyperv")
	I0501 02:56:43.994473    4712 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 02:56:44.010383    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 02:56:44.010383    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:46.225048    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:46.225772    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:46.225844    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:48.918999    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:48.918999    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:48.919679    4712 sshutil.go:53] new ssh client: &{IP:172.28.216.62 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\id_rsa Username:docker}
	I0501 02:56:49.032380    4712 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0219067s)
	I0501 02:56:49.045700    4712 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 02:56:49.054180    4712 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 02:56:49.054180    4712 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0501 02:56:49.054700    4712 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0501 02:56:49.055002    4712 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> 142882.pem in /etc/ssl/certs
	I0501 02:56:49.055574    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /etc/ssl/certs/142882.pem
	I0501 02:56:49.071048    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 02:56:49.092423    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /etc/ssl/certs/142882.pem (1708 bytes)
	I0501 02:56:49.143151    4712 start.go:296] duration metric: took 5.1486851s for postStartSetup
	I0501 02:56:49.146034    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:51.349851    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:51.350067    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:51.350153    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:54.016657    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:54.017149    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:54.017380    4712 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json ...
	I0501 02:56:54.019460    4712 start.go:128] duration metric: took 2m7.9267809s to createHost
	I0501 02:56:54.019460    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:56.211168    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:56.211168    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:56.211168    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:58.811673    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:58.811673    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:58.818618    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:56:58.819348    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.216.62 22 <nil> <nil>}
	I0501 02:56:58.819348    4712 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0501 02:56:58.949732    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714532218.937413126
	
	I0501 02:56:58.949732    4712 fix.go:216] guest clock: 1714532218.937413126
	I0501 02:56:58.949732    4712 fix.go:229] Guest: 2024-05-01 02:56:58.937413126 +0000 UTC Remote: 2024-05-01 02:56:54.0194605 +0000 UTC m=+574.897601601 (delta=4.917952626s)
	I0501 02:56:58.949941    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:57:01.095786    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:57:01.095786    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:01.096436    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:57:03.649884    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:57:03.649884    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:03.657161    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:57:03.657803    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.216.62 22 <nil> <nil>}
	I0501 02:57:03.657803    4712 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714532218
	I0501 02:57:03.807080    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed May  1 02:56:58 UTC 2024
	
	I0501 02:57:03.807174    4712 fix.go:236] clock set: Wed May  1 02:56:58 UTC 2024
	 (err=<nil>)
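
The clock fix reads the guest's date +%s.%N, compares it against the host clock, and resets the guest with date -s @<epoch> when the drift is too large. A sketch of the delta computation, using the values from the log:

    package main

    import (
    	"fmt"
    	"time"
    )

    // clockDelta mirrors the comparison in fix.go above: the guest stamp is
    // parsed from `date +%s.%N` output and compared with the host's idea of
    // "now"; a positive delta means the guest clock is ahead.
    func clockDelta(guestEpoch float64, hostNow time.Time) time.Duration {
    	sec := int64(guestEpoch)
    	nsec := int64((guestEpoch - float64(sec)) * 1e9)
    	return time.Unix(sec, nsec).Sub(hostNow)
    }

    func main() {
    	// Values lifted from the log: guest 1714532218.937413126,
    	// host 2024-05-01 02:56:54.0194605 UTC.
    	host := time.Date(2024, 5, 1, 2, 56, 54, 19460500, time.UTC)
    	fmt.Println(clockDelta(1714532218.937413126, host)) // ≈ 4.917952626s
    }
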
	I0501 02:57:03.807174    4712 start.go:83] releasing machines lock for "ha-136200-m03", held for 2m17.7144231s
	I0501 02:57:03.807438    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:57:05.979339    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:57:05.979339    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:05.979339    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:57:08.602379    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:57:08.602379    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:08.605250    4712 out.go:177] * Found network options:
	I0501 02:57:08.607292    4712 out.go:177]   - NO_PROXY=172.28.217.218,172.28.213.142
	W0501 02:57:08.610080    4712 proxy.go:119] fail to check proxy env: Error ip not in block
	W0501 02:57:08.610080    4712 proxy.go:119] fail to check proxy env: Error ip not in block
	I0501 02:57:08.612307    4712 out.go:177]   - NO_PROXY=172.28.217.218,172.28.213.142
	W0501 02:57:08.614962    4712 proxy.go:119] fail to check proxy env: Error ip not in block
	W0501 02:57:08.614962    4712 proxy.go:119] fail to check proxy env: Error ip not in block
	W0501 02:57:08.616207    4712 proxy.go:119] fail to check proxy env: Error ip not in block
	W0501 02:57:08.616207    4712 proxy.go:119] fail to check proxy env: Error ip not in block
	I0501 02:57:08.619160    4712 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 02:57:08.619160    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:57:08.631565    4712 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0501 02:57:08.631565    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:57:10.838360    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:57:10.838735    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:10.838874    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:57:10.838874    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:57:10.838934    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:10.838934    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:57:13.624235    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:57:13.624235    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:13.624235    4712 sshutil.go:53] new ssh client: &{IP:172.28.216.62 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\id_rsa Username:docker}
	I0501 02:57:13.648439    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:57:13.648490    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:13.648768    4712 sshutil.go:53] new ssh client: &{IP:172.28.216.62 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\id_rsa Username:docker}
	I0501 02:57:13.732596    4712 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.1009937s)
	W0501 02:57:13.732596    4712 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 02:57:13.748662    4712 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 02:57:13.811529    4712 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0501 02:57:13.811529    4712 start.go:494] detecting cgroup driver to use...
	I0501 02:57:13.811529    4712 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1923313s)
	I0501 02:57:13.812665    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 02:57:13.867675    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0501 02:57:13.906069    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0501 02:57:13.929632    4712 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0501 02:57:13.947027    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0501 02:57:13.986248    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 02:57:14.024920    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0501 02:57:14.061978    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 02:57:14.099821    4712 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 02:57:14.138543    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0501 02:57:14.181270    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0501 02:57:14.217808    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0501 02:57:14.261794    4712 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 02:57:14.297051    4712 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 02:57:14.332222    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:57:14.558529    4712 ssh_runner.go:195] Run: sudo systemctl restart containerd
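
The block above rewrites /etc/containerd/config.toml line by line with sed: pin the pause image, force SystemdCgroup = false (i.e. the cgroupfs driver), migrate runtime names to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d. The same kind of edit expressed in Go on a trimmed example config (the input is illustrative, not the full Buildroot config):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // Apply the SystemdCgroup flip from the sed commands above in Go.
    func main() {
    	config := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true
    `
    	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
    	fmt.Print(re.ReplaceAllString(config, "${1}SystemdCgroup = false"))
    }
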
	I0501 02:57:14.595594    4712 start.go:494] detecting cgroup driver to use...
	I0501 02:57:14.610122    4712 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0501 02:57:14.650440    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 02:57:14.689246    4712 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 02:57:14.740013    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 02:57:14.780524    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 02:57:14.822987    4712 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0501 02:57:14.889904    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 02:57:14.919061    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 02:57:14.983590    4712 ssh_runner.go:195] Run: which cri-dockerd
	I0501 02:57:15.008856    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0501 02:57:15.032703    4712 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0501 02:57:15.086991    4712 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0501 02:57:15.324922    4712 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0501 02:57:15.542551    4712 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0501 02:57:15.542551    4712 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0501 02:57:15.594658    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:57:15.826063    4712 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0501 02:57:18.399291    4712 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5732092s)
	I0501 02:57:18.412657    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0501 02:57:18.452282    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0501 02:57:18.491033    4712 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0501 02:57:18.702768    4712 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0501 02:57:18.928695    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:57:19.145438    4712 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0501 02:57:19.199070    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0501 02:57:19.242280    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:57:19.475811    4712 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0501 02:57:19.598548    4712 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0501 02:57:19.612590    4712 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
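
"Will wait 60s for socket path" is a simple poll: stat the CRI socket until it appears or the deadline passes. A sketch of that wait:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket is the shape of the 60s wait in the log: stat the CRI
    // socket path until it exists or the deadline passes.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
    	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
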
	I0501 02:57:19.624279    4712 start.go:562] Will wait 60s for crictl version
	I0501 02:57:19.637235    4712 ssh_runner.go:195] Run: which crictl
	I0501 02:57:19.657683    4712 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 02:57:19.721351    4712 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0501 02:57:19.734095    4712 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0501 02:57:19.784976    4712 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0501 02:57:19.822576    4712 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0501 02:57:19.826041    4712 out.go:177]   - env NO_PROXY=172.28.217.218
	I0501 02:57:19.827741    4712 out.go:177]   - env NO_PROXY=172.28.217.218,172.28.213.142
	I0501 02:57:19.831635    4712 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0501 02:57:19.835639    4712 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0501 02:57:19.835639    4712 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0501 02:57:19.835639    4712 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0501 02:57:19.835639    4712 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:88:d7:f1 Flags:up|broadcast|multicast|running}
	I0501 02:57:19.838638    4712 ip.go:210] interface addr: fe80::916c:67e8:6e10:a19b/64
	I0501 02:57:19.838638    4712 ip.go:210] interface addr: 172.28.208.1/20
	I0501 02:57:19.851676    4712 ssh_runner.go:195] Run: grep 172.28.208.1	host.minikube.internal$ /etc/hosts
	I0501 02:57:19.858242    4712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
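
The /etc/hosts one-liner drops any stale host.minikube.internal line, appends the current mapping, and copies a temp file over /etc/hosts so the file is never left truncated; the same pattern recurs later for control-plane.minikube.internal. An equivalent Go sketch (must run as root; the helper name is illustrative):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // ensureHostsEntry reproduces the grep -v / echo / cp pattern: drop any
    // stale line ending in the name, append the fresh mapping, and swap the
    // result in via a temp file so /etc/hosts is never left truncated.
    func ensureHostsEntry(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+name) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+name)
    	tmp := path + ".tmp"
    	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
    		return err
    	}
    	return os.Rename(tmp, path)
    }

    func main() {
    	if err := ensureHostsEntry("/etc/hosts", "172.28.208.1", "host.minikube.internal"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
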
	I0501 02:57:19.883254    4712 mustload.go:65] Loading cluster: ha-136200
	I0501 02:57:19.883656    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:57:19.884140    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:57:22.018331    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:57:22.018592    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:22.018658    4712 host.go:66] Checking if "ha-136200" exists ...
	I0501 02:57:22.019393    4712 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200 for IP: 172.28.216.62
	I0501 02:57:22.019393    4712 certs.go:194] generating shared ca certs ...
	I0501 02:57:22.019393    4712 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:57:22.020318    4712 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0501 02:57:22.020786    4712 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0501 02:57:22.021028    4712 certs.go:256] generating profile certs ...
	I0501 02:57:22.021028    4712 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\client.key
	I0501 02:57:22.021606    4712 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.cbcfb2e9
	I0501 02:57:22.021767    4712 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.cbcfb2e9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.217.218 172.28.213.142 172.28.216.62 172.28.223.254]
	I0501 02:57:22.149544    4712 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.cbcfb2e9 ...
	I0501 02:57:22.149544    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.cbcfb2e9: {Name:mk4837fbdb29e34df2c44991c600cda784a93d5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:57:22.150373    4712 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.cbcfb2e9 ...
	I0501 02:57:22.150373    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.cbcfb2e9: {Name:mkcff5432d26e17c25cf2a9709eb4553a31e99c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:57:22.152472    4712 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.cbcfb2e9 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt
	I0501 02:57:22.165924    4712 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.cbcfb2e9 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key
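
The apiserver serving cert is regenerated here because its SAN set changed: the new node IP 172.28.216.62 now has to appear alongside the service IP, localhost, the other node IPs, and the VIP 172.28.223.254. A minimal crypto/x509 sketch issuing a cert with IP SANs (self-signed for brevity; minikube signs with its CA key instead):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// SAN list taken from the log line above; a new control-plane node
    	// widens this set, which is what forces the regeneration.
    	sans := []net.IP{
    		net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    		net.ParseIP("10.0.0.1"), net.ParseIP("172.28.217.218"),
    		net.ParseIP("172.28.213.142"), net.ParseIP("172.28.216.62"),
    		net.ParseIP("172.28.223.254"),
    	}
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  sans,
    	}
    	// Self-signed for brevity; minikube signs with its CA key instead.
    	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
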
	I0501 02:57:22.166444    4712 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key
	I0501 02:57:22.166444    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0501 02:57:22.167623    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0501 02:57:22.167772    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0501 02:57:22.167772    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0501 02:57:22.168122    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0501 02:57:22.168280    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0501 02:57:22.168464    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0501 02:57:22.168464    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0501 02:57:22.169490    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem (1338 bytes)
	W0501 02:57:22.169490    4712 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288_empty.pem, impossibly tiny 0 bytes
	I0501 02:57:22.170595    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0501 02:57:22.170869    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0501 02:57:22.171164    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0501 02:57:22.171434    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0501 02:57:22.171670    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem (1708 bytes)
	I0501 02:57:22.172286    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /usr/share/ca-certificates/142882.pem
	I0501 02:57:22.172286    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:57:22.172286    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem -> /usr/share/ca-certificates/14288.pem
	I0501 02:57:22.172911    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:57:24.374168    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:57:24.374168    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:24.374904    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:57:26.980450    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:57:26.980450    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:26.980450    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:57:27.093857    4712 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0501 02:57:27.102183    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0501 02:57:27.141690    4712 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0501 02:57:27.150194    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0501 02:57:27.193806    4712 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0501 02:57:27.202957    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0501 02:57:27.254044    4712 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0501 02:57:27.262605    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0501 02:57:27.303214    4712 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0501 02:57:27.310453    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0501 02:57:27.348966    4712 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0501 02:57:27.356382    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0501 02:57:27.383468    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 02:57:27.437872    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 02:57:27.494095    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 02:57:27.544977    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0501 02:57:27.599083    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0501 02:57:27.652123    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0501 02:57:27.710792    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 02:57:27.766379    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 02:57:27.817284    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /usr/share/ca-certificates/142882.pem (1708 bytes)
	I0501 02:57:27.867949    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 02:57:27.930560    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem --> /usr/share/ca-certificates/14288.pem (1338 bytes)
	I0501 02:57:27.987875    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0501 02:57:28.025174    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0501 02:57:28.061492    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0501 02:57:28.099323    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0501 02:57:28.133169    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0501 02:57:28.168585    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0501 02:57:28.223450    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0501 02:57:28.292690    4712 ssh_runner.go:195] Run: openssl version
	I0501 02:57:28.315882    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142882.pem && ln -fs /usr/share/ca-certificates/142882.pem /etc/ssl/certs/142882.pem"
	I0501 02:57:28.353000    4712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142882.pem
	I0501 02:57:28.365096    4712 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:27 /usr/share/ca-certificates/142882.pem
	I0501 02:57:28.379858    4712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142882.pem
	I0501 02:57:28.406814    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142882.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 02:57:28.445706    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 02:57:28.482484    4712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:57:28.491120    4712 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:12 /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:57:28.507367    4712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:57:28.535421    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 02:57:28.574647    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14288.pem && ln -fs /usr/share/ca-certificates/14288.pem /etc/ssl/certs/14288.pem"
	I0501 02:57:28.616757    4712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14288.pem
	I0501 02:57:28.624484    4712 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:27 /usr/share/ca-certificates/14288.pem
	I0501 02:57:28.642485    4712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14288.pem
	I0501 02:57:28.665148    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14288.pem /etc/ssl/certs/51391683.0"
	I0501 02:57:28.706630    4712 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 02:57:28.714508    4712 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0501 02:57:28.714998    4712 kubeadm.go:928] updating node {m03 172.28.216.62 8443 v1.30.0 docker true true} ...
	I0501 02:57:28.715189    4712 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-136200-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.216.62
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP:172.28.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 02:57:28.715218    4712 kube-vip.go:111] generating kube-vip config ...
	I0501 02:57:28.727524    4712 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0501 02:57:28.767475    4712 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0501 02:57:28.767631    4712 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.28.223.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0501 02:57:28.783398    4712 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 02:57:28.801741    4712 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0501 02:57:28.815792    4712 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0501 02:57:28.837983    4712 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0501 02:57:28.838101    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0501 02:57:28.837983    4712 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256
	I0501 02:57:28.838226    4712 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256
	I0501 02:57:28.838396    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
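
Each Kubernetes binary is fetched from dl.k8s.io with a companion .sha256 file, per the checksum=file:... URLs above. A sketch of the verification step (paths and the helper name are illustrative):

    package main

    import (
    	"crypto/sha256"
    	"encoding/hex"
    	"fmt"
    	"os"
    	"strings"
    )

    // verifySHA256 mirrors the checksum=file:...sha256 scheme in the URLs
    // above: hash the downloaded binary and compare it with the published
    // digest (the .sha256 file holds a hex digest, optionally followed by
    // the file name).
    func verifySHA256(binPath, sumPath string) error {
    	data, err := os.ReadFile(binPath)
    	if err != nil {
    		return err
    	}
    	want, err := os.ReadFile(sumPath)
    	if err != nil {
    		return err
    	}
    	fields := strings.Fields(string(want))
    	if len(fields) == 0 {
    		return fmt.Errorf("empty checksum file %s", sumPath)
    	}
    	sum := sha256.Sum256(data)
    	if got := hex.EncodeToString(sum[:]); got != fields[0] {
    		return fmt.Errorf("checksum mismatch: got %s, want %s", got, fields[0])
    	}
    	return nil
    }

    func main() {
    	// Paths are illustrative.
    	if err := verifySHA256("kubeadm", "kubeadm.sha256"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("checksum OK")
    }
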
	I0501 02:57:28.855124    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:57:28.856182    4712 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0501 02:57:28.858128    4712 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0501 02:57:28.881905    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0501 02:57:28.881905    4712 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0501 02:57:28.882027    4712 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0501 02:57:28.882165    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0501 02:57:28.882277    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0501 02:57:28.898781    4712 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0501 02:57:28.959439    4712 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0501 02:57:28.959688    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
	I0501 02:57:30.251192    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0501 02:57:30.272192    4712 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0501 02:57:30.311119    4712 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 02:57:30.353248    4712 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0501 02:57:30.407414    4712 ssh_runner.go:195] Run: grep 172.28.223.254	control-plane.minikube.internal$ /etc/hosts
	I0501 02:57:30.415360    4712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.223.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 02:57:30.454450    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:57:30.696464    4712 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 02:57:30.737201    4712 host.go:66] Checking if "ha-136200" exists ...
	I0501 02:57:30.801844    4712 start.go:316] joinCluster: &{Name:ha-136200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP:172.28.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.217.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.213.142 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.28.216.62 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:57:30.802126    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0501 02:57:30.802234    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:57:32.961923    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:57:32.961923    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:32.962279    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:57:35.600191    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:57:35.600191    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:35.601356    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:57:35.838006    4712 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0": (5.0358438s)
	I0501 02:57:35.838006    4712 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.28.216.62 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0501 02:57:35.838006    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 3455nt.3c342oggoxvi06jc --discovery-token-ca-cert-hash sha256:4798dcaffc6298c78af5ad06745de18c1231853c65c9db4cd09ab1be96f5e875 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-136200-m03 --control-plane --apiserver-advertise-address=172.28.216.62 --apiserver-bind-port=8443"
	I0501 02:58:21.819619    4712 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 3455nt.3c342oggoxvi06jc --discovery-token-ca-cert-hash sha256:4798dcaffc6298c78af5ad06745de18c1231853c65c9db4cd09ab1be96f5e875 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-136200-m03 --control-plane --apiserver-advertise-address=172.28.216.62 --apiserver-bind-port=8443": (45.981274s)
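
The join command above is the output of kubeadm token create --print-join-command with node-specific flags appended; assembling it is string work, as in this sketch (the token and hash are placeholders, and the helper name is illustrative):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // joinCommand appends the node-specific flags seen in the log to the
    // raw `kubeadm token create --print-join-command` output.
    func joinCommand(base, nodeName, advertiseIP string, controlPlane bool) string {
    	parts := []string{
    		strings.TrimSpace(base),
    		"--ignore-preflight-errors=all",
    		"--cri-socket unix:///var/run/cri-dockerd.sock",
    		"--node-name=" + nodeName,
    	}
    	if controlPlane {
    		parts = append(parts,
    			"--control-plane",
    			"--apiserver-advertise-address="+advertiseIP,
    			"--apiserver-bind-port=8443")
    	}
    	return strings.Join(parts, " ")
    }

    func main() {
    	base := "kubeadm join control-plane.minikube.internal:8443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>"
    	fmt.Println(joinCommand(base, "ha-136200-m03", "172.28.216.62", true))
    }
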
	I0501 02:58:21.819619    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0501 02:58:22.593318    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-136200-m03 minikube.k8s.io/updated_at=2024_05_01T02_58_22_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e minikube.k8s.io/name=ha-136200 minikube.k8s.io/primary=false
	I0501 02:58:22.788566    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-136200-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0501 02:58:22.987611    4712 start.go:318] duration metric: took 52.1853822s to joinCluster
	I0501 02:58:22.987895    4712 start.go:234] Will wait 6m0s for node &{Name:m03 IP:172.28.216.62 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0501 02:58:22.988142    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:58:23.012496    4712 out.go:177] * Verifying Kubernetes components...
	I0501 02:58:23.031751    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:58:23.569395    4712 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 02:58:23.619961    4712 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0501 02:58:23.620228    4712 kapi.go:59] client config for ha-136200: &rest.Config{Host:"https://172.28.223.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-136200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-136200\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1b95ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0501 02:58:23.620770    4712 kubeadm.go:477] Overriding stale ClientConfig host https://172.28.223.254:8443 with https://172.28.217.218:8443
	I0501 02:58:23.621670    4712 node_ready.go:35] waiting up to 6m0s for node "ha-136200-m03" to be "Ready" ...
	I0501 02:58:23.621910    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:23.621910    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:23.621982    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:23.621982    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:23.637444    4712 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0501 02:58:24.133658    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:24.133658    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:24.133658    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:24.133658    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:24.139465    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:24.622867    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:24.622867    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:24.622867    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:24.622867    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:24.629524    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:58:25.129429    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:25.129429    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:25.129510    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:25.129510    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:25.135754    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:58:25.633954    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:25.633954    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:25.633954    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:25.633954    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:25.638650    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:58:25.639656    4712 node_ready.go:53] node "ha-136200-m03" has status "Ready":"False"
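
node_ready polls GET /api/v1/nodes/<name> roughly every 500ms for up to 6m, reporting Ready:False until the kubelet posts a NodeReady condition of True. A sketch of the condition check on a decoded response (the sample body is illustrative):

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // nodeReady pulls the NodeReady condition out of a /api/v1/nodes/<name>
    // response body, which is what each poll above is checking.
    func nodeReady(body []byte) (bool, error) {
    	var node struct {
    		Status struct {
    			Conditions []struct {
    				Type   string `json:"type"`
    				Status string `json:"status"`
    			} `json:"conditions"`
    		} `json:"status"`
    	}
    	if err := json.Unmarshal(body, &node); err != nil {
    		return false, err
    	}
    	for _, c := range node.Status.Conditions {
    		if c.Type == "Ready" {
    			return c.Status == "True", nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	sample := []byte(`{"status":{"conditions":[{"type":"Ready","status":"False"}]}}`)
    	ready, err := nodeReady(sample)
    	fmt.Println(ready, err) // false <nil> until the kubelet and CNI come up
    }
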
	I0501 02:58:26.123894    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:26.123894    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:26.123894    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:26.123894    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:26.129103    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:26.629161    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:26.629161    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:26.629161    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:26.629161    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:26.648167    4712 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0501 02:58:27.136028    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:27.136028    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:27.136028    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:27.136028    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:27.326021    4712 round_trippers.go:574] Response Status: 200 OK in 189 milliseconds
	I0501 02:58:27.623480    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:27.623600    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:27.623600    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:27.623600    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:27.629035    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:28.136433    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:28.136433    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:28.136626    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:28.136626    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:28.203923    4712 round_trippers.go:574] Response Status: 200 OK in 67 milliseconds
	I0501 02:58:28.205553    4712 node_ready.go:53] node "ha-136200-m03" has status "Ready":"False"
	I0501 02:58:28.636021    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:28.636185    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:28.636185    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:28.636185    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:28.646735    4712 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0501 02:58:29.122451    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:29.122515    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:29.122515    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:29.122515    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:29.140826    4712 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0501 02:58:29.629756    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:29.629756    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:29.629756    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:29.629756    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:29.637588    4712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:58:30.132174    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:30.132269    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:30.132269    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:30.132269    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:30.136921    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:58:30.632939    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:30.633022    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:30.633022    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:30.633022    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:30.638815    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:30.640044    4712 node_ready.go:53] node "ha-136200-m03" has status "Ready":"False"
	I0501 02:58:31.133378    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:31.133378    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:31.133378    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:31.133378    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:31.138754    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:31.633444    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:31.633511    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:31.633511    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:31.633511    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:31.639686    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:58:32.131317    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:32.131317    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:32.131317    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:32.131317    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:32.136414    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:32.629649    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:32.629649    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:32.629649    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:32.629649    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:32.634980    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:33.129878    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:33.129878    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:33.129878    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:33.129878    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:33.155125    4712 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I0501 02:58:33.156557    4712 node_ready.go:53] node "ha-136200-m03" has status "Ready":"False"
	I0501 02:58:33.629865    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:33.630060    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:33.630060    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:33.630060    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:33.636368    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:58:34.128412    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:34.128412    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:34.128412    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:34.128412    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:34.133022    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:58:34.629333    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:34.629333    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:34.629333    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:34.629333    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:34.635358    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:58:35.129272    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:35.129376    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.129376    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.129376    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.136662    4712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:58:35.137446    4712 node_ready.go:49] node "ha-136200-m03" has status "Ready":"True"
	I0501 02:58:35.137492    4712 node_ready.go:38] duration metric: took 11.5157372s for node "ha-136200-m03" to be "Ready" ...
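
Editor's note: the repeated GETs of /api/v1/nodes/ha-136200-m03 above are node_ready's poll loop, firing roughly every 500ms until the node's Ready condition flips to True. A sketch of the equivalent check with client-go's wait helpers, assuming a pre-built clientset; minikube's node_ready.go uses its own retry loop rather than the wait package.

    package probes

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls the node every 500ms (matching the cadence visible
    // in the log) until its Ready condition is True or the timeout expires.
    func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // treat errors as transient and keep polling
    			}
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }
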
	I0501 02:58:35.137492    4712 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 02:58:35.137635    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:58:35.137635    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.137635    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.137635    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.149133    4712 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0501 02:58:35.158917    4712 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2j8mj" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.159445    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2j8mj
	I0501 02:58:35.159565    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.159565    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.159651    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.170650    4712 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0501 02:58:35.172026    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:35.172026    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.172026    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.172026    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.180770    4712 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0501 02:58:35.180770    4712 pod_ready.go:92] pod "coredns-7db6d8ff4d-2j8mj" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:35.180770    4712 pod_ready.go:81] duration metric: took 21.3241ms for pod "coredns-7db6d8ff4d-2j8mj" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.180770    4712 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rm4gm" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.180770    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rm4gm
	I0501 02:58:35.180770    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.180770    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.180770    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.185805    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:35.187056    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:35.187056    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.187056    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.187056    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.191361    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:58:35.192405    4712 pod_ready.go:92] pod "coredns-7db6d8ff4d-rm4gm" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:35.192405    4712 pod_ready.go:81] duration metric: took 11.6358ms for pod "coredns-7db6d8ff4d-rm4gm" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.192405    4712 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.192405    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200
	I0501 02:58:35.192405    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.192405    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.192405    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.196117    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:58:35.197312    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:35.197312    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.197389    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.197389    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.201195    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:58:35.201924    4712 pod_ready.go:92] pod "etcd-ha-136200" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:35.201924    4712 pod_ready.go:81] duration metric: took 9.5188ms for pod "etcd-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.201924    4712 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.202054    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:58:35.202195    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.202195    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.202195    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.208450    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:58:35.209323    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:58:35.209323    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.209323    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.209323    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.212935    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:58:35.214190    4712 pod_ready.go:92] pod "etcd-ha-136200-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:35.214190    4712 pod_ready.go:81] duration metric: took 12.2652ms for pod "etcd-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.214190    4712 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-136200-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.330301    4712 request.go:629] Waited for 115.8713ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m03
	I0501 02:58:35.330574    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m03
	I0501 02:58:35.330574    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.330574    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.330574    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.338021    4712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:58:35.534070    4712 request.go:629] Waited for 194.5208ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:35.534353    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:35.534353    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.534353    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.534353    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.540932    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:58:35.541927    4712 pod_ready.go:92] pod "etcd-ha-136200-m03" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:35.541927    4712 pod_ready.go:81] duration metric: took 327.673ms for pod "etcd-ha-136200-m03" in "kube-system" namespace to be "Ready" ...
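
Editor's note: the request.go:629 "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's own token-bucket limiter, not the apiserver. The rest.Config dump at the top of this excerpt shows QPS:0 and Burst:0, which means client-go's defaults (5 QPS, burst 10) apply, and the back-to-back pod and node GETs exceed them. A sketch of raising the limits; the values are illustrative only.

    package probes

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    // withHigherRateLimits returns a clientset whose client-side limiter
    // allows more requests per second. Raising QPS/Burst silences the
    // "Waited for ... due to client-side throttling" messages at the cost
    // of hitting the apiserver harder.
    func withHigherRateLimits(cfg *rest.Config) (*kubernetes.Clientset, error) {
    	cfg.QPS = 50
    	cfg.Burst = 100
    	return kubernetes.NewForConfig(cfg)
    }
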
	I0501 02:58:35.541927    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.737879    4712 request.go:629] Waited for 195.951ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-136200
	I0501 02:58:35.738683    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-136200
	I0501 02:58:35.738683    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.738683    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.738683    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.743861    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:58:35.940254    4712 request.go:629] Waited for 195.0246ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:35.940254    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:35.940254    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.940254    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.940254    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.943091    4712 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:58:35.949355    4712 pod_ready.go:92] pod "kube-apiserver-ha-136200" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:35.949355    4712 pod_ready.go:81] duration metric: took 407.425ms for pod "kube-apiserver-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.949355    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:36.143537    4712 request.go:629] Waited for 193.9374ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-136200-m02
	I0501 02:58:36.143801    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-136200-m02
	I0501 02:58:36.143835    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:36.143835    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:36.143835    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:36.149992    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:36.331653    4712 request.go:629] Waited for 180.2785ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:58:36.331653    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:58:36.331653    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:36.331653    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:36.331653    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:36.337290    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:36.338458    4712 pod_ready.go:92] pod "kube-apiserver-ha-136200-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:36.338521    4712 pod_ready.go:81] duration metric: took 389.1629ms for pod "kube-apiserver-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:36.338521    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-136200-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:36.533514    4712 request.go:629] Waited for 194.8709ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-136200-m03
	I0501 02:58:36.533967    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-136200-m03
	I0501 02:58:36.534181    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:36.534181    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:36.534181    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:36.548236    4712 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0501 02:58:36.737561    4712 request.go:629] Waited for 188.1304ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:36.737864    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:36.737942    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:36.737942    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:36.738002    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:36.742410    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:58:36.743400    4712 pod_ready.go:92] pod "kube-apiserver-ha-136200-m03" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:36.743400    4712 pod_ready.go:81] duration metric: took 404.8131ms for pod "kube-apiserver-ha-136200-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:36.743400    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:36.942223    4712 request.go:629] Waited for 198.605ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-136200
	I0501 02:58:36.942445    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-136200
	I0501 02:58:36.942445    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:36.942445    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:36.942445    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:36.947749    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:37.131974    4712 request.go:629] Waited for 183.3149ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:37.132232    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:37.132323    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:37.132323    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:37.132323    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:37.137476    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:37.138446    4712 pod_ready.go:92] pod "kube-controller-manager-ha-136200" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:37.138446    4712 pod_ready.go:81] duration metric: took 395.044ms for pod "kube-controller-manager-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:37.138446    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:37.333778    4712 request.go:629] Waited for 195.2258ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-136200-m02
	I0501 02:58:37.334044    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-136200-m02
	I0501 02:58:37.334044    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:37.334044    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:37.334044    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:37.338704    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:58:37.538179    4712 request.go:629] Waited for 197.0874ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:58:37.538437    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:58:37.538500    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:37.538500    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:37.538500    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:37.544773    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:58:37.544773    4712 pod_ready.go:92] pod "kube-controller-manager-ha-136200-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:37.544773    4712 pod_ready.go:81] duration metric: took 406.3235ms for pod "kube-controller-manager-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:37.544773    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-136200-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:37.743876    4712 request.go:629] Waited for 199.1018ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-136200-m03
	I0501 02:58:37.744106    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-136200-m03
	I0501 02:58:37.744106    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:37.744106    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:37.744106    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:37.749628    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:37.931954    4712 request.go:629] Waited for 180.0772ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:37.932054    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:37.932132    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:37.932132    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:37.932132    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:37.937302    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:37.937875    4712 pod_ready.go:92] pod "kube-controller-manager-ha-136200-m03" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:37.937875    4712 pod_ready.go:81] duration metric: took 393.0991ms for pod "kube-controller-manager-ha-136200-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:37.937875    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8f67k" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:38.134928    4712 request.go:629] Waited for 196.7268ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8f67k
	I0501 02:58:38.134928    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8f67k
	I0501 02:58:38.135164    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:38.135164    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:38.135164    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:38.151320    4712 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0501 02:58:38.340422    4712 request.go:629] Waited for 186.7144ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:38.340523    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:38.340523    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:38.340523    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:38.340523    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:38.344835    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:58:38.346933    4712 pod_ready.go:92] pod "kube-proxy-8f67k" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:38.347124    4712 pod_ready.go:81] duration metric: took 409.2461ms for pod "kube-proxy-8f67k" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:38.347124    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9ml9x" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:38.529397    4712 request.go:629] Waited for 182.0139ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9ml9x
	I0501 02:58:38.529683    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9ml9x
	I0501 02:58:38.529776    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:38.529776    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:38.529776    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:38.535530    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:38.733704    4712 request.go:629] Waited for 197.3369ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:38.733854    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:38.733854    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:38.733854    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:38.733854    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:38.739456    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:38.741035    4712 pod_ready.go:92] pod "kube-proxy-9ml9x" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:38.741035    4712 pod_ready.go:81] duration metric: took 393.9082ms for pod "kube-proxy-9ml9x" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:38.741141    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zj5jv" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:38.936294    4712 request.go:629] Waited for 194.9804ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zj5jv
	I0501 02:58:38.936492    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zj5jv
	I0501 02:58:38.936492    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:38.936492    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:38.936492    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:38.941904    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:39.139076    4712 request.go:629] Waited for 195.5675ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:58:39.139516    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:58:39.139516    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:39.139516    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:39.139590    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:39.146156    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:58:39.146839    4712 pod_ready.go:92] pod "kube-proxy-zj5jv" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:39.147389    4712 pod_ready.go:81] duration metric: took 406.2452ms for pod "kube-proxy-zj5jv" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:39.147389    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:39.331771    4712 request.go:629] Waited for 183.3466ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200
	I0501 02:58:39.331839    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200
	I0501 02:58:39.331839    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:39.331839    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:39.331839    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:39.338962    4712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:58:39.529638    4712 request.go:629] Waited for 189.8551ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:39.529880    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:39.529880    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:39.529880    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:39.529880    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:39.535423    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:39.536281    4712 pod_ready.go:92] pod "kube-scheduler-ha-136200" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:39.536496    4712 pod_ready.go:81] duration metric: took 389.1041ms for pod "kube-scheduler-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:39.536496    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:39.733532    4712 request.go:629] Waited for 196.8225ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200-m02
	I0501 02:58:39.733532    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200-m02
	I0501 02:58:39.733755    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:39.733755    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:39.733755    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:39.738768    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:39.936556    4712 request.go:629] Waited for 196.8464ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:58:39.936957    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:58:39.936957    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:39.936957    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:39.937066    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:39.942275    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:39.942447    4712 pod_ready.go:92] pod "kube-scheduler-ha-136200-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:39.943009    4712 pod_ready.go:81] duration metric: took 406.5101ms for pod "kube-scheduler-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:39.943009    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-136200-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:40.137743    4712 request.go:629] Waited for 194.2926ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200-m03
	I0501 02:58:40.137974    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200-m03
	I0501 02:58:40.137974    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:40.138045    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:40.138045    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:40.143795    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:40.340161    4712 request.go:629] Waited for 194.6485ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:40.340307    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:40.340307    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:40.340368    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:40.340368    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:40.346127    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:40.347243    4712 pod_ready.go:92] pod "kube-scheduler-ha-136200-m03" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:40.347243    4712 pod_ready.go:81] duration metric: took 404.2307ms for pod "kube-scheduler-ha-136200-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:40.347243    4712 pod_ready.go:38] duration metric: took 5.2097122s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 02:58:40.347243    4712 api_server.go:52] waiting for apiserver process to appear ...
	I0501 02:58:40.361809    4712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:58:40.399669    4712 api_server.go:72] duration metric: took 17.4115847s to wait for apiserver process to appear ...
	I0501 02:58:40.399766    4712 api_server.go:88] waiting for apiserver healthz status ...
	I0501 02:58:40.399822    4712 api_server.go:253] Checking apiserver healthz at https://172.28.217.218:8443/healthz ...
	I0501 02:58:40.410080    4712 api_server.go:279] https://172.28.217.218:8443/healthz returned 200:
	ok
	I0501 02:58:40.410375    4712 round_trippers.go:463] GET https://172.28.217.218:8443/version
	I0501 02:58:40.410503    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:40.410503    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:40.410503    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:40.412638    4712 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:58:40.413725    4712 api_server.go:141] control plane version: v1.30.0
	I0501 02:58:40.413725    4712 api_server.go:131] duration metric: took 13.9591ms to wait for apiserver health ...
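
Editor's note: api_server.go performs two probes above, a raw GET of /healthz that must return the literal body "ok", then a GET of /version to read the control-plane version (v1.30.0 here). A sketch of both via the discovery client; not minikube's actual implementation.

    package probes

    import (
    	"context"
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    )

    // checkAPIServer mirrors the two probes above: a raw GET of /healthz and
    // a server-version query.
    func checkAPIServer(cs *kubernetes.Clientset) error {
    	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
    	if err != nil {
    		return fmt.Errorf("healthz: %w", err)
    	}
    	fmt.Printf("healthz: %s\n", body) // "ok" on a healthy apiserver

    	info, err := cs.Discovery().ServerVersion()
    	if err != nil {
    		return fmt.Errorf("version: %w", err)
    	}
    	fmt.Printf("control plane version: %s\n", info.GitVersion) // e.g. v1.30.0
    	return nil
    }
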
	I0501 02:58:40.413725    4712 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 02:58:40.543767    4712 request.go:629] Waited for 129.9651ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:58:40.543975    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:58:40.543975    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:40.543975    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:40.543975    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:40.554206    4712 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0501 02:58:40.565423    4712 system_pods.go:59] 24 kube-system pods found
	I0501 02:58:40.565423    4712 system_pods.go:61] "coredns-7db6d8ff4d-2j8mj" [f945c979-ae51-4c8e-acf9-105adc3c83bc] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "coredns-7db6d8ff4d-rm4gm" [87b284b3-e8e1-452a-8c72-41a8bec62505] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "etcd-ha-136200" [509a726d-e9a1-4922-8e7e-f3d91ddef75f] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "etcd-ha-136200-m02" [8122eb28-1fdf-4ddf-ab30-c29e8bcb83c0] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "etcd-ha-136200-m03" [5f77fdbc-d14d-4d42-9880-fc7e5b2c58b8] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kindnet-kb2x4" [6e660648-3dce-469f-a2c2-c99f445ceb20] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kindnet-rlfkk" [ae08f4b9-98a8-4faf-ab4a-f04e900375bf] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kindnet-sj2rc" [c0e605a0-1182-4977-a8ba-fabe9617bd3c] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kube-apiserver-ha-136200" [53ea7d41-7132-4c89-9dbd-bedb2267b55f] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kube-apiserver-ha-136200-m02" [fc4036e1-5cc9-4f27-8299-97ee4a29e8b4] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kube-apiserver-ha-136200-m03" [cf2822d7-29da-4727-b4c1-19b593abbce8] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kube-controller-manager-ha-136200" [4c988ab2-e056-4a0e-88c9-b62839c84d9f] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kube-controller-manager-ha-136200-m02" [7a617a7e-7413-4f42-bfe2-763b7ace71ca] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kube-controller-manager-ha-136200-m03" [f72989a2-322b-4b6d-884f-8888b7fb6e36] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kube-proxy-8f67k" [9dedea03-3066-4852-98e2-10190699b2c5] Running
	I0501 02:58:40.566039    4712 system_pods.go:61] "kube-proxy-9ml9x" [c36f2b4f-ad90-4070-adf1-1ac165f86fdd] Running
	I0501 02:58:40.566039    4712 system_pods.go:61] "kube-proxy-zj5jv" [1802b341-6ac6-46b0-99a3-db02ae5d8e46] Running
	I0501 02:58:40.566039    4712 system_pods.go:61] "kube-scheduler-ha-136200" [6be37365-544a-4367-9852-6eaa5b60e6ad] Running
	I0501 02:58:40.566039    4712 system_pods.go:61] "kube-scheduler-ha-136200-m02" [b2ae6bb2-989b-4598-99e3-f8494b006f3e] Running
	I0501 02:58:40.566039    4712 system_pods.go:61] "kube-scheduler-ha-136200-m03" [79e48699-dd30-47da-8e29-685b9fb437b8] Running
	I0501 02:58:40.566039    4712 system_pods.go:61] "kube-vip-ha-136200" [f6f631ac-0ba9-413a-8810-8a80e4be81b8] Running
	I0501 02:58:40.566039    4712 system_pods.go:61] "kube-vip-ha-136200-m02" [598e76fa-0703-40eb-a62c-f3947f06d0e0] Running
	I0501 02:58:40.566039    4712 system_pods.go:61] "kube-vip-ha-136200-m03" [a1bd8449-1900-4366-86a5-49e758a44ebd] Running
	I0501 02:58:40.566039    4712 system_pods.go:61] "storage-provisioner" [ca2bdb41-e7f0-4aa4-b343-0e3a85a4c04e] Running
	I0501 02:58:40.566039    4712 system_pods.go:74] duration metric: took 152.3128ms to wait for pod list to return data ...
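
Editor's note: the system_pods.go census above lists every kube-system pod and records its phase. A sketch of the same check, under the assumption (which the log output supports) that the Running phase is the condition being verified.

    package probes

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // listSystemPods fetches every pod in kube-system and reports any that
    // is not in the Running phase.
    func listSystemPods(cs kubernetes.Interface) error {
    	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
    	for _, p := range pods.Items {
    		if p.Status.Phase != corev1.PodRunning {
    			fmt.Printf("%q is %s, not Running\n", p.Name, p.Status.Phase)
    		}
    	}
    	return nil
    }
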
	I0501 02:58:40.566039    4712 default_sa.go:34] waiting for default service account to be created ...
	I0501 02:58:40.731110    4712 request.go:629] Waited for 164.8435ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/default/serviceaccounts
	I0501 02:58:40.731110    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/default/serviceaccounts
	I0501 02:58:40.731110    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:40.731110    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:40.731110    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:40.736937    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:40.737529    4712 default_sa.go:45] found service account: "default"
	I0501 02:58:40.737568    4712 default_sa.go:55] duration metric: took 171.5277ms for default service account to be created ...
	I0501 02:58:40.737568    4712 system_pods.go:116] waiting for k8s-apps to be running ...
	I0501 02:58:40.936328    4712 request.go:629] Waited for 198.4062ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:58:40.936390    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:58:40.936390    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:40.936390    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:40.936390    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:40.946796    4712 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0501 02:58:40.961809    4712 system_pods.go:86] 24 kube-system pods found
	I0501 02:58:40.961809    4712 system_pods.go:89] "coredns-7db6d8ff4d-2j8mj" [f945c979-ae51-4c8e-acf9-105adc3c83bc] Running
	I0501 02:58:40.961809    4712 system_pods.go:89] "coredns-7db6d8ff4d-rm4gm" [87b284b3-e8e1-452a-8c72-41a8bec62505] Running
	I0501 02:58:40.961809    4712 system_pods.go:89] "etcd-ha-136200" [509a726d-e9a1-4922-8e7e-f3d91ddef75f] Running
	I0501 02:58:40.961809    4712 system_pods.go:89] "etcd-ha-136200-m02" [8122eb28-1fdf-4ddf-ab30-c29e8bcb83c0] Running
	I0501 02:58:40.961809    4712 system_pods.go:89] "etcd-ha-136200-m03" [5f77fdbc-d14d-4d42-9880-fc7e5b2c58b8] Running
	I0501 02:58:40.961809    4712 system_pods.go:89] "kindnet-kb2x4" [6e660648-3dce-469f-a2c2-c99f445ceb20] Running
	I0501 02:58:40.961809    4712 system_pods.go:89] "kindnet-rlfkk" [ae08f4b9-98a8-4faf-ab4a-f04e900375bf] Running
	I0501 02:58:40.961809    4712 system_pods.go:89] "kindnet-sj2rc" [c0e605a0-1182-4977-a8ba-fabe9617bd3c] Running
	I0501 02:58:40.961809    4712 system_pods.go:89] "kube-apiserver-ha-136200" [53ea7d41-7132-4c89-9dbd-bedb2267b55f] Running
	I0501 02:58:40.961809    4712 system_pods.go:89] "kube-apiserver-ha-136200-m02" [fc4036e1-5cc9-4f27-8299-97ee4a29e8b4] Running
	I0501 02:58:40.962364    4712 system_pods.go:89] "kube-apiserver-ha-136200-m03" [cf2822d7-29da-4727-b4c1-19b593abbce8] Running
	I0501 02:58:40.962364    4712 system_pods.go:89] "kube-controller-manager-ha-136200" [4c988ab2-e056-4a0e-88c9-b62839c84d9f] Running
	I0501 02:58:40.962364    4712 system_pods.go:89] "kube-controller-manager-ha-136200-m02" [7a617a7e-7413-4f42-bfe2-763b7ace71ca] Running
	I0501 02:58:40.962364    4712 system_pods.go:89] "kube-controller-manager-ha-136200-m03" [f72989a2-322b-4b6d-884f-8888b7fb6e36] Running
	I0501 02:58:40.962364    4712 system_pods.go:89] "kube-proxy-8f67k" [9dedea03-3066-4852-98e2-10190699b2c5] Running
	I0501 02:58:40.962364    4712 system_pods.go:89] "kube-proxy-9ml9x" [c36f2b4f-ad90-4070-adf1-1ac165f86fdd] Running
	I0501 02:58:40.962434    4712 system_pods.go:89] "kube-proxy-zj5jv" [1802b341-6ac6-46b0-99a3-db02ae5d8e46] Running
	I0501 02:58:40.962434    4712 system_pods.go:89] "kube-scheduler-ha-136200" [6be37365-544a-4367-9852-6eaa5b60e6ad] Running
	I0501 02:58:40.962434    4712 system_pods.go:89] "kube-scheduler-ha-136200-m02" [b2ae6bb2-989b-4598-99e3-f8494b006f3e] Running
	I0501 02:58:40.962434    4712 system_pods.go:89] "kube-scheduler-ha-136200-m03" [79e48699-dd30-47da-8e29-685b9fb437b8] Running
	I0501 02:58:40.962434    4712 system_pods.go:89] "kube-vip-ha-136200" [f6f631ac-0ba9-413a-8810-8a80e4be81b8] Running
	I0501 02:58:40.962434    4712 system_pods.go:89] "kube-vip-ha-136200-m02" [598e76fa-0703-40eb-a62c-f3947f06d0e0] Running
	I0501 02:58:40.962434    4712 system_pods.go:89] "kube-vip-ha-136200-m03" [a1bd8449-1900-4366-86a5-49e758a44ebd] Running
	I0501 02:58:40.962497    4712 system_pods.go:89] "storage-provisioner" [ca2bdb41-e7f0-4aa4-b343-0e3a85a4c04e] Running
	I0501 02:58:40.962521    4712 system_pods.go:126] duration metric: took 224.9515ms to wait for k8s-apps to be running ...
	I0501 02:58:40.962521    4712 system_svc.go:44] waiting for kubelet service to be running ....
	I0501 02:58:40.975583    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:58:41.007354    4712 system_svc.go:56] duration metric: took 44.7618ms WaitForService to wait for kubelet
	I0501 02:58:41.007354    4712 kubeadm.go:576] duration metric: took 18.0193266s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
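
Editor's note: the kubelet check above ships "sudo systemctl is-active --quiet service kubelet" over SSH into the VM; with --quiet, systemctl prints nothing and exits 0 only when the unit is active, so the exit code alone carries the answer. A local-exec sketch of the same probe, mirroring the log's argument list verbatim; minikube actually runs this through its ssh_runner, not on the host.

    package probes

    import (
    	"fmt"
    	"os/exec"
    )

    // kubeletActive runs the systemctl probe locally. A zero exit status
    // (nil error) means the unit reports active.
    func kubeletActive() bool {
    	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
    	if err != nil {
    		fmt.Println("kubelet is not active:", err)
    		return false
    	}
    	return true
    }
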
	I0501 02:58:41.007354    4712 node_conditions.go:102] verifying NodePressure condition ...
	I0501 02:58:41.140806    4712 request.go:629] Waited for 133.382ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes
	I0501 02:58:41.140922    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes
	I0501 02:58:41.140964    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:41.140964    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:41.141046    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:41.151428    4712 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0501 02:58:41.153995    4712 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 02:58:41.154053    4712 node_conditions.go:123] node cpu capacity is 2
	I0501 02:58:41.154053    4712 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 02:58:41.154113    4712 node_conditions.go:123] node cpu capacity is 2
	I0501 02:58:41.154113    4712 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 02:58:41.154113    4712 node_conditions.go:123] node cpu capacity is 2
	I0501 02:58:41.154113    4712 node_conditions.go:105] duration metric: took 146.7575ms to run NodePressure ...
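
Editor's note: the NodePressure pass above walks all three nodes and records per-node capacity (17734596Ki ephemeral storage, 2 CPUs each). A sketch of reading those capacity figures from the node status; minikube's node_conditions.go also evaluates the pressure conditions themselves, which this sketch omits.

    package probes

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // printNodeCapacity lists every node and prints the ephemeral-storage
    // and CPU capacity reported in its status.
    func printNodeCapacity(cs kubernetes.Interface) error {
    	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	for _, n := range nodes.Items {
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
    	}
    	return nil
    }
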
	I0501 02:58:41.154113    4712 start.go:240] waiting for startup goroutines ...
	I0501 02:58:41.154113    4712 start.go:254] writing updated cluster config ...
	I0501 02:58:41.168562    4712 ssh_runner.go:195] Run: rm -f paused
	I0501 02:58:41.321592    4712 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0501 02:58:41.326673    4712 out.go:177] * Done! kubectl is now configured to use "ha-136200" cluster and "default" namespace by default
	
	
	==> Docker <==
	May 01 02:59:21 ha-136200 dockerd[1335]: time="2024-05-01T02:59:21.649852992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 03:00:25 ha-136200 dockerd[1329]: 2024/05/01 03:00:25 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:00:25 ha-136200 dockerd[1329]: 2024/05/01 03:00:25 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:00:25 ha-136200 dockerd[1329]: 2024/05/01 03:00:25 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:00:25 ha-136200 dockerd[1329]: 2024/05/01 03:00:25 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:00:25 ha-136200 dockerd[1329]: 2024/05/01 03:00:25 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:00:25 ha-136200 dockerd[1329]: 2024/05/01 03:00:25 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:00:25 ha-136200 dockerd[1329]: 2024/05/01 03:00:25 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:00:25 ha-136200 dockerd[1329]: 2024/05/01 03:00:25 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:04:47 ha-136200 dockerd[1329]: 2024/05/01 03:04:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:04:47 ha-136200 dockerd[1329]: 2024/05/01 03:04:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:04:47 ha-136200 dockerd[1329]: 2024/05/01 03:04:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:04:47 ha-136200 dockerd[1329]: 2024/05/01 03:04:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:04:47 ha-136200 dockerd[1329]: 2024/05/01 03:04:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:04:48 ha-136200 dockerd[1329]: 2024/05/01 03:04:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:04:48 ha-136200 dockerd[1329]: 2024/05/01 03:04:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:04:48 ha-136200 dockerd[1329]: 2024/05/01 03:04:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:06:41 ha-136200 dockerd[1329]: 2024/05/01 03:06:41 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:06:41 ha-136200 dockerd[1329]: 2024/05/01 03:06:41 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:06:41 ha-136200 dockerd[1329]: 2024/05/01 03:06:41 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:06:42 ha-136200 dockerd[1329]: 2024/05/01 03:06:42 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:06:42 ha-136200 dockerd[1329]: 2024/05/01 03:06:42 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:06:42 ha-136200 dockerd[1329]: 2024/05/01 03:06:42 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:06:42 ha-136200 dockerd[1329]: 2024/05/01 03:06:42 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:06:42 ha-136200 dockerd[1329]: 2024/05/01 03:06:42 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bb23816e7b6b8       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   9 minutes ago       Running             busybox                   0                   c61d49828a30c       busybox-fc5497c4f-6mlkh
	229343dc7dba5       cbb01a7bd410d                                                                                         17 minutes ago      Running             coredns                   0                   54bbf0662d422       coredns-7db6d8ff4d-rm4gm
	247f815bf0531       6e38f40d628db                                                                                         17 minutes ago      Running             storage-provisioner       0                   aaa3d1f50041e       storage-provisioner
	822aaf8c270e3       cbb01a7bd410d                                                                                         17 minutes ago      Running             coredns                   0                   cadf8314e6ab7       coredns-7db6d8ff4d-2j8mj
	c09511b7df643       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              17 minutes ago      Running             kindnet-cni               0                   bdd01e6cca1ed       kindnet-sj2rc
	562cd55ab9702       a0bf559e280cf                                                                                         17 minutes ago      Running             kube-proxy                0                   579e0dba427c2       kube-proxy-8f67k
	1c063bfe224cd       ghcr.io/kube-vip/kube-vip@sha256:82698885b3b5f926cd940b7000549f3d43850cb6565a708162900c1475a83016     18 minutes ago      Running             kube-vip                  0                   7f28f99b3c8a8       kube-vip-ha-136200
	b6454ceb34cad       259c8277fcbbc                                                                                         18 minutes ago      Running             kube-scheduler            0                   e6cf1f3e651b3       kube-scheduler-ha-136200
	8ff4bf0570939       c42f13656d0b2                                                                                         18 minutes ago      Running             kube-apiserver            0                   2455e947d4906       kube-apiserver-ha-136200
	8fa3aa565b366       c7aad43836fa5                                                                                         18 minutes ago      Running             kube-controller-manager   0                   c7e42fd34e247       kube-controller-manager-ha-136200
	8b0d01885db55       3861cfcd7c04c                                                                                         18 minutes ago      Running             etcd                      0                   da46759fd8e15       etcd-ha-136200
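	The table above is gathered from inside the guest. A minimal sketch for reproducing it by hand, assuming crictl is present in the minikube VM and using the cri-dockerd socket path shown in the node annotations below:
	
	    minikube -p ha-136200 ssh -- sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a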
	
	
	==> coredns [229343dc7dba] <==
	[INFO] 10.244.1.2:38893 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.138771945s
	[INFO] 10.244.1.2:42460 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000276501s
	[INFO] 10.244.1.2:46275 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.0000672s
	[INFO] 10.244.2.2:34687 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.040099987s
	[INFO] 10.244.2.2:56378 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000284202s
	[INFO] 10.244.2.2:56092 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000345802s
	[INFO] 10.244.2.2:52745 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000349302s
	[INFO] 10.244.2.2:34736 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000095201s
	[INFO] 10.244.0.4:51567 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000267102s
	[INFO] 10.244.0.4:33148 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000178701s
	[INFO] 10.244.1.2:43398 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000089301s
	[INFO] 10.244.1.2:52211 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001122s
	[INFO] 10.244.1.2:35470 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.013228661s
	[INFO] 10.244.1.2:40781 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000174701s
	[INFO] 10.244.1.2:45257 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000274201s
	[INFO] 10.244.1.2:36114 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000165601s
	[INFO] 10.244.2.2:56600 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000371102s
	[INFO] 10.244.2.2:39742 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000250502s
	[INFO] 10.244.0.4:45933 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116901s
	[INFO] 10.244.0.4:53681 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000082001s
	[INFO] 10.244.2.2:38830 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000232701s
	[INFO] 10.244.0.4:51196 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.001489507s
	[INFO] 10.244.0.4:58773 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000264301s
	[INFO] 10.244.0.4:43314 - 5 "PTR IN 1.208.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.013461063s
	[INFO] 10.244.1.2:41778 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000092301s
	
	
	==> coredns [822aaf8c270e] <==
	[INFO] 10.244.2.2:41813 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000217501s
	[INFO] 10.244.2.2:54888 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.032885853s
	[INFO] 10.244.0.4:49712 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126101s
	[INFO] 10.244.0.4:55974 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012564658s
	[INFO] 10.244.0.4:45253 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000139901s
	[INFO] 10.244.0.4:60045 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001515s
	[INFO] 10.244.0.4:39879 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000175501s
	[INFO] 10.244.0.4:42089 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000310501s
	[INFO] 10.244.1.2:53821 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111101s
	[INFO] 10.244.1.2:42651 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000116201s
	[INFO] 10.244.2.2:34505 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078s
	[INFO] 10.244.2.2:54873 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001606s
	[INFO] 10.244.0.4:60573 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001105s
	[INFO] 10.244.0.4:37086 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000727s
	[INFO] 10.244.1.2:52370 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123901s
	[INFO] 10.244.1.2:35190 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000277501s
	[INFO] 10.244.1.2:42611 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000158301s
	[INFO] 10.244.1.2:36993 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000280201s
	[INFO] 10.244.2.2:52181 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000206701s
	[INFO] 10.244.2.2:37229 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000092101s
	[INFO] 10.244.2.2:56027 - 5 "PTR IN 1.208.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001251s
	[INFO] 10.244.0.4:55246 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000211601s
	[INFO] 10.244.1.2:57784 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000270801s
	[INFO] 10.244.1.2:39482 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001183s
	[INFO] 10.244.1.2:53277 - 5 "PTR IN 1.208.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000078801s
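	Both coredns excerpts correspond to the replicas in the container-status table. To pull a full log for either pod (assuming the ha-136200 kubeconfig context), one could run:
	
	    kubectl --context ha-136200 -n kube-system logs coredns-7db6d8ff4d-2j8mj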
	
	
	==> describe nodes <==
	Name:               ha-136200
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-136200
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	                    minikube.k8s.io/name=ha-136200
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_01T02_50_30_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 May 2024 02:50:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-136200
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 May 2024 03:08:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 May 2024 03:04:38 +0000   Wed, 01 May 2024 02:50:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 May 2024 03:04:38 +0000   Wed, 01 May 2024 02:50:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 May 2024 03:04:38 +0000   Wed, 01 May 2024 02:50:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 May 2024 03:04:38 +0000   Wed, 01 May 2024 02:50:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.217.218
	  Hostname:    ha-136200
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 bd5a02b3729c454c81fac1ddb77470ea
	  System UUID:                feb48805-7018-ee45-9dd1-70d50cb8dabe
	  Boot ID:                    f931e3ee-8c2d-4859-8d97-8671a4247530
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-6mlkh              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                 coredns-7db6d8ff4d-2j8mj             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 coredns-7db6d8ff4d-rm4gm             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 etcd-ha-136200                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         18m
	  kube-system                 kindnet-sj2rc                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17m
	  kube-system                 kube-apiserver-ha-136200             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-ha-136200    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-8f67k                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-ha-136200             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-vip-ha-136200                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 17m                kube-proxy       
	  Normal  NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node ha-136200 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node ha-136200 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node ha-136200 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m                kubelet          Node ha-136200 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m                kubelet          Node ha-136200 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m                kubelet          Node ha-136200 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           17m                node-controller  Node ha-136200 event: Registered Node ha-136200 in Controller
	  Normal  NodeReady                17m                kubelet          Node ha-136200 status is now: NodeReady
	  Normal  RegisteredNode           13m                node-controller  Node ha-136200 event: Registered Node ha-136200 in Controller
	  Normal  RegisteredNode           9m55s              node-controller  Node ha-136200 event: Registered Node ha-136200 in Controller
	
	
	Name:               ha-136200-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-136200-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	                    minikube.k8s.io/name=ha-136200
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_01T02_54_28_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 May 2024 02:54:21 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-136200-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 May 2024 03:07:06 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 01 May 2024 03:04:35 +0000   Wed, 01 May 2024 03:07:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 01 May 2024 03:04:35 +0000   Wed, 01 May 2024 03:07:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 01 May 2024 03:04:35 +0000   Wed, 01 May 2024 03:07:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 01 May 2024 03:04:35 +0000   Wed, 01 May 2024 03:07:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.28.213.142
	  Hostname:    ha-136200-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 b20b8a63378b4be990a267d65bc5017b
	  System UUID:                f54ef658-ded9-8245-9d86-fec94474eff5
	  Boot ID:                    b6a9b4fd-1abd-4ef4-a3a8-bc0c39ab4624
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-pc6wt                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                 etcd-ha-136200-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-kb2x4                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-136200-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-136200-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-zj5jv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-136200-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-136200-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  RegisteredNode           14m                node-controller  Node ha-136200-m02 event: Registered Node ha-136200-m02 in Controller
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node ha-136200-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node ha-136200-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node ha-136200-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node ha-136200-m02 event: Registered Node ha-136200-m02 in Controller
	  Normal  RegisteredNode           9m55s              node-controller  Node ha-136200-m02 event: Registered Node ha-136200-m02 in Controller
	  Normal  NodeNotReady             45s                node-controller  Node ha-136200-m02 status is now: NodeNotReady
	
	
	Name:               ha-136200-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-136200-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	                    minikube.k8s.io/name=ha-136200
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_01T02_58_22_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 May 2024 02:58:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-136200-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 May 2024 03:08:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 May 2024 03:04:51 +0000   Wed, 01 May 2024 02:58:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 May 2024 03:04:51 +0000   Wed, 01 May 2024 02:58:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 May 2024 03:04:51 +0000   Wed, 01 May 2024 02:58:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 May 2024 03:04:51 +0000   Wed, 01 May 2024 02:58:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.216.62
	  Hostname:    ha-136200-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 352997c1e27d48bb8dff5ae5f17e228a
	  System UUID:                0e4a669f-6d5f-be47-a143-5d2db1558741
	  Boot ID:                    8ce378d2-4a7e-40de-aab0-8bc599c3d157
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-2gr4g                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                 etcd-ha-136200-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-rlfkk                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-ha-136200-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-136200-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-9ml9x                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-136200-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-136200-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node ha-136200-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node ha-136200-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node ha-136200-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10m                node-controller  Node ha-136200-m03 event: Registered Node ha-136200-m03 in Controller
	  Normal  RegisteredNode           10m                node-controller  Node ha-136200-m03 event: Registered Node ha-136200-m03 in Controller
	  Normal  RegisteredNode           9m55s              node-controller  Node ha-136200-m03 event: Registered Node ha-136200-m03 in Controller
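	Of the three control-plane nodes described above, only ha-136200-m02 is degraded: its conditions are Unknown, it carries node.kubernetes.io/unreachable taints, and the NodeNotReady event fired 45s before this capture, all because its kubelet stopped posting status at 03:07:47. Any single node's block can be regenerated with, for example:
	
	    kubectl --context ha-136200 describe node ha-136200-m02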
	
	
	==> dmesg <==
	[  +7.445343] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[May 1 02:49] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.218573] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[ +31.318095] systemd-fstab-generator[950]: Ignoring "noauto" option for root device
	[  +0.121878] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.646066] systemd-fstab-generator[989]: Ignoring "noauto" option for root device
	[  +0.241331] systemd-fstab-generator[1001]: Ignoring "noauto" option for root device
	[  +0.276456] systemd-fstab-generator[1015]: Ignoring "noauto" option for root device
	[  +2.872310] systemd-fstab-generator[1184]: Ignoring "noauto" option for root device
	[  +0.245693] systemd-fstab-generator[1196]: Ignoring "noauto" option for root device
	[  +0.234055] systemd-fstab-generator[1209]: Ignoring "noauto" option for root device
	[  +0.318386] systemd-fstab-generator[1224]: Ignoring "noauto" option for root device
	[May 1 02:50] systemd-fstab-generator[1321]: Ignoring "noauto" option for root device
	[  +0.117675] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.894847] systemd-fstab-generator[1526]: Ignoring "noauto" option for root device
	[  +6.744854] systemd-fstab-generator[1728]: Ignoring "noauto" option for root device
	[  +0.118239] kauditd_printk_skb: 73 callbacks suppressed
	[  +6.246999] kauditd_printk_skb: 67 callbacks suppressed
	[  +4.464074] systemd-fstab-generator[2223]: Ignoring "noauto" option for root device
	[ +14.473066] kauditd_printk_skb: 17 callbacks suppressed
	[  +7.151247] kauditd_printk_skb: 29 callbacks suppressed
	[May 1 02:54] kauditd_printk_skb: 26 callbacks suppressed
	[May 1 03:02] hrtimer: interrupt took 2691714 ns
	
	
	==> etcd [8b0d01885db5] <==
	{"level":"warn","ts":"2024-05-01T03:08:32.224354Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d5cb0dbd3e937195","from":"d5cb0dbd3e937195","remote-peer-id":"e80b4c0e2412e141","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T03:08:32.269627Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d5cb0dbd3e937195","from":"d5cb0dbd3e937195","remote-peer-id":"e80b4c0e2412e141","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T03:08:32.359491Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d5cb0dbd3e937195","from":"d5cb0dbd3e937195","remote-peer-id":"e80b4c0e2412e141","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T03:08:32.368432Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d5cb0dbd3e937195","from":"d5cb0dbd3e937195","remote-peer-id":"e80b4c0e2412e141","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T03:08:32.368613Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d5cb0dbd3e937195","from":"d5cb0dbd3e937195","remote-peer-id":"e80b4c0e2412e141","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T03:08:32.37888Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d5cb0dbd3e937195","from":"d5cb0dbd3e937195","remote-peer-id":"e80b4c0e2412e141","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T03:08:32.398995Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d5cb0dbd3e937195","from":"d5cb0dbd3e937195","remote-peer-id":"e80b4c0e2412e141","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T03:08:32.417942Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d5cb0dbd3e937195","from":"d5cb0dbd3e937195","remote-peer-id":"e80b4c0e2412e141","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T03:08:32.424674Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d5cb0dbd3e937195","from":"d5cb0dbd3e937195","remote-peer-id":"e80b4c0e2412e141","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T03:08:32.443711Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d5cb0dbd3e937195","from":"d5cb0dbd3e937195","remote-peer-id":"e80b4c0e2412e141","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T03:08:32.453566Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d5cb0dbd3e937195","from":"d5cb0dbd3e937195","remote-peer-id":"e80b4c0e2412e141","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T03:08:32.463585Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d5cb0dbd3e937195","from":"d5cb0dbd3e937195","remote-peer-id":"e80b4c0e2412e141","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T03:08:32.468189Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d5cb0dbd3e937195","from":"d5cb0dbd3e937195","remote-peer-id":"e80b4c0e2412e141","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T03:08:32.479749Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d5cb0dbd3e937195","from":"d5cb0dbd3e937195","remote-peer-id":"e80b4c0e2412e141","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T03:08:32.485072Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d5cb0dbd3e937195","from":"d5cb0dbd3e937195","remote-peer-id":"e80b4c0e2412e141","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T03:08:32.504298Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d5cb0dbd3e937195","from":"d5cb0dbd3e937195","remote-peer-id":"e80b4c0e2412e141","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T03:08:32.514123Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d5cb0dbd3e937195","from":"d5cb0dbd3e937195","remote-peer-id":"e80b4c0e2412e141","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T03:08:32.522554Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d5cb0dbd3e937195","from":"d5cb0dbd3e937195","remote-peer-id":"e80b4c0e2412e141","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T03:08:32.528703Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d5cb0dbd3e937195","from":"d5cb0dbd3e937195","remote-peer-id":"e80b4c0e2412e141","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T03:08:32.534234Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d5cb0dbd3e937195","from":"d5cb0dbd3e937195","remote-peer-id":"e80b4c0e2412e141","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T03:08:32.547657Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d5cb0dbd3e937195","from":"d5cb0dbd3e937195","remote-peer-id":"e80b4c0e2412e141","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T03:08:32.555853Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d5cb0dbd3e937195","from":"d5cb0dbd3e937195","remote-peer-id":"e80b4c0e2412e141","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T03:08:32.569698Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d5cb0dbd3e937195","from":"d5cb0dbd3e937195","remote-peer-id":"e80b4c0e2412e141","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T03:08:32.573662Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d5cb0dbd3e937195","from":"d5cb0dbd3e937195","remote-peer-id":"e80b4c0e2412e141","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T03:08:32.634487Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d5cb0dbd3e937195","from":"d5cb0dbd3e937195","remote-peer-id":"e80b4c0e2412e141","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 03:08:32 up 20 min,  0 users,  load average: 1.47, 0.73, 0.43
	Linux ha-136200 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [c09511b7df64] <==
	I0501 03:07:43.233479       1 main.go:250] Node ha-136200-m03 has CIDR [10.244.2.0/24] 
	I0501 03:07:53.249228       1 main.go:223] Handling node with IPs: map[172.28.217.218:{}]
	I0501 03:07:53.249283       1 main.go:227] handling current node
	I0501 03:07:53.249355       1 main.go:223] Handling node with IPs: map[172.28.213.142:{}]
	I0501 03:07:53.249370       1 main.go:250] Node ha-136200-m02 has CIDR [10.244.1.0/24] 
	I0501 03:07:53.249545       1 main.go:223] Handling node with IPs: map[172.28.216.62:{}]
	I0501 03:07:53.249581       1 main.go:250] Node ha-136200-m03 has CIDR [10.244.2.0/24] 
	I0501 03:08:03.268097       1 main.go:223] Handling node with IPs: map[172.28.217.218:{}]
	I0501 03:08:03.268197       1 main.go:227] handling current node
	I0501 03:08:03.268227       1 main.go:223] Handling node with IPs: map[172.28.213.142:{}]
	I0501 03:08:03.268235       1 main.go:250] Node ha-136200-m02 has CIDR [10.244.1.0/24] 
	I0501 03:08:03.268640       1 main.go:223] Handling node with IPs: map[172.28.216.62:{}]
	I0501 03:08:03.268699       1 main.go:250] Node ha-136200-m03 has CIDR [10.244.2.0/24] 
	I0501 03:08:13.278833       1 main.go:223] Handling node with IPs: map[172.28.217.218:{}]
	I0501 03:08:13.279123       1 main.go:227] handling current node
	I0501 03:08:13.279312       1 main.go:223] Handling node with IPs: map[172.28.213.142:{}]
	I0501 03:08:13.279559       1 main.go:250] Node ha-136200-m02 has CIDR [10.244.1.0/24] 
	I0501 03:08:13.279909       1 main.go:223] Handling node with IPs: map[172.28.216.62:{}]
	I0501 03:08:13.280148       1 main.go:250] Node ha-136200-m03 has CIDR [10.244.2.0/24] 
	I0501 03:08:23.301155       1 main.go:223] Handling node with IPs: map[172.28.217.218:{}]
	I0501 03:08:23.301310       1 main.go:227] handling current node
	I0501 03:08:23.301327       1 main.go:223] Handling node with IPs: map[172.28.213.142:{}]
	I0501 03:08:23.301336       1 main.go:250] Node ha-136200-m02 has CIDR [10.244.1.0/24] 
	I0501 03:08:23.301526       1 main.go:223] Handling node with IPs: map[172.28.216.62:{}]
	I0501 03:08:23.305031       1 main.go:250] Node ha-136200-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [8ff4bf057093] <==
	Trace[670363995]: [511.709143ms] [511.709143ms] END
	I0501 02:54:22.977601       1 trace.go:236] Trace[1452834138]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:f62db0d2-4e8e-4640-9a4d-0aa19a03aa34,client:172.28.213.142,api-group:storage.k8s.io,api-version:v1,name:,subresource:,namespace:,protocol:HTTP/2.0,resource:csinodes,scope:resource,url:/apis/storage.k8s.io/v1/csinodes,user-agent:kubelet/v1.30.0 (linux/amd64) kubernetes/7c48c2b,verb:POST (01-May-2024 02:54:22.472) (total time: 504ms):
	Trace[1452834138]: ["Create etcd3" audit-id:f62db0d2-4e8e-4640-9a4d-0aa19a03aa34,key:/csinodes/ha-136200-m02,type:*storage.CSINode,resource:csinodes.storage.k8s.io 504ms (02:54:22.473)
	Trace[1452834138]:  ---"Txn call succeeded" 503ms (02:54:22.977)]
	Trace[1452834138]: [504.731076ms] [504.731076ms] END
	E0501 02:58:15.730056       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0501 02:58:15.730169       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0501 02:58:15.730071       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 11.2µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0501 02:58:15.731583       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0501 02:58:15.732500       1 timeout.go:142] post-timeout activity - time-elapsed: 2.647619ms, PATCH "/api/v1/namespaces/default/events/ha-136200-m03.17cb3e09c56bb983" result: <nil>
	E0501 02:59:25.456065       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61414: use of closed network connection
	E0501 02:59:26.016855       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61416: use of closed network connection
	E0501 02:59:26.743048       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61418: use of closed network connection
	E0501 02:59:27.423392       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61421: use of closed network connection
	E0501 02:59:28.036056       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61423: use of closed network connection
	E0501 02:59:28.618704       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61425: use of closed network connection
	E0501 02:59:29.166283       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61427: use of closed network connection
	E0501 02:59:29.771114       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61429: use of closed network connection
	E0501 02:59:30.328866       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61431: use of closed network connection
	E0501 02:59:31.360058       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61434: use of closed network connection
	E0501 02:59:41.926438       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61436: use of closed network connection
	E0501 02:59:42.497809       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61439: use of closed network connection
	E0501 02:59:53.089743       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61441: use of closed network connection
	E0501 02:59:53.660135       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61443: use of closed network connection
	E0501 03:00:04.225188       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61445: use of closed network connection
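	The "use of closed network connection" reads on 172.28.223.254:8443 mean the API server's listener saw the peer at 172.28.208.1 (likely the host side of the Hyper-V switch) close TCP connections abruptly; the burst follows the handler timeouts at 02:58:15. A direct readiness probe, assuming the same kubeconfig context, would be:
	
	    kubectl --context ha-136200 get --raw=/readyz?verbose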
	
	
	==> kube-controller-manager [8fa3aa565b36] <==
	I0501 02:50:58.734842       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="91.702µs"
	I0501 02:50:58.815553       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="27.110569ms"
	I0501 02:50:58.817069       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="234.005µs"
	I0501 02:50:58.859853       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="29.315916ms"
	I0501 02:50:58.862248       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="191.304µs"
	I0501 02:54:21.439127       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-136200-m02\" does not exist"
	I0501 02:54:21.501101       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-136200-m02" podCIDRs=["10.244.1.0/24"]
	I0501 02:54:21.914883       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-136200-m02"
	I0501 02:58:14.901209       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-136200-m03\" does not exist"
	I0501 02:58:14.933592       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-136200-m03" podCIDRs=["10.244.2.0/24"]
	I0501 02:58:16.990389       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-136200-m03"
	I0501 02:59:18.914466       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="150.158562ms"
	I0501 02:59:19.095324       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="180.785078ms"
	I0501 02:59:19.461767       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="365.331283ms"
	I0501 02:59:19.490263       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.541695ms"
	I0501 02:59:19.490899       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="65.9µs"
	I0501 02:59:21.446166       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.9µs"
	I0501 02:59:21.996495       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.097772ms"
	I0501 02:59:21.997082       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="185.301µs"
	I0501 02:59:22.122170       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="78.415164ms"
	I0501 02:59:22.122332       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.3µs"
	I0501 02:59:22.485058       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="83.861489ms"
	I0501 02:59:22.485150       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.8µs"
	I0501 03:07:47.413030       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.476887ms"
	I0501 03:07:47.413260       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="132.901µs"
	
	
	==> kube-proxy [562cd55ab970] <==
	I0501 02:50:44.069527       1 server_linux.go:69] "Using iptables proxy"
	I0501 02:50:44.111745       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.28.217.218"]
	I0501 02:50:44.171562       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0501 02:50:44.171703       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0501 02:50:44.171730       1 server_linux.go:165] "Using iptables Proxier"
	I0501 02:50:44.178320       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0501 02:50:44.180232       1 server.go:872] "Version info" version="v1.30.0"
	I0501 02:50:44.180271       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 02:50:44.184544       1 config.go:192] "Starting service config controller"
	I0501 02:50:44.185913       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0501 02:50:44.186319       1 config.go:101] "Starting endpoint slice config controller"
	I0501 02:50:44.186555       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0501 02:50:44.189915       1 config.go:319] "Starting node config controller"
	I0501 02:50:44.190110       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0501 02:50:44.287624       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0501 02:50:44.287761       1 shared_informer.go:320] Caches are synced for service config
	I0501 02:50:44.290292       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [b6454ceb34ca] <==
	W0501 02:50:26.797411       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0501 02:50:26.797624       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0501 02:50:26.830216       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0501 02:50:26.830267       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0501 02:50:26.925545       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0501 02:50:26.925605       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0501 02:50:26.948130       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0501 02:50:26.948245       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0501 02:50:27.027771       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0501 02:50:27.028119       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0501 02:50:27.045542       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0501 02:50:27.045577       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0501 02:50:27.049002       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0501 02:50:27.049031       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0501 02:50:30.148462       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0501 02:59:18.858485       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-pc6wt\": pod busybox-fc5497c4f-pc6wt is already assigned to node \"ha-136200-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-pc6wt" node="ha-136200-m03"
	E0501 02:59:18.859227       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-pc6wt\": pod busybox-fc5497c4f-pc6wt is already assigned to node \"ha-136200-m02\"" pod="default/busybox-fc5497c4f-pc6wt"
	E0501 02:59:18.932248       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-6mlkh\": pod busybox-fc5497c4f-6mlkh is already assigned to node \"ha-136200\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-6mlkh" node="ha-136200"
	E0501 02:59:18.932355       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 10f52d20-5605-40b5-8875-ceb0cb5c2e53(default/busybox-fc5497c4f-6mlkh) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-6mlkh"
	E0501 02:59:18.932383       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-6mlkh\": pod busybox-fc5497c4f-6mlkh is already assigned to node \"ha-136200\"" pod="default/busybox-fc5497c4f-6mlkh"
	I0501 02:59:18.932412       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-6mlkh" node="ha-136200"
	E0501 02:59:18.934021       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-2gr4g\": pod busybox-fc5497c4f-2gr4g is already assigned to node \"ha-136200-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-2gr4g" node="ha-136200-m03"
	E0501 02:59:18.934194       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod b6febdff-c378-4d33-94ae-8b321071f921(default/busybox-fc5497c4f-2gr4g) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-2gr4g"
	E0501 02:59:18.934386       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-2gr4g\": pod busybox-fc5497c4f-2gr4g is already assigned to node \"ha-136200-m03\"" pod="default/busybox-fc5497c4f-2gr4g"
	I0501 02:59:18.937753       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-2gr4g" node="ha-136200-m03"
	
	
	==> kubelet <==
	May 01 03:04:29 ha-136200 kubelet[2230]: E0501 03:04:29.306136    2230 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 03:04:29 ha-136200 kubelet[2230]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 03:04:29 ha-136200 kubelet[2230]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 03:04:29 ha-136200 kubelet[2230]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 03:04:29 ha-136200 kubelet[2230]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 03:05:29 ha-136200 kubelet[2230]: E0501 03:05:29.306156    2230 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 03:05:29 ha-136200 kubelet[2230]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 03:05:29 ha-136200 kubelet[2230]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 03:05:29 ha-136200 kubelet[2230]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 03:05:29 ha-136200 kubelet[2230]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 03:06:29 ha-136200 kubelet[2230]: E0501 03:06:29.306327    2230 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 03:06:29 ha-136200 kubelet[2230]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 03:06:29 ha-136200 kubelet[2230]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 03:06:29 ha-136200 kubelet[2230]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 03:06:29 ha-136200 kubelet[2230]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 03:07:29 ha-136200 kubelet[2230]: E0501 03:07:29.306835    2230 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 03:07:29 ha-136200 kubelet[2230]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 03:07:29 ha-136200 kubelet[2230]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 03:07:29 ha-136200 kubelet[2230]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 03:07:29 ha-136200 kubelet[2230]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 03:08:29 ha-136200 kubelet[2230]: E0501 03:08:29.308785    2230 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 03:08:29 ha-136200 kubelet[2230]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 03:08:29 ha-136200 kubelet[2230]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 03:08:29 ha-136200 kubelet[2230]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 03:08:29 ha-136200 kubelet[2230]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0501 03:08:23.973130    2484 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-136200 -n ha-136200
E0501 03:08:38.009606   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\client.crt: The system cannot find the path specified.
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-136200 -n ha-136200: (12.8376429s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-136200 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (111.02s)
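
Aside on the "Unable to resolve the current Docker CLI context" warning in the stderr block above: the long hex directory in the path is not random. The Docker CLI stores context metadata under `~/.docker/contexts/meta/<sha256(context name)>/meta.json`, so the digest identifies which context failed to resolve. Below is a minimal standalone sketch (a diagnostic helper, not part of the minikube test suite) that reproduces the digest and checks for the file the warning complains about; the home-directory lookup stands in for the `jenkins.minikube6` profile seen in the logs.

	package main

	import (
		"crypto/sha256"
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		// The Docker CLI names each context's metadata directory after
		// the SHA-256 digest of the context name.
		sum := sha256.Sum256([]byte("default"))
		fmt.Printf("sha256(\"default\") = %x\n", sum)
		// Prints 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f,
		// matching the directory in the warning above.

		// Check whether the meta.json the CLI is looking for exists.
		home, err := os.UserHomeDir()
		if err != nil {
			fmt.Println("cannot resolve home directory:", err)
			return
		}
		meta := filepath.Join(home, ".docker", "contexts", "meta",
			fmt.Sprintf("%x", sum), "meta.json")
		if _, err := os.Stat(meta); err != nil {
			fmt.Println("context metadata missing:", err)
		}
	}

The warning itself is cosmetic in these runs: as the surrounding log lines show, minikube continues past it and the node-start failure below is caused by `sudo systemctl restart docker` failing on the guest, not by the missing context file.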

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (315.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-136200 node start m02 -v=7 --alsologtostderr
E0501 03:11:18.211537   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt: The system cannot find the path specified.
E0501 03:11:34.964622   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt: The system cannot find the path specified.
ha_test.go:420: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-136200 node start m02 -v=7 --alsologtostderr: exit status 90 (3m1.3215144s)

                                                
                                                
-- stdout --
	* Starting "ha-136200-m02" control-plane node in "ha-136200" cluster
	* Restarting existing hyperv VM for "ha-136200-m02" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0501 03:09:09.180152   14328 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0501 03:09:09.265625   14328 out.go:291] Setting OutFile to fd 988 ...
	I0501 03:09:09.284966   14328 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:09:09.284966   14328 out.go:304] Setting ErrFile to fd 876...
	I0501 03:09:09.285073   14328 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:09:09.302490   14328 mustload.go:65] Loading cluster: ha-136200
	I0501 03:09:09.303468   14328 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 03:09:09.304473   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 03:09:11.382729   14328 main.go:141] libmachine: [stdout =====>] : Off
	
	I0501 03:09:11.382729   14328 main.go:141] libmachine: [stderr =====>] : 
	W0501 03:09:11.382729   14328 host.go:58] "ha-136200-m02" host status: Stopped
	I0501 03:09:11.389316   14328 out.go:177] * Starting "ha-136200-m02" control-plane node in "ha-136200" cluster
	I0501 03:09:11.391919   14328 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0501 03:09:11.392546   14328 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0501 03:09:11.392546   14328 cache.go:56] Caching tarball of preloaded images
	I0501 03:09:11.393068   14328 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0501 03:09:11.393280   14328 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0501 03:09:11.393374   14328 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json ...
	I0501 03:09:11.395932   14328 start.go:360] acquireMachinesLock for ha-136200-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 03:09:11.395932   14328 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-136200-m02"
	I0501 03:09:11.395932   14328 start.go:96] Skipping create...Using existing machine configuration
	I0501 03:09:11.396460   14328 fix.go:54] fixHost starting: m02
	I0501 03:09:11.396528   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 03:09:13.538906   14328 main.go:141] libmachine: [stdout =====>] : Off
	
	I0501 03:09:13.539708   14328 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:09:13.539708   14328 fix.go:112] recreateIfNeeded on ha-136200-m02: state=Stopped err=<nil>
	W0501 03:09:13.539778   14328 fix.go:138] unexpected machine state, will restart: <nil>
	I0501 03:09:13.542548   14328 out.go:177] * Restarting existing hyperv VM for "ha-136200-m02" ...
	I0501 03:09:13.544988   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-136200-m02
	I0501 03:09:16.673518   14328 main.go:141] libmachine: [stdout =====>] : 
	I0501 03:09:16.674542   14328 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:09:16.674578   14328 main.go:141] libmachine: Waiting for host to start...
	I0501 03:09:16.674578   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 03:09:18.996266   14328 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:09:18.996266   14328 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:09:18.996266   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 03:09:21.630878   14328 main.go:141] libmachine: [stdout =====>] : 
	I0501 03:09:21.630940   14328 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:09:22.632981   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 03:09:24.852192   14328 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:09:24.852192   14328 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:09:24.852192   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 03:09:27.471038   14328 main.go:141] libmachine: [stdout =====>] : 
	I0501 03:09:27.471038   14328 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:09:28.483266   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 03:09:30.725576   14328 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:09:30.725576   14328 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:09:30.725576   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 03:09:33.320013   14328 main.go:141] libmachine: [stdout =====>] : 
	I0501 03:09:33.320349   14328 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:09:34.329154   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 03:09:36.526977   14328 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:09:36.526977   14328 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:09:36.526977   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 03:09:39.105307   14328 main.go:141] libmachine: [stdout =====>] : 
	I0501 03:09:39.105307   14328 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:09:40.106804   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 03:09:42.334534   14328 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:09:42.334631   14328 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:09:42.334631   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 03:09:44.994835   14328 main.go:141] libmachine: [stdout =====>] : 172.28.221.64
	
	I0501 03:09:44.994835   14328 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:09:44.997930   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 03:09:47.172365   14328 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:09:47.173045   14328 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:09:47.173045   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 03:09:49.796779   14328 main.go:141] libmachine: [stdout =====>] : 172.28.221.64
	
	I0501 03:09:49.796779   14328 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:09:49.797992   14328 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json ...
	I0501 03:09:49.800724   14328 machine.go:94] provisionDockerMachine start ...
	I0501 03:09:49.800866   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 03:09:51.961283   14328 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:09:51.961283   14328 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:09:51.961561   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 03:09:54.580389   14328 main.go:141] libmachine: [stdout =====>] : 172.28.221.64
	
	I0501 03:09:54.580528   14328 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:09:54.587602   14328 main.go:141] libmachine: Using SSH client type: native
	I0501 03:09:54.588299   14328 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.221.64 22 <nil> <nil>}
	I0501 03:09:54.588299   14328 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 03:09:54.718800   14328 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0501 03:09:54.718951   14328 buildroot.go:166] provisioning hostname "ha-136200-m02"
	I0501 03:09:54.718951   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 03:09:56.920014   14328 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:09:56.920014   14328 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:09:56.920428   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 03:09:59.512451   14328 main.go:141] libmachine: [stdout =====>] : 172.28.221.64
	
	I0501 03:09:59.512451   14328 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:09:59.519157   14328 main.go:141] libmachine: Using SSH client type: native
	I0501 03:09:59.519890   14328 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.221.64 22 <nil> <nil>}
	I0501 03:09:59.519890   14328 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-136200-m02 && echo "ha-136200-m02" | sudo tee /etc/hostname
	I0501 03:09:59.671324   14328 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-136200-m02
	
	I0501 03:09:59.671324   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 03:10:01.816303   14328 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:10:01.816303   14328 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:10:01.816303   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 03:10:04.396797   14328 main.go:141] libmachine: [stdout =====>] : 172.28.221.64
	
	I0501 03:10:04.396797   14328 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:10:04.403587   14328 main.go:141] libmachine: Using SSH client type: native
	I0501 03:10:04.403996   14328 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.221.64 22 <nil> <nil>}
	I0501 03:10:04.403996   14328 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-136200-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-136200-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-136200-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 03:10:04.561109   14328 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 03:10:04.561109   14328 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0501 03:10:04.561109   14328 buildroot.go:174] setting up certificates
	I0501 03:10:04.561109   14328 provision.go:84] configureAuth start
	I0501 03:10:04.561698   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 03:10:06.715410   14328 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:10:06.715653   14328 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:10:06.715744   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 03:10:09.316807   14328 main.go:141] libmachine: [stdout =====>] : 172.28.221.64
	
	I0501 03:10:09.316919   14328 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:10:09.316919   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 03:10:11.463228   14328 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:10:11.464007   14328 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:10:11.464178   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 03:10:14.109598   14328 main.go:141] libmachine: [stdout =====>] : 172.28.221.64
	
	I0501 03:10:14.109598   14328 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:10:14.110091   14328 provision.go:143] copyHostCerts
	I0501 03:10:14.110237   14328 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0501 03:10:14.110407   14328 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0501 03:10:14.110407   14328 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0501 03:10:14.111210   14328 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0501 03:10:14.112511   14328 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0501 03:10:14.112804   14328 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0501 03:10:14.112953   14328 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0501 03:10:14.113151   14328 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0501 03:10:14.114275   14328 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0501 03:10:14.114552   14328 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0501 03:10:14.114583   14328 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0501 03:10:14.114614   14328 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0501 03:10:14.115523   14328 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-136200-m02 san=[127.0.0.1 172.28.221.64 ha-136200-m02 localhost minikube]
	I0501 03:10:14.348473   14328 provision.go:177] copyRemoteCerts
	I0501 03:10:14.363221   14328 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 03:10:14.363221   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 03:10:16.542483   14328 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:10:16.542483   14328 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:10:16.542483   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 03:10:19.210348   14328 main.go:141] libmachine: [stdout =====>] : 172.28.221.64
	
	I0501 03:10:19.210429   14328 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:10:19.210880   14328 sshutil.go:53] new ssh client: &{IP:172.28.221.64 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\id_rsa Username:docker}
	I0501 03:10:19.320052   14328 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9567292s)
	I0501 03:10:19.320052   14328 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0501 03:10:19.320670   14328 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 03:10:19.374012   14328 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0501 03:10:19.374538   14328 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0501 03:10:19.428078   14328 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0501 03:10:19.428078   14328 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0501 03:10:19.481753   14328 provision.go:87] duration metric: took 14.9205321s to configureAuth
	I0501 03:10:19.481753   14328 buildroot.go:189] setting minikube options for container-runtime
	I0501 03:10:19.482774   14328 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 03:10:19.482774   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 03:10:21.638648   14328 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:10:21.639128   14328 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:10:21.639128   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 03:10:24.286591   14328 main.go:141] libmachine: [stdout =====>] : 172.28.221.64
	
	I0501 03:10:24.286591   14328 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:10:24.292816   14328 main.go:141] libmachine: Using SSH client type: native
	I0501 03:10:24.293356   14328 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.221.64 22 <nil> <nil>}
	I0501 03:10:24.293356   14328 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0501 03:10:24.420456   14328 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0501 03:10:24.420456   14328 buildroot.go:70] root file system type: tmpfs
	I0501 03:10:24.421571   14328 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0501 03:10:24.421966   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 03:10:26.597164   14328 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:10:26.598098   14328 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:10:26.598229   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 03:10:29.232817   14328 main.go:141] libmachine: [stdout =====>] : 172.28.221.64
	
	I0501 03:10:29.232817   14328 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:10:29.239579   14328 main.go:141] libmachine: Using SSH client type: native
	I0501 03:10:29.240306   14328 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.221.64 22 <nil> <nil>}
	I0501 03:10:29.240306   14328 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0501 03:10:29.408977   14328 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0501 03:10:29.409123   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 03:10:31.577363   14328 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:10:31.577363   14328 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:10:31.577363   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 03:10:34.176649   14328 main.go:141] libmachine: [stdout =====>] : 172.28.221.64
	
	I0501 03:10:34.176891   14328 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:10:34.185049   14328 main.go:141] libmachine: Using SSH client type: native
	I0501 03:10:34.186240   14328 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.221.64 22 <nil> <nil>}
	I0501 03:10:34.186240   14328 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0501 03:10:36.804973   14328 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0501 03:10:36.804973   14328 machine.go:97] duration metric: took 47.0038338s to provisionDockerMachine
	I0501 03:10:36.805081   14328 start.go:293] postStartSetup for "ha-136200-m02" (driver="hyperv")
	I0501 03:10:36.805081   14328 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 03:10:36.819948   14328 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 03:10:36.819948   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 03:10:38.971944   14328 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:10:38.971944   14328 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:10:38.972368   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 03:10:41.631842   14328 main.go:141] libmachine: [stdout =====>] : 172.28.221.64
	
	I0501 03:10:41.632510   14328 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:10:41.633168   14328 sshutil.go:53] new ssh client: &{IP:172.28.221.64 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\id_rsa Username:docker}
	I0501 03:10:41.746902   14328 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9269165s)
	I0501 03:10:41.763148   14328 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 03:10:41.771154   14328 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 03:10:41.771154   14328 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0501 03:10:41.771778   14328 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0501 03:10:41.772998   14328 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> 142882.pem in /etc/ssl/certs
	I0501 03:10:41.773150   14328 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /etc/ssl/certs/142882.pem
	I0501 03:10:41.789577   14328 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 03:10:41.810272   14328 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /etc/ssl/certs/142882.pem (1708 bytes)
	I0501 03:10:41.867449   14328 start.go:296] duration metric: took 5.0623304s for postStartSetup
	I0501 03:10:41.884671   14328 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0501 03:10:41.884671   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 03:10:44.056082   14328 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:10:44.056135   14328 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:10:44.056135   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 03:10:46.745669   14328 main.go:141] libmachine: [stdout =====>] : 172.28.221.64
	
	I0501 03:10:46.745669   14328 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:10:46.746763   14328 sshutil.go:53] new ssh client: &{IP:172.28.221.64 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\id_rsa Username:docker}
	I0501 03:10:46.865651   14328 ssh_runner.go:235] Completed: sudo ls --almost-all -1 /var/lib/minikube/backup: (4.9809428s)
	I0501 03:10:46.865765   14328 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0501 03:10:46.879475   14328 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0501 03:10:46.958022   14328 fix.go:56] duration metric: took 1m35.5608452s for fixHost
	I0501 03:10:46.958174   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 03:10:49.114007   14328 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:10:49.114007   14328 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:10:49.114440   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 03:10:51.790327   14328 main.go:141] libmachine: [stdout =====>] : 172.28.221.64
	
	I0501 03:10:51.790327   14328 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:10:51.796683   14328 main.go:141] libmachine: Using SSH client type: native
	I0501 03:10:51.797094   14328 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.221.64 22 <nil> <nil>}
	I0501 03:10:51.797094   14328 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0501 03:10:51.926038   14328 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714533051.921420814
	
	I0501 03:10:51.926112   14328 fix.go:216] guest clock: 1714533051.921420814
	I0501 03:10:51.926112   14328 fix.go:229] Guest: 2024-05-01 03:10:51.921420814 +0000 UTC Remote: 2024-05-01 03:10:46.9581742 +0000 UTC m=+97.885286701 (delta=4.963246614s)
	I0501 03:10:51.926254   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 03:10:54.098200   14328 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:10:54.098200   14328 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:10:54.098859   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 03:10:56.739928   14328 main.go:141] libmachine: [stdout =====>] : 172.28.221.64
	
	I0501 03:10:56.739928   14328 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:10:56.746442   14328 main.go:141] libmachine: Using SSH client type: native
	I0501 03:10:56.747230   14328 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.221.64 22 <nil> <nil>}
	I0501 03:10:56.747230   14328 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714533051
	I0501 03:10:56.905944   14328 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed May  1 03:10:51 UTC 2024
	
	I0501 03:10:56.906031   14328 fix.go:236] clock set: Wed May  1 03:10:51 UTC 2024
	 (err=<nil>)
	I0501 03:10:56.906031   14328 start.go:83] releasing machines lock for "ha-136200-m02", held for 1m45.509308s
	I0501 03:10:56.906335   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 03:10:59.038924   14328 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:10:59.038924   14328 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:10:59.038924   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 03:11:01.722212   14328 main.go:141] libmachine: [stdout =====>] : 172.28.221.64
	
	I0501 03:11:01.722295   14328 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:11:01.728036   14328 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 03:11:01.728168   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 03:11:01.741977   14328 ssh_runner.go:195] Run: systemctl --version
	I0501 03:11:01.741977   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 03:11:04.079632   14328 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:11:04.079632   14328 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:11:04.079632   14328 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:11:04.079632   14328 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:11:04.079632   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 03:11:04.079632   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 03:11:06.799803   14328 main.go:141] libmachine: [stdout =====>] : 172.28.221.64
	
	I0501 03:11:06.799803   14328 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:11:06.800954   14328 sshutil.go:53] new ssh client: &{IP:172.28.221.64 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\id_rsa Username:docker}
	I0501 03:11:06.829812   14328 main.go:141] libmachine: [stdout =====>] : 172.28.221.64
	
	I0501 03:11:06.829812   14328 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:11:06.830256   14328 sshutil.go:53] new ssh client: &{IP:172.28.221.64 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\id_rsa Username:docker}
	I0501 03:11:06.896563   14328 ssh_runner.go:235] Completed: systemctl --version: (5.1545465s)
	I0501 03:11:06.910440   14328 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0501 03:11:07.018429   14328 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.2903537s)
	W0501 03:11:07.018429   14328 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 03:11:07.033482   14328 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 03:11:07.066471   14328 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0501 03:11:07.066603   14328 start.go:494] detecting cgroup driver to use...
	I0501 03:11:07.066959   14328 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 03:11:07.130367   14328 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0501 03:11:07.165074   14328 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0501 03:11:07.186833   14328 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0501 03:11:07.199999   14328 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0501 03:11:07.237815   14328 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 03:11:07.286239   14328 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0501 03:11:07.318189   14328 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 03:11:07.360994   14328 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 03:11:07.400052   14328 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0501 03:11:07.437550   14328 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0501 03:11:07.477338   14328 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0501 03:11:07.516876   14328 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 03:11:07.553784   14328 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 03:11:07.590691   14328 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:11:07.834580   14328 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0501 03:11:07.871016   14328 start.go:494] detecting cgroup driver to use...
	I0501 03:11:07.886184   14328 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0501 03:11:07.930542   14328 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 03:11:07.967897   14328 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 03:11:08.023315   14328 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 03:11:08.066397   14328 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 03:11:08.112784   14328 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0501 03:11:08.183336   14328 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 03:11:08.213725   14328 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 03:11:08.264439   14328 ssh_runner.go:195] Run: which cri-dockerd
	I0501 03:11:08.283437   14328 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0501 03:11:08.303461   14328 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0501 03:11:08.349733   14328 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0501 03:11:08.585195   14328 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0501 03:11:08.821748   14328 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0501 03:11:08.822062   14328 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0501 03:11:08.872940   14328 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:11:09.096269   14328 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0501 03:12:10.253000   14328 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1562724s)
	I0501 03:12:10.267012   14328 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0501 03:12:10.306767   14328 out.go:177] 
	W0501 03:12:10.308733   14328 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	May 01 03:10:34 ha-136200-m02 systemd[1]: Starting Docker Application Container Engine...
	May 01 03:10:34 ha-136200-m02 dockerd[662]: time="2024-05-01T03:10:34.865467463Z" level=info msg="Starting up"
	May 01 03:10:34 ha-136200-m02 dockerd[662]: time="2024-05-01T03:10:34.867757945Z" level=info msg="containerd not running, starting managed containerd"
	May 01 03:10:34 ha-136200-m02 dockerd[662]: time="2024-05-01T03:10:34.871893211Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=669
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.911064995Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.942624741Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.942746140Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.942834439Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.942854339Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.943573933Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.943690932Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.943938730Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.944116529Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.944142628Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.944155628Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.944791323Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.945594517Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.948739191Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.948874690Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.949095289Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.949197388Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.949757983Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.949888182Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.949908782Z" level=info msg="metadata content store policy set" policy=shared
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.968841429Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.968999528Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.969191826Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.969333525Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.969359525Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.969452524Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.970622815Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.970823813Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.970906913Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.971131311Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.971214310Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.971295609Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.971421608Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.971515208Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.971608107Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.971679206Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.971737606Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.971796305Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.971994104Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.972110503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.972187202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.972520800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.972733898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.972943996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.973091395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.973171894Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.973232394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.973349093Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.973371793Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.973388693Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.973404792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.973430792Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.973459992Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.973475992Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.973491692Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.973554191Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.973594591Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.973608691Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.973622291Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.973689790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.973708190Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.973720590Z" level=info msg="NRI interface is disabled by configuration."
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.974065087Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.974149686Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.974200886Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.974273485Z" level=info msg="containerd successfully booted in 0.066086s"
	May 01 03:10:35 ha-136200-m02 dockerd[662]: time="2024-05-01T03:10:35.934441781Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	May 01 03:10:36 ha-136200-m02 dockerd[662]: time="2024-05-01T03:10:36.177932424Z" level=info msg="Loading containers: start."
	May 01 03:10:36 ha-136200-m02 dockerd[662]: time="2024-05-01T03:10:36.601139506Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	May 01 03:10:36 ha-136200-m02 dockerd[662]: time="2024-05-01T03:10:36.692838705Z" level=info msg="Loading containers: done."
	May 01 03:10:36 ha-136200-m02 dockerd[662]: time="2024-05-01T03:10:36.728088050Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	May 01 03:10:36 ha-136200-m02 dockerd[662]: time="2024-05-01T03:10:36.728877947Z" level=info msg="Daemon has completed initialization"
	May 01 03:10:36 ha-136200-m02 dockerd[662]: time="2024-05-01T03:10:36.797360247Z" level=info msg="API listen on /var/run/docker.sock"
	May 01 03:10:36 ha-136200-m02 systemd[1]: Started Docker Application Container Engine.
	May 01 03:10:36 ha-136200-m02 dockerd[662]: time="2024-05-01T03:10:36.797678346Z" level=info msg="API listen on [::]:2376"
	May 01 03:11:09 ha-136200-m02 systemd[1]: Stopping Docker Application Container Engine...
	May 01 03:11:09 ha-136200-m02 dockerd[662]: time="2024-05-01T03:11:09.126942194Z" level=info msg="Processing signal 'terminated'"
	May 01 03:11:09 ha-136200-m02 dockerd[662]: time="2024-05-01T03:11:09.129527601Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	May 01 03:11:09 ha-136200-m02 dockerd[662]: time="2024-05-01T03:11:09.130085002Z" level=info msg="Daemon shutdown complete"
	May 01 03:11:09 ha-136200-m02 dockerd[662]: time="2024-05-01T03:11:09.130144502Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	May 01 03:11:09 ha-136200-m02 dockerd[662]: time="2024-05-01T03:11:09.130211703Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	May 01 03:11:10 ha-136200-m02 systemd[1]: docker.service: Deactivated successfully.
	May 01 03:11:10 ha-136200-m02 systemd[1]: Stopped Docker Application Container Engine.
	May 01 03:11:10 ha-136200-m02 systemd[1]: Starting Docker Application Container Engine...
	May 01 03:11:10 ha-136200-m02 dockerd[1326]: time="2024-05-01T03:11:10.219581053Z" level=info msg="Starting up"
	May 01 03:12:10 ha-136200-m02 dockerd[1326]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	May 01 03:12:10 ha-136200-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	May 01 03:12:10 ha-136200-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	May 01 03:12:10 ha-136200-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0501 03:12:10.309742   14328 out.go:239] * 
	W0501 03:12:10.338673   14328 out.go:239] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube_node_babbccc057e1a5fb655cbe6b9dc774ebbe7e14cc_0.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0501 03:12:10.340858   14328 out.go:177] 

** /stderr **
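The decisive line in the journalctl excerpt above is the dial timeout: the re-exec'd dockerd (pid 1326) waits for /run/containerd/containerd.sock and gives up, which matches the one-minute gap between "Starting up" at 03:11:10 and the failure at 03:12:10. A hedged triage sequence on the guest, using only standard systemd tooling:

    systemctl status containerd --no-pager        # is the system containerd actually running?
    ls -l /run/containerd/containerd.sock         # does the socket exist yet?
    sudo journalctl -u containerd --no-pager | tail -n 20
    sudo systemctl restart containerd docker      # retry once containerd is healthy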
ha_test.go:422: W0501 03:09:09.180152   14328 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
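This "Unable to resolve the current Docker CLI context" warning recurs throughout the run: the CLI's current-context metadata under C:\Users\jenkins.minikube6\.docker\contexts is missing on the Jenkins host. It is noise for the test itself; a hedged way to silence it on the host is to point the CLI back at the built-in default context:

    docker context ls            # shows which context the CLI thinks is current
    docker context use default   # resets to the built-in default context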
I0501 03:09:09.265625   14328 out.go:291] Setting OutFile to fd 988 ...
I0501 03:09:09.284966   14328 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0501 03:09:09.284966   14328 out.go:304] Setting ErrFile to fd 876...
I0501 03:09:09.285073   14328 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0501 03:09:09.302490   14328 mustload.go:65] Loading cluster: ha-136200
I0501 03:09:09.303468   14328 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0501 03:09:09.304473   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
I0501 03:09:11.382729   14328 main.go:141] libmachine: [stdout =====>] : Off

I0501 03:09:11.382729   14328 main.go:141] libmachine: [stderr =====>] : 
W0501 03:09:11.382729   14328 host.go:58] "ha-136200-m02" host status: Stopped
I0501 03:09:11.389316   14328 out.go:177] * Starting "ha-136200-m02" control-plane node in "ha-136200" cluster
I0501 03:09:11.391919   14328 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
I0501 03:09:11.392546   14328 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
I0501 03:09:11.392546   14328 cache.go:56] Caching tarball of preloaded images
I0501 03:09:11.393068   14328 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0501 03:09:11.393280   14328 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
I0501 03:09:11.393374   14328 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json ...
I0501 03:09:11.395932   14328 start.go:360] acquireMachinesLock for ha-136200-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0501 03:09:11.395932   14328 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-136200-m02"
I0501 03:09:11.395932   14328 start.go:96] Skipping create...Using existing machine configuration
I0501 03:09:11.396460   14328 fix.go:54] fixHost starting: m02
I0501 03:09:11.396528   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
I0501 03:09:13.538906   14328 main.go:141] libmachine: [stdout =====>] : Off

I0501 03:09:13.539708   14328 main.go:141] libmachine: [stderr =====>] : 
I0501 03:09:13.539708   14328 fix.go:112] recreateIfNeeded on ha-136200-m02: state=Stopped err=<nil>
W0501 03:09:13.539778   14328 fix.go:138] unexpected machine state, will restart: <nil>
I0501 03:09:13.542548   14328 out.go:177] * Restarting existing hyperv VM for "ha-136200-m02" ...
I0501 03:09:13.544988   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-136200-m02
I0501 03:09:16.673518   14328 main.go:141] libmachine: [stdout =====>] : 
I0501 03:09:16.674542   14328 main.go:141] libmachine: [stderr =====>] : 
I0501 03:09:16.674578   14328 main.go:141] libmachine: Waiting for host to start...
I0501 03:09:16.674578   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
I0501 03:09:18.996266   14328 main.go:141] libmachine: [stdout =====>] : Running

I0501 03:09:18.996266   14328 main.go:141] libmachine: [stderr =====>] : 
I0501 03:09:18.996266   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
I0501 03:09:21.630878   14328 main.go:141] libmachine: [stdout =====>] : 
I0501 03:09:21.630940   14328 main.go:141] libmachine: [stderr =====>] : 
I0501 03:09:22.632981   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
I0501 03:09:24.852192   14328 main.go:141] libmachine: [stdout =====>] : Running

I0501 03:09:24.852192   14328 main.go:141] libmachine: [stderr =====>] : 
I0501 03:09:24.852192   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
I0501 03:09:27.471038   14328 main.go:141] libmachine: [stdout =====>] : 
I0501 03:09:27.471038   14328 main.go:141] libmachine: [stderr =====>] : 
I0501 03:09:28.483266   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
I0501 03:09:30.725576   14328 main.go:141] libmachine: [stdout =====>] : Running

I0501 03:09:30.725576   14328 main.go:141] libmachine: [stderr =====>] : 
I0501 03:09:30.725576   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
I0501 03:09:33.320013   14328 main.go:141] libmachine: [stdout =====>] : 
I0501 03:09:33.320349   14328 main.go:141] libmachine: [stderr =====>] : 
I0501 03:09:34.329154   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
I0501 03:09:36.526977   14328 main.go:141] libmachine: [stdout =====>] : Running

I0501 03:09:36.526977   14328 main.go:141] libmachine: [stderr =====>] : 
I0501 03:09:36.526977   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
I0501 03:09:39.105307   14328 main.go:141] libmachine: [stdout =====>] : 
I0501 03:09:39.105307   14328 main.go:141] libmachine: [stderr =====>] : 
I0501 03:09:40.106804   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
I0501 03:09:42.334534   14328 main.go:141] libmachine: [stdout =====>] : Running

I0501 03:09:42.334631   14328 main.go:141] libmachine: [stderr =====>] : 
I0501 03:09:42.334631   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
I0501 03:09:44.994835   14328 main.go:141] libmachine: [stdout =====>] : 172.28.221.64

I0501 03:09:44.994835   14328 main.go:141] libmachine: [stderr =====>] : 
I0501 03:09:44.997930   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
I0501 03:09:47.172365   14328 main.go:141] libmachine: [stdout =====>] : Running

I0501 03:09:47.173045   14328 main.go:141] libmachine: [stderr =====>] : 
I0501 03:09:47.173045   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
I0501 03:09:49.796779   14328 main.go:141] libmachine: [stdout =====>] : 172.28.221.64

I0501 03:09:49.796779   14328 main.go:141] libmachine: [stderr =====>] : 
I0501 03:09:49.797992   14328 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json ...
I0501 03:09:49.800724   14328 machine.go:94] provisionDockerMachine start ...
I0501 03:09:49.800866   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
I0501 03:09:51.961283   14328 main.go:141] libmachine: [stdout =====>] : Running

I0501 03:09:51.961283   14328 main.go:141] libmachine: [stderr =====>] : 
I0501 03:09:51.961561   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
I0501 03:09:54.580389   14328 main.go:141] libmachine: [stdout =====>] : 172.28.221.64

I0501 03:09:54.580528   14328 main.go:141] libmachine: [stderr =====>] : 
I0501 03:09:54.587602   14328 main.go:141] libmachine: Using SSH client type: native
I0501 03:09:54.588299   14328 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.221.64 22 <nil> <nil>}
I0501 03:09:54.588299   14328 main.go:141] libmachine: About to run SSH command:
hostname
I0501 03:09:54.718800   14328 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube

I0501 03:09:54.718951   14328 buildroot.go:166] provisioning hostname "ha-136200-m02"
I0501 03:09:54.718951   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
I0501 03:09:56.920014   14328 main.go:141] libmachine: [stdout =====>] : Running

I0501 03:09:56.920014   14328 main.go:141] libmachine: [stderr =====>] : 
I0501 03:09:56.920428   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
I0501 03:09:59.512451   14328 main.go:141] libmachine: [stdout =====>] : 172.28.221.64

I0501 03:09:59.512451   14328 main.go:141] libmachine: [stderr =====>] : 
I0501 03:09:59.519157   14328 main.go:141] libmachine: Using SSH client type: native
I0501 03:09:59.519890   14328 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.221.64 22 <nil> <nil>}
I0501 03:09:59.519890   14328 main.go:141] libmachine: About to run SSH command:
sudo hostname ha-136200-m02 && echo "ha-136200-m02" | sudo tee /etc/hostname
I0501 03:09:59.671324   14328 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-136200-m02

I0501 03:09:59.671324   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
I0501 03:10:01.816303   14328 main.go:141] libmachine: [stdout =====>] : Running

I0501 03:10:01.816303   14328 main.go:141] libmachine: [stderr =====>] : 
I0501 03:10:01.816303   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
I0501 03:10:04.396797   14328 main.go:141] libmachine: [stdout =====>] : 172.28.221.64

I0501 03:10:04.396797   14328 main.go:141] libmachine: [stderr =====>] : 
I0501 03:10:04.403587   14328 main.go:141] libmachine: Using SSH client type: native
I0501 03:10:04.403996   14328 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.221.64 22 <nil> <nil>}
I0501 03:10:04.403996   14328 main.go:141] libmachine: About to run SSH command:

		if ! grep -xq '.*\sha-136200-m02' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-136200-m02/g' /etc/hosts;
			else 
				echo '127.0.1.1 ha-136200-m02' | sudo tee -a /etc/hosts; 
			fi
		fi
I0501 03:10:04.561109   14328 main.go:141] libmachine: SSH cmd err, output: <nil>: 
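The script above pins the node's own hostname to 127.0.1.1, the Debian-style convention that keeps local name resolution working without DNS. A quick spot-check (sketch):

    grep -n 'ha-136200-m02' /etc/hosts   # expect: 127.0.1.1 ha-136200-m02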
I0501 03:10:04.561109   14328 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
I0501 03:10:04.561109   14328 buildroot.go:174] setting up certificates
I0501 03:10:04.561109   14328 provision.go:84] configureAuth start
I0501 03:10:04.561698   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
I0501 03:10:06.715410   14328 main.go:141] libmachine: [stdout =====>] : Running

I0501 03:10:06.715653   14328 main.go:141] libmachine: [stderr =====>] : 
I0501 03:10:06.715744   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
I0501 03:10:09.316807   14328 main.go:141] libmachine: [stdout =====>] : 172.28.221.64

I0501 03:10:09.316919   14328 main.go:141] libmachine: [stderr =====>] : 
I0501 03:10:09.316919   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
I0501 03:10:11.463228   14328 main.go:141] libmachine: [stdout =====>] : Running

I0501 03:10:11.464007   14328 main.go:141] libmachine: [stderr =====>] : 
I0501 03:10:11.464178   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
I0501 03:10:14.109598   14328 main.go:141] libmachine: [stdout =====>] : 172.28.221.64

I0501 03:10:14.109598   14328 main.go:141] libmachine: [stderr =====>] : 
I0501 03:10:14.110091   14328 provision.go:143] copyHostCerts
I0501 03:10:14.110237   14328 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
I0501 03:10:14.110407   14328 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
I0501 03:10:14.110407   14328 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
I0501 03:10:14.111210   14328 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
I0501 03:10:14.112511   14328 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
I0501 03:10:14.112804   14328 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
I0501 03:10:14.112953   14328 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
I0501 03:10:14.113151   14328 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
I0501 03:10:14.114275   14328 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
I0501 03:10:14.114552   14328 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
I0501 03:10:14.114583   14328 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
I0501 03:10:14.114614   14328 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
I0501 03:10:14.115523   14328 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-136200-m02 san=[127.0.0.1 172.28.221.64 ha-136200-m02 localhost minikube]
I0501 03:10:14.348473   14328 provision.go:177] copyRemoteCerts
I0501 03:10:14.363221   14328 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0501 03:10:14.363221   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
I0501 03:10:16.542483   14328 main.go:141] libmachine: [stdout =====>] : Running

I0501 03:10:16.542483   14328 main.go:141] libmachine: [stderr =====>] : 
I0501 03:10:16.542483   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
I0501 03:10:19.210348   14328 main.go:141] libmachine: [stdout =====>] : 172.28.221.64

I0501 03:10:19.210429   14328 main.go:141] libmachine: [stderr =====>] : 
I0501 03:10:19.210880   14328 sshutil.go:53] new ssh client: &{IP:172.28.221.64 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\id_rsa Username:docker}
I0501 03:10:19.320052   14328 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9567292s)
I0501 03:10:19.320052   14328 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
I0501 03:10:19.320670   14328 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0501 03:10:19.374012   14328 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
I0501 03:10:19.374538   14328 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
I0501 03:10:19.428078   14328 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
I0501 03:10:19.428078   14328 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0501 03:10:19.481753   14328 provision.go:87] duration metric: took 14.9205321s to configureAuth
I0501 03:10:19.481753   14328 buildroot.go:189] setting minikube options for container-runtime
I0501 03:10:19.482774   14328 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0501 03:10:19.482774   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
I0501 03:10:21.638648   14328 main.go:141] libmachine: [stdout =====>] : Running

I0501 03:10:21.639128   14328 main.go:141] libmachine: [stderr =====>] : 
I0501 03:10:21.639128   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
I0501 03:10:24.286591   14328 main.go:141] libmachine: [stdout =====>] : 172.28.221.64

I0501 03:10:24.286591   14328 main.go:141] libmachine: [stderr =====>] : 
I0501 03:10:24.292816   14328 main.go:141] libmachine: Using SSH client type: native
I0501 03:10:24.293356   14328 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.221.64 22 <nil> <nil>}
I0501 03:10:24.293356   14328 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0501 03:10:24.420456   14328 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs

I0501 03:10:24.420456   14328 buildroot.go:70] root file system type: tmpfs
I0501 03:10:24.421571   14328 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0501 03:10:24.421966   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
I0501 03:10:26.597164   14328 main.go:141] libmachine: [stdout =====>] : Running

I0501 03:10:26.598098   14328 main.go:141] libmachine: [stderr =====>] : 
I0501 03:10:26.598229   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
I0501 03:10:29.232817   14328 main.go:141] libmachine: [stdout =====>] : 172.28.221.64

I0501 03:10:29.232817   14328 main.go:141] libmachine: [stderr =====>] : 
I0501 03:10:29.239579   14328 main.go:141] libmachine: Using SSH client type: native
I0501 03:10:29.240306   14328 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.221.64 22 <nil> <nil>}
I0501 03:10:29.240306   14328 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0501 03:10:29.408977   14328 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0501 03:10:29.409123   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
I0501 03:10:31.577363   14328 main.go:141] libmachine: [stdout =====>] : Running

I0501 03:10:31.577363   14328 main.go:141] libmachine: [stderr =====>] : 
I0501 03:10:31.577363   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
I0501 03:10:34.176649   14328 main.go:141] libmachine: [stdout =====>] : 172.28.221.64

I0501 03:10:34.176891   14328 main.go:141] libmachine: [stderr =====>] : 
I0501 03:10:34.185049   14328 main.go:141] libmachine: Using SSH client type: native
I0501 03:10:34.186240   14328 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.221.64 22 <nil> <nil>}
I0501 03:10:34.186240   14328 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0501 03:10:36.804973   14328 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.

I0501 03:10:36.804973   14328 machine.go:97] duration metric: took 47.0038338s to provisionDockerMachine
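By this point dockerd on the guest is listening on tcp://172.28.221.64:2376 with --tlsverify against the server pair just installed in /etc/docker. A stdlib Go sketch of what a successful mutual-TLS handshake against that endpoint would look like, assuming the client pair from the certs directory copied earlier (illustrative only):

// tlscheck.go: sketch of verifying the endpoint just provisioned. Because
// dockerd runs with --tlsverify, the client must present a certificate
// signed by the same CA.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"os"
)

func main() {
	ca, err := os.ReadFile("ca.pem")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(ca)

	cert, err := tls.LoadX509KeyPair("cert.pem", "key.pem")
	if err != nil {
		panic(err)
	}

	conn, err := tls.Dial("tcp", "172.28.221.64:2376", &tls.Config{
		RootCAs:      pool,                    // trust the minikube CA
		Certificates: []tls.Certificate{cert}, // client cert for --tlsverify
		ServerName:   "ha-136200-m02",         // must match a SAN on server.pem
	})
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	fmt.Println("TLS OK:", conn.ConnectionState().PeerCertificates[0].Subject)
}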
I0501 03:10:36.805081   14328 start.go:293] postStartSetup for "ha-136200-m02" (driver="hyperv")
I0501 03:10:36.805081   14328 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0501 03:10:36.819948   14328 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0501 03:10:36.819948   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
I0501 03:10:38.971944   14328 main.go:141] libmachine: [stdout =====>] : Running

I0501 03:10:38.971944   14328 main.go:141] libmachine: [stderr =====>] : 
I0501 03:10:38.972368   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
I0501 03:10:41.631842   14328 main.go:141] libmachine: [stdout =====>] : 172.28.221.64

I0501 03:10:41.632510   14328 main.go:141] libmachine: [stderr =====>] : 
I0501 03:10:41.633168   14328 sshutil.go:53] new ssh client: &{IP:172.28.221.64 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\id_rsa Username:docker}
I0501 03:10:41.746902   14328 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9269165s)
I0501 03:10:41.763148   14328 ssh_runner.go:195] Run: cat /etc/os-release
I0501 03:10:41.771154   14328 info.go:137] Remote host: Buildroot 2023.02.9
I0501 03:10:41.771154   14328 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
I0501 03:10:41.771778   14328 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
I0501 03:10:41.772998   14328 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> 142882.pem in /etc/ssl/certs
I0501 03:10:41.773150   14328 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /etc/ssl/certs/142882.pem
I0501 03:10:41.789577   14328 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0501 03:10:41.810272   14328 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /etc/ssl/certs/142882.pem (1708 bytes)
I0501 03:10:41.867449   14328 start.go:296] duration metric: took 5.0623304s for postStartSetup
I0501 03:10:41.884671   14328 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
I0501 03:10:41.884671   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
I0501 03:10:44.056082   14328 main.go:141] libmachine: [stdout =====>] : Running

I0501 03:10:44.056135   14328 main.go:141] libmachine: [stderr =====>] : 
I0501 03:10:44.056135   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
I0501 03:10:46.745669   14328 main.go:141] libmachine: [stdout =====>] : 172.28.221.64

I0501 03:10:46.745669   14328 main.go:141] libmachine: [stderr =====>] : 
I0501 03:10:46.746763   14328 sshutil.go:53] new ssh client: &{IP:172.28.221.64 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\id_rsa Username:docker}
I0501 03:10:46.865651   14328 ssh_runner.go:235] Completed: sudo ls --almost-all -1 /var/lib/minikube/backup: (4.9809428s)
I0501 03:10:46.865765   14328 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
I0501 03:10:46.879475   14328 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
I0501 03:10:46.958022   14328 fix.go:56] duration metric: took 1m35.5608452s for fixHost
I0501 03:10:46.958174   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
I0501 03:10:49.114007   14328 main.go:141] libmachine: [stdout =====>] : Running

I0501 03:10:49.114007   14328 main.go:141] libmachine: [stderr =====>] : 
I0501 03:10:49.114440   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
I0501 03:10:51.790327   14328 main.go:141] libmachine: [stdout =====>] : 172.28.221.64

I0501 03:10:51.790327   14328 main.go:141] libmachine: [stderr =====>] : 
I0501 03:10:51.796683   14328 main.go:141] libmachine: Using SSH client type: native
I0501 03:10:51.797094   14328 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.221.64 22 <nil> <nil>}
I0501 03:10:51.797094   14328 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0501 03:10:51.926038   14328 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714533051.921420814

I0501 03:10:51.926112   14328 fix.go:216] guest clock: 1714533051.921420814
I0501 03:10:51.926112   14328 fix.go:229] Guest: 2024-05-01 03:10:51.921420814 +0000 UTC Remote: 2024-05-01 03:10:46.9581742 +0000 UTC m=+97.885286701 (delta=4.963246614s)
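fix.go reads the guest clock over SSH (the date +%s.%N result above), compares it to the controller's wall clock, and repairs the skew when it is too large. The arithmetic from this log as a runnable Go sketch:

// clockfix.go: the skew arithmetic from the two fix.go lines above,
// reproduced with the values from this log.
package main

import (
	"fmt"
	"time"
)

func main() {
	guest := time.Unix(0, 1714533051921420814) // guest clock via `date +%s.%N`
	remote := time.Date(2024, 5, 1, 3, 10, 46, 958174200, time.UTC)

	fmt.Println(guest.Sub(remote)) // 4.963246614s, the delta printed above
	// Past minikube's skew threshold the guest clock gets rewritten; the
	// next SSH command in the log does exactly that: sudo date -s @1714533051
}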
I0501 03:10:51.926254   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
I0501 03:10:54.098200   14328 main.go:141] libmachine: [stdout =====>] : Running

I0501 03:10:54.098200   14328 main.go:141] libmachine: [stderr =====>] : 
I0501 03:10:54.098859   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
I0501 03:10:56.739928   14328 main.go:141] libmachine: [stdout =====>] : 172.28.221.64

I0501 03:10:56.739928   14328 main.go:141] libmachine: [stderr =====>] : 
I0501 03:10:56.746442   14328 main.go:141] libmachine: Using SSH client type: native
I0501 03:10:56.747230   14328 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.221.64 22 <nil> <nil>}
I0501 03:10:56.747230   14328 main.go:141] libmachine: About to run SSH command:
sudo date -s @1714533051
I0501 03:10:56.905944   14328 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed May  1 03:10:51 UTC 2024

I0501 03:10:56.906031   14328 fix.go:236] clock set: Wed May  1 03:10:51 UTC 2024
(err=<nil>)
I0501 03:10:56.906031   14328 start.go:83] releasing machines lock for "ha-136200-m02", held for 1m45.509308s
I0501 03:10:56.906335   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
I0501 03:10:59.038924   14328 main.go:141] libmachine: [stdout =====>] : Running

I0501 03:10:59.038924   14328 main.go:141] libmachine: [stderr =====>] : 
I0501 03:10:59.038924   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
I0501 03:11:01.722212   14328 main.go:141] libmachine: [stdout =====>] : 172.28.221.64

I0501 03:11:01.722295   14328 main.go:141] libmachine: [stderr =====>] : 
I0501 03:11:01.728036   14328 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0501 03:11:01.728168   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
I0501 03:11:01.741977   14328 ssh_runner.go:195] Run: systemctl --version
I0501 03:11:01.741977   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
I0501 03:11:04.079632   14328 main.go:141] libmachine: [stdout =====>] : Running

I0501 03:11:04.079632   14328 main.go:141] libmachine: [stdout =====>] : Running

I0501 03:11:04.079632   14328 main.go:141] libmachine: [stderr =====>] : 
I0501 03:11:04.079632   14328 main.go:141] libmachine: [stderr =====>] : 
I0501 03:11:04.079632   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
I0501 03:11:04.079632   14328 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
I0501 03:11:06.799803   14328 main.go:141] libmachine: [stdout =====>] : 172.28.221.64

I0501 03:11:06.799803   14328 main.go:141] libmachine: [stderr =====>] : 
I0501 03:11:06.800954   14328 sshutil.go:53] new ssh client: &{IP:172.28.221.64 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\id_rsa Username:docker}
I0501 03:11:06.829812   14328 main.go:141] libmachine: [stdout =====>] : 172.28.221.64

I0501 03:11:06.829812   14328 main.go:141] libmachine: [stderr =====>] : 
I0501 03:11:06.830256   14328 sshutil.go:53] new ssh client: &{IP:172.28.221.64 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\id_rsa Username:docker}
I0501 03:11:06.896563   14328 ssh_runner.go:235] Completed: systemctl --version: (5.1545465s)
I0501 03:11:06.910440   14328 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0501 03:11:07.018429   14328 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.2903537s)
W0501 03:11:07.018429   14328 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0501 03:11:07.033482   14328 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0501 03:11:07.066471   14328 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0501 03:11:07.066603   14328 start.go:494] detecting cgroup driver to use...
I0501 03:11:07.066959   14328 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0501 03:11:07.130367   14328 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0501 03:11:07.165074   14328 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0501 03:11:07.186833   14328 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0501 03:11:07.199999   14328 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0501 03:11:07.237815   14328 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0501 03:11:07.286239   14328 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0501 03:11:07.318189   14328 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0501 03:11:07.360994   14328 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0501 03:11:07.400052   14328 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0501 03:11:07.437550   14328 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0501 03:11:07.477338   14328 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0501 03:11:07.516876   14328 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0501 03:11:07.553784   14328 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0501 03:11:07.590691   14328 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0501 03:11:07.834580   14328 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0501 03:11:07.871016   14328 start.go:494] detecting cgroup driver to use...
I0501 03:11:07.886184   14328 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0501 03:11:07.930542   14328 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0501 03:11:07.967897   14328 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0501 03:11:08.023315   14328 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0501 03:11:08.066397   14328 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0501 03:11:08.112784   14328 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0501 03:11:08.183336   14328 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0501 03:11:08.213725   14328 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0501 03:11:08.264439   14328 ssh_runner.go:195] Run: which cri-dockerd
I0501 03:11:08.283437   14328 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0501 03:11:08.303461   14328 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0501 03:11:08.349733   14328 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0501 03:11:08.585195   14328 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0501 03:11:08.821748   14328 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0501 03:11:08.822062   14328 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
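The daemon.json written here configures docker for the cgroupfs driver chosen above; the log records only its size (130 bytes), not its contents. A Go sketch of a plausible payload, assuming docker's standard exec-opts key (the exact keys minikube includes are an assumption):

// daemonjson.go: build a daemon.json selecting the cgroupfs cgroup driver.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	cfg := map[string]any{
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
	}
	out, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // destined for /etc/docker/daemon.json
}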
I0501 03:11:08.872940   14328 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0501 03:11:09.096269   14328 ssh_runner.go:195] Run: sudo systemctl restart docker
I0501 03:12:10.253000   14328 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1562724s)
I0501 03:12:10.267012   14328 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
I0501 03:12:10.306767   14328 out.go:177] 
W0501 03:12:10.308733   14328 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
stdout:

stderr:
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.

sudo journalctl --no-pager -u docker:
-- stdout --
May 01 03:10:34 ha-136200-m02 systemd[1]: Starting Docker Application Container Engine...
May 01 03:10:34 ha-136200-m02 dockerd[662]: time="2024-05-01T03:10:34.865467463Z" level=info msg="Starting up"
May 01 03:10:34 ha-136200-m02 dockerd[662]: time="2024-05-01T03:10:34.867757945Z" level=info msg="containerd not running, starting managed containerd"
May 01 03:10:34 ha-136200-m02 dockerd[662]: time="2024-05-01T03:10:34.871893211Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=669
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.911064995Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.942624741Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.942746140Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.942834439Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.942854339Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.943573933Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.943690932Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.943938730Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.944116529Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.944142628Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.944155628Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.944791323Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.945594517Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.948739191Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.948874690Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.949095289Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.949197388Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.949757983Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.949888182Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.949908782Z" level=info msg="metadata content store policy set" policy=shared
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.968841429Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.968999528Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.969191826Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.969333525Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.969359525Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.969452524Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.970622815Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.970823813Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.970906913Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.971131311Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.971214310Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.971295609Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.971421608Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.971515208Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.971608107Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.971679206Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.971737606Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.971796305Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.971994104Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.972110503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.972187202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.972520800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.972733898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.972943996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.973091395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.973171894Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.973232394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.973349093Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.973371793Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.973388693Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.973404792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.973430792Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.973459992Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.973475992Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.973491692Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.973554191Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.973594591Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.973608691Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.973622291Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.973689790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.973708190Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.973720590Z" level=info msg="NRI interface is disabled by configuration."
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.974065087Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.974149686Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.974200886Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.974273485Z" level=info msg="containerd successfully booted in 0.066086s"
May 01 03:10:35 ha-136200-m02 dockerd[662]: time="2024-05-01T03:10:35.934441781Z" level=info msg="[graphdriver] trying configured driver: overlay2"
May 01 03:10:36 ha-136200-m02 dockerd[662]: time="2024-05-01T03:10:36.177932424Z" level=info msg="Loading containers: start."
May 01 03:10:36 ha-136200-m02 dockerd[662]: time="2024-05-01T03:10:36.601139506Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
May 01 03:10:36 ha-136200-m02 dockerd[662]: time="2024-05-01T03:10:36.692838705Z" level=info msg="Loading containers: done."
May 01 03:10:36 ha-136200-m02 dockerd[662]: time="2024-05-01T03:10:36.728088050Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
May 01 03:10:36 ha-136200-m02 dockerd[662]: time="2024-05-01T03:10:36.728877947Z" level=info msg="Daemon has completed initialization"
May 01 03:10:36 ha-136200-m02 dockerd[662]: time="2024-05-01T03:10:36.797360247Z" level=info msg="API listen on /var/run/docker.sock"
May 01 03:10:36 ha-136200-m02 systemd[1]: Started Docker Application Container Engine.
May 01 03:10:36 ha-136200-m02 dockerd[662]: time="2024-05-01T03:10:36.797678346Z" level=info msg="API listen on [::]:2376"
May 01 03:11:09 ha-136200-m02 systemd[1]: Stopping Docker Application Container Engine...
May 01 03:11:09 ha-136200-m02 dockerd[662]: time="2024-05-01T03:11:09.126942194Z" level=info msg="Processing signal 'terminated'"
May 01 03:11:09 ha-136200-m02 dockerd[662]: time="2024-05-01T03:11:09.129527601Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
May 01 03:11:09 ha-136200-m02 dockerd[662]: time="2024-05-01T03:11:09.130085002Z" level=info msg="Daemon shutdown complete"
May 01 03:11:09 ha-136200-m02 dockerd[662]: time="2024-05-01T03:11:09.130144502Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
May 01 03:11:09 ha-136200-m02 dockerd[662]: time="2024-05-01T03:11:09.130211703Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
May 01 03:11:10 ha-136200-m02 systemd[1]: docker.service: Deactivated successfully.
May 01 03:11:10 ha-136200-m02 systemd[1]: Stopped Docker Application Container Engine.
May 01 03:11:10 ha-136200-m02 systemd[1]: Starting Docker Application Container Engine...
May 01 03:11:10 ha-136200-m02 dockerd[1326]: time="2024-05-01T03:11:10.219581053Z" level=info msg="Starting up"
May 01 03:12:10 ha-136200-m02 dockerd[1326]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
May 01 03:12:10 ha-136200-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
May 01 03:12:10 ha-136200-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
May 01 03:12:10 ha-136200-m02 systemd[1]: Failed to start Docker Application Container Engine.

-- /stdout --
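The decisive journal line is dockerd[1326] timing out while dialing /run/containerd/containerd.sock: "Starting up" at 03:11:10, failure at 03:12:10, exactly the 60s dial deadline. The system containerd service was stopped at 03:11:07 earlier in this run, so one plausible reading is that a leftover socket file made the restarted dockerd wait on the system containerd rather than spawning its own managed one (as it did on first boot), and the dial never completed. A trivial Go probe of that socket:

// sockprobe.go: probe the socket the failed dockerd was dialing (path from
// the journal above; the timeout value here is arbitrary).
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("unix", "/run/containerd/containerd.sock", 10*time.Second)
	if err != nil {
		// On the broken node this is the dockerd failure mode: nothing is
		// serving the socket, so the dial times out.
		fmt.Println("containerd not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("containerd socket is up")
}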
X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
stdout:

stderr:
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.

sudo journalctl --no-pager -u docker:
-- stdout --
May 01 03:10:34 ha-136200-m02 systemd[1]: Starting Docker Application Container Engine...
May 01 03:10:34 ha-136200-m02 dockerd[662]: time="2024-05-01T03:10:34.865467463Z" level=info msg="Starting up"
May 01 03:10:34 ha-136200-m02 dockerd[662]: time="2024-05-01T03:10:34.867757945Z" level=info msg="containerd not running, starting managed containerd"
May 01 03:10:34 ha-136200-m02 dockerd[662]: time="2024-05-01T03:10:34.871893211Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=669
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.911064995Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.942624741Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.942746140Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.942834439Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.942854339Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.943573933Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.943690932Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.943938730Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.944116529Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.944142628Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.944155628Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.944791323Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.945594517Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.948739191Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.948874690Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.949095289Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.949197388Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.949757983Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.949888182Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.949908782Z" level=info msg="metadata content store policy set" policy=shared
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.968841429Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.968999528Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.969191826Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.969333525Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.969359525Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.969452524Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.970622815Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.970823813Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.970906913Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.971131311Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.971214310Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.971295609Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.971421608Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.971515208Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.971608107Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.971679206Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.971737606Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.971796305Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.971994104Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.972110503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.972187202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.972520800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.972733898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.972943996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.973091395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.973171894Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.973232394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.973349093Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.973371793Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.973388693Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.973404792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.973430792Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.973459992Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.973475992Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.973491692Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.973554191Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.973594591Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.973608691Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.973622291Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.973689790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.973708190Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.973720590Z" level=info msg="NRI interface is disabled by configuration."
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.974065087Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.974149686Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.974200886Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
May 01 03:10:34 ha-136200-m02 dockerd[669]: time="2024-05-01T03:10:34.974273485Z" level=info msg="containerd successfully booted in 0.066086s"
May 01 03:10:35 ha-136200-m02 dockerd[662]: time="2024-05-01T03:10:35.934441781Z" level=info msg="[graphdriver] trying configured driver: overlay2"
May 01 03:10:36 ha-136200-m02 dockerd[662]: time="2024-05-01T03:10:36.177932424Z" level=info msg="Loading containers: start."
May 01 03:10:36 ha-136200-m02 dockerd[662]: time="2024-05-01T03:10:36.601139506Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
May 01 03:10:36 ha-136200-m02 dockerd[662]: time="2024-05-01T03:10:36.692838705Z" level=info msg="Loading containers: done."
May 01 03:10:36 ha-136200-m02 dockerd[662]: time="2024-05-01T03:10:36.728088050Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
May 01 03:10:36 ha-136200-m02 dockerd[662]: time="2024-05-01T03:10:36.728877947Z" level=info msg="Daemon has completed initialization"
May 01 03:10:36 ha-136200-m02 dockerd[662]: time="2024-05-01T03:10:36.797360247Z" level=info msg="API listen on /var/run/docker.sock"
May 01 03:10:36 ha-136200-m02 systemd[1]: Started Docker Application Container Engine.
May 01 03:10:36 ha-136200-m02 dockerd[662]: time="2024-05-01T03:10:36.797678346Z" level=info msg="API listen on [::]:2376"
May 01 03:11:09 ha-136200-m02 systemd[1]: Stopping Docker Application Container Engine...
May 01 03:11:09 ha-136200-m02 dockerd[662]: time="2024-05-01T03:11:09.126942194Z" level=info msg="Processing signal 'terminated'"
May 01 03:11:09 ha-136200-m02 dockerd[662]: time="2024-05-01T03:11:09.129527601Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
May 01 03:11:09 ha-136200-m02 dockerd[662]: time="2024-05-01T03:11:09.130085002Z" level=info msg="Daemon shutdown complete"
May 01 03:11:09 ha-136200-m02 dockerd[662]: time="2024-05-01T03:11:09.130144502Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
May 01 03:11:09 ha-136200-m02 dockerd[662]: time="2024-05-01T03:11:09.130211703Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
May 01 03:11:10 ha-136200-m02 systemd[1]: docker.service: Deactivated successfully.
May 01 03:11:10 ha-136200-m02 systemd[1]: Stopped Docker Application Container Engine.
May 01 03:11:10 ha-136200-m02 systemd[1]: Starting Docker Application Container Engine...
May 01 03:11:10 ha-136200-m02 dockerd[1326]: time="2024-05-01T03:11:10.219581053Z" level=info msg="Starting up"
May 01 03:12:10 ha-136200-m02 dockerd[1326]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
May 01 03:12:10 ha-136200-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
May 01 03:12:10 ha-136200-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
May 01 03:12:10 ha-136200-m02 systemd[1]: Failed to start Docker Application Container Engine.

                                                
                                                
-- /stdout --
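The journal above isolates the root cause: after systemd restarted docker.service at 03:11:10, the new dockerd (pid 1326) waited out the full 60-second dial timeout on /run/containerd/containerd.sock and exited, so the unit failed before the secondary control plane could come back. A minimal way to inspect the runtime on the affected node, assuming the guest exposes the containerd systemd unit seen in the journal, is:

	out/minikube-windows-amd64.exe -p ha-136200 ssh -n m02 -- sudo systemctl status containerd --no-pager
	out/minikube-windows-amd64.exe -p ha-136200 ssh -n m02 -- sudo journalctl -u containerd -n 50 --no-pager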
W0501 03:12:10.309742   14328 out.go:239] * 
W0501 03:12:10.338673   14328 out.go:239] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                      │
│    * If the above advice does not help, please let us know:                                                          │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
│                                                                                                                      │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
│    * Please also attach the following file to the GitHub issue:                                                      │
│    * - C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube_node_babbccc057e1a5fb655cbe6b9dc774ebbe7e14cc_0.log    │
│                                                                                                                      │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0501 03:12:10.340858   14328 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-windows-amd64.exe -p ha-136200 node start m02 -v=7 --alsologtostderr": exit status 90
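If the containerd unit can be brought back, the failing step is reproducible with the same arguments the test used; a sketch, assuming the docker and containerd units named in the journal above:

	out/minikube-windows-amd64.exe -p ha-136200 ssh -n m02 -- sudo systemctl restart containerd docker
	out/minikube-windows-amd64.exe -p ha-136200 node start m02 -v=7 --alsologtostderr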
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-136200 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-136200 status -v=7 --alsologtostderr: exit status 2 (48.5838934s)

                                                
                                                
-- stdout --
	ha-136200
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-136200-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-136200-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-136200-m04
	type: Worker
	host: Running
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0501 03:12:10.931686   13964 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0501 03:12:11.032335   13964 out.go:291] Setting OutFile to fd 916 ...
	I0501 03:12:11.032972   13964 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:12:11.032972   13964 out.go:304] Setting ErrFile to fd 724...
	I0501 03:12:11.032972   13964 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:12:11.050924   13964 out.go:298] Setting JSON to false
	I0501 03:12:11.050924   13964 mustload.go:65] Loading cluster: ha-136200
	I0501 03:12:11.050924   13964 notify.go:220] Checking for updates...
	I0501 03:12:11.051926   13964 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 03:12:11.051926   13964 status.go:255] checking status of ha-136200 ...
	I0501 03:12:11.052921   13964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 03:12:13.229381   13964 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:12:13.229381   13964 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:12:13.229381   13964 status.go:330] ha-136200 host status = "Running" (err=<nil>)
	I0501 03:12:13.229381   13964 host.go:66] Checking if "ha-136200" exists ...
	I0501 03:12:13.230198   13964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 03:12:15.439614   13964 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:12:15.439614   13964 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:12:15.439690   13964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 03:12:18.145542   13964 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 03:12:18.146573   13964 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:12:18.146597   13964 host.go:66] Checking if "ha-136200" exists ...
	I0501 03:12:18.161922   13964 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 03:12:18.161922   13964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 03:12:20.349992   13964 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:12:20.350074   13964 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:12:20.350134   13964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 03:12:22.970102   13964 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 03:12:22.970102   13964 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:12:22.970747   13964 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 03:12:23.073235   13964 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.9112764s)
	I0501 03:12:23.088711   13964 ssh_runner.go:195] Run: systemctl --version
	I0501 03:12:23.118045   13964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:12:23.148469   13964 kubeconfig.go:125] found "ha-136200" server: "https://172.28.223.254:8443"
	I0501 03:12:23.149332   13964 api_server.go:166] Checking apiserver status ...
	I0501 03:12:23.164172   13964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:12:23.211457   13964 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2105/cgroup
	W0501 03:12:23.238358   13964 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2105/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0501 03:12:23.265921   13964 ssh_runner.go:195] Run: ls
	I0501 03:12:23.278940   13964 api_server.go:253] Checking apiserver healthz at https://172.28.223.254:8443/healthz ...
	I0501 03:12:23.286545   13964 api_server.go:279] https://172.28.223.254:8443/healthz returned 200:
	ok
	I0501 03:12:23.287469   13964 status.go:422] ha-136200 apiserver status = Running (err=<nil>)
	I0501 03:12:23.287469   13964 status.go:257] ha-136200 status: &{Name:ha-136200 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0501 03:12:23.287530   13964 status.go:255] checking status of ha-136200-m02 ...
	I0501 03:12:23.288601   13964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 03:12:25.455462   13964 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:12:25.455763   13964 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:12:25.455763   13964 status.go:330] ha-136200-m02 host status = "Running" (err=<nil>)
	I0501 03:12:25.455763   13964 host.go:66] Checking if "ha-136200-m02" exists ...
	I0501 03:12:25.456682   13964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 03:12:27.671933   13964 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:12:27.671933   13964 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:12:27.672083   13964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 03:12:30.307264   13964 main.go:141] libmachine: [stdout =====>] : 172.28.221.64
	
	I0501 03:12:30.308171   13964 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:12:30.308171   13964 host.go:66] Checking if "ha-136200-m02" exists ...
	I0501 03:12:30.322498   13964 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 03:12:30.322498   13964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 03:12:32.522023   13964 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:12:32.522023   13964 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:12:32.522809   13964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 03:12:35.145181   13964 main.go:141] libmachine: [stdout =====>] : 172.28.221.64
	
	I0501 03:12:35.145308   13964 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:12:35.145454   13964 sshutil.go:53] new ssh client: &{IP:172.28.221.64 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\id_rsa Username:docker}
	I0501 03:12:35.237308   13964 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.9147733s)
	I0501 03:12:35.252440   13964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:12:35.280995   13964 kubeconfig.go:125] found "ha-136200" server: "https://172.28.223.254:8443"
	I0501 03:12:35.280995   13964 api_server.go:166] Checking apiserver status ...
	I0501 03:12:35.295486   13964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0501 03:12:35.323820   13964 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0501 03:12:35.323914   13964 status.go:422] ha-136200-m02 apiserver status = Stopped (err=<nil>)
	I0501 03:12:35.323914   13964 status.go:257] ha-136200-m02 status: &{Name:ha-136200-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0501 03:12:35.323999   13964 status.go:255] checking status of ha-136200-m03 ...
	I0501 03:12:35.324657   13964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 03:12:37.492438   13964 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:12:37.493487   13964 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:12:37.493487   13964 status.go:330] ha-136200-m03 host status = "Running" (err=<nil>)
	I0501 03:12:37.493612   13964 host.go:66] Checking if "ha-136200-m03" exists ...
	I0501 03:12:37.495060   13964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 03:12:39.730272   13964 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:12:39.730272   13964 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:12:39.730425   13964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 03:12:42.383033   13964 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 03:12:42.383033   13964 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:12:42.384005   13964 host.go:66] Checking if "ha-136200-m03" exists ...
	I0501 03:12:42.398999   13964 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 03:12:42.398999   13964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 03:12:44.522721   13964 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:12:44.522788   13964 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:12:44.522788   13964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 03:12:47.160872   13964 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 03:12:47.161109   13964 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:12:47.161751   13964 sshutil.go:53] new ssh client: &{IP:172.28.216.62 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\id_rsa Username:docker}
	I0501 03:12:47.257969   13964 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.8589342s)
	I0501 03:12:47.274880   13964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:12:47.309337   13964 kubeconfig.go:125] found "ha-136200" server: "https://172.28.223.254:8443"
	I0501 03:12:47.309411   13964 api_server.go:166] Checking apiserver status ...
	I0501 03:12:47.323421   13964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:12:47.369795   13964 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2199/cgroup
	W0501 03:12:47.396784   13964 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2199/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0501 03:12:47.409730   13964 ssh_runner.go:195] Run: ls
	I0501 03:12:47.418328   13964 api_server.go:253] Checking apiserver healthz at https://172.28.223.254:8443/healthz ...
	I0501 03:12:47.428874   13964 api_server.go:279] https://172.28.223.254:8443/healthz returned 200:
	ok
	I0501 03:12:47.429730   13964 status.go:422] ha-136200-m03 apiserver status = Running (err=<nil>)
	I0501 03:12:47.429730   13964 status.go:257] ha-136200-m03 status: &{Name:ha-136200-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0501 03:12:47.429730   13964 status.go:255] checking status of ha-136200-m04 ...
	I0501 03:12:47.430323   13964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m04 ).state
	I0501 03:12:49.580755   13964 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:12:49.581733   13964 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:12:49.581733   13964 status.go:330] ha-136200-m04 host status = "Running" (err=<nil>)
	I0501 03:12:49.582012   13964 host.go:66] Checking if "ha-136200-m04" exists ...
	I0501 03:12:49.582605   13964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m04 ).state
	I0501 03:12:51.791373   13964 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:12:51.791373   13964 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:12:51.791373   13964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m04 ).networkadapters[0]).ipaddresses[0]
	I0501 03:12:54.432813   13964 main.go:141] libmachine: [stdout =====>] : 172.28.217.174
	
	I0501 03:12:54.433021   13964 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:12:54.433021   13964 host.go:66] Checking if "ha-136200-m04" exists ...
	I0501 03:12:54.450516   13964 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 03:12:54.450516   13964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m04 ).state
	I0501 03:12:56.598618   13964 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:12:56.598679   13964 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:12:56.598679   13964 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m04 ).networkadapters[0]).ipaddresses[0]
	I0501 03:12:59.191930   13964 main.go:141] libmachine: [stdout =====>] : 172.28.217.174
	
	I0501 03:12:59.191930   13964 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:12:59.192342   13964 sshutil.go:53] new ssh client: &{IP:172.28.217.174 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m04\id_rsa Username:docker}
	I0501 03:12:59.294523   13964 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.8439712s)
	I0501 03:12:59.311577   13964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:12:59.340599   13964 status.go:257] ha-136200-m04 status: &{Name:ha-136200-m04 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
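The trace shows how status classifies each node: over SSH it runs "sudo systemctl is-active --quiet service kubelet" for the kubelet and "sudo pgrep -xnf kube-apiserver.*minikube.*" for the apiserver, so "Stopped" for ha-136200-m02 means no matching kube-apiserver process was found, not that the VM is down. The same spot check can be made by hand (the pgrep pattern is quoted here for the shell):

	out/minikube-windows-amd64.exe -p ha-136200 ssh -n m02 -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'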
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-136200 status -v=7 --alsologtostderr
E0501 03:13:38.002589   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\client.crt: The system cannot find the path specified.
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-136200 status -v=7 --alsologtostderr: exit status 2 (48.1716727s)

                                                
                                                
-- stdout --
	ha-136200
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-136200-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-136200-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-136200-m04
	type: Worker
	host: Running
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0501 03:13:00.890726    6476 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0501 03:13:00.985078    6476 out.go:291] Setting OutFile to fd 548 ...
	I0501 03:13:00.985862    6476 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:13:00.985862    6476 out.go:304] Setting ErrFile to fd 1020...
	I0501 03:13:00.985862    6476 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:13:01.004174    6476 out.go:298] Setting JSON to false
	I0501 03:13:01.004269    6476 mustload.go:65] Loading cluster: ha-136200
	I0501 03:13:01.004350    6476 notify.go:220] Checking for updates...
	I0501 03:13:01.005334    6476 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 03:13:01.005334    6476 status.go:255] checking status of ha-136200 ...
	I0501 03:13:01.006500    6476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 03:13:03.149717    6476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:13:03.150440    6476 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:13:03.150551    6476 status.go:330] ha-136200 host status = "Running" (err=<nil>)
	I0501 03:13:03.150551    6476 host.go:66] Checking if "ha-136200" exists ...
	I0501 03:13:03.151315    6476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 03:13:05.417117    6476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:13:05.417117    6476 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:13:05.417117    6476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 03:13:08.136628    6476 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 03:13:08.137208    6476 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:13:08.137385    6476 host.go:66] Checking if "ha-136200" exists ...
	I0501 03:13:08.154502    6476 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 03:13:08.154502    6476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 03:13:10.275535    6476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:13:10.275756    6476 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:13:10.275866    6476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 03:13:12.847657    6476 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 03:13:12.848011    6476 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:13:12.848589    6476 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 03:13:12.953479    6476 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.7989418s)
	I0501 03:13:12.968842    6476 ssh_runner.go:195] Run: systemctl --version
	I0501 03:13:12.995943    6476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:13:13.026931    6476 kubeconfig.go:125] found "ha-136200" server: "https://172.28.223.254:8443"
	I0501 03:13:13.027035    6476 api_server.go:166] Checking apiserver status ...
	I0501 03:13:13.042019    6476 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:13:13.086716    6476 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2105/cgroup
	W0501 03:13:13.107794    6476 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2105/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0501 03:13:13.123776    6476 ssh_runner.go:195] Run: ls
	I0501 03:13:13.132638    6476 api_server.go:253] Checking apiserver healthz at https://172.28.223.254:8443/healthz ...
	I0501 03:13:13.140233    6476 api_server.go:279] https://172.28.223.254:8443/healthz returned 200:
	ok
	I0501 03:13:13.140233    6476 status.go:422] ha-136200 apiserver status = Running (err=<nil>)
	I0501 03:13:13.140233    6476 status.go:257] ha-136200 status: &{Name:ha-136200 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0501 03:13:13.140233    6476 status.go:255] checking status of ha-136200-m02 ...
	I0501 03:13:13.141484    6476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 03:13:15.297165    6476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:13:15.297220    6476 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:13:15.297220    6476 status.go:330] ha-136200-m02 host status = "Running" (err=<nil>)
	I0501 03:13:15.297220    6476 host.go:66] Checking if "ha-136200-m02" exists ...
	I0501 03:13:15.297917    6476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 03:13:17.518187    6476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:13:17.518187    6476 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:13:17.518187    6476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 03:13:20.186032    6476 main.go:141] libmachine: [stdout =====>] : 172.28.221.64
	
	I0501 03:13:20.186149    6476 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:13:20.186149    6476 host.go:66] Checking if "ha-136200-m02" exists ...
	I0501 03:13:20.202750    6476 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 03:13:20.202750    6476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 03:13:22.377517    6476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:13:22.377517    6476 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:13:22.378421    6476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 03:13:25.004471    6476 main.go:141] libmachine: [stdout =====>] : 172.28.221.64
	
	I0501 03:13:25.004471    6476 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:13:25.004471    6476 sshutil.go:53] new ssh client: &{IP:172.28.221.64 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\id_rsa Username:docker}
	I0501 03:13:25.108875    6476 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.9060888s)
	I0501 03:13:25.125619    6476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:13:25.155870    6476 kubeconfig.go:125] found "ha-136200" server: "https://172.28.223.254:8443"
	I0501 03:13:25.155984    6476 api_server.go:166] Checking apiserver status ...
	I0501 03:13:25.171546    6476 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0501 03:13:25.196538    6476 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0501 03:13:25.196538    6476 status.go:422] ha-136200-m02 apiserver status = Stopped (err=<nil>)
	I0501 03:13:25.196657    6476 status.go:257] ha-136200-m02 status: &{Name:ha-136200-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0501 03:13:25.196697    6476 status.go:255] checking status of ha-136200-m03 ...
	I0501 03:13:25.197461    6476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 03:13:27.333705    6476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:13:27.333705    6476 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:13:27.334380    6476 status.go:330] ha-136200-m03 host status = "Running" (err=<nil>)
	I0501 03:13:27.334380    6476 host.go:66] Checking if "ha-136200-m03" exists ...
	I0501 03:13:27.335485    6476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 03:13:29.546607    6476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:13:29.547026    6476 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:13:29.547094    6476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 03:13:32.123701    6476 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 03:13:32.123701    6476 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:13:32.123701    6476 host.go:66] Checking if "ha-136200-m03" exists ...
	I0501 03:13:32.139221    6476 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 03:13:32.139221    6476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 03:13:34.241679    6476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:13:34.241758    6476 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:13:34.241837    6476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 03:13:36.834676    6476 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 03:13:36.835677    6476 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:13:36.837231    6476 sshutil.go:53] new ssh client: &{IP:172.28.216.62 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\id_rsa Username:docker}
	I0501 03:13:36.937008    6476 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.7976688s)
	I0501 03:13:36.951075    6476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:13:36.979648    6476 kubeconfig.go:125] found "ha-136200" server: "https://172.28.223.254:8443"
	I0501 03:13:36.979727    6476 api_server.go:166] Checking apiserver status ...
	I0501 03:13:36.993547    6476 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:13:37.035074    6476 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2199/cgroup
	W0501 03:13:37.057879    6476 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2199/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0501 03:13:37.070851    6476 ssh_runner.go:195] Run: ls
	I0501 03:13:37.079242    6476 api_server.go:253] Checking apiserver healthz at https://172.28.223.254:8443/healthz ...
	I0501 03:13:37.086488    6476 api_server.go:279] https://172.28.223.254:8443/healthz returned 200:
	ok
	I0501 03:13:37.086488    6476 status.go:422] ha-136200-m03 apiserver status = Running (err=<nil>)
	I0501 03:13:37.087051    6476 status.go:257] ha-136200-m03 status: &{Name:ha-136200-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0501 03:13:37.087051    6476 status.go:255] checking status of ha-136200-m04 ...
	I0501 03:13:37.087675    6476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m04 ).state
	I0501 03:13:39.246398    6476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:13:39.246460    6476 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:13:39.246460    6476 status.go:330] ha-136200-m04 host status = "Running" (err=<nil>)
	I0501 03:13:39.246460    6476 host.go:66] Checking if "ha-136200-m04" exists ...
	I0501 03:13:39.247439    6476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m04 ).state
	I0501 03:13:41.427112    6476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:13:41.427112    6476 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:13:41.427224    6476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m04 ).networkadapters[0]).ipaddresses[0]
	I0501 03:13:43.999410    6476 main.go:141] libmachine: [stdout =====>] : 172.28.217.174
	
	I0501 03:13:43.999410    6476 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:13:43.999410    6476 host.go:66] Checking if "ha-136200-m04" exists ...
	I0501 03:13:44.015143    6476 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 03:13:44.015143    6476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m04 ).state
	I0501 03:13:46.163726    6476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:13:46.163920    6476 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:13:46.164006    6476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m04 ).networkadapters[0]).ipaddresses[0]
	I0501 03:13:48.752637    6476 main.go:141] libmachine: [stdout =====>] : 172.28.217.174
	
	I0501 03:13:48.752637    6476 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:13:48.753769    6476 sshutil.go:53] new ssh client: &{IP:172.28.217.174 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m04\id_rsa Username:docker}
	I0501 03:13:48.850609    6476 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.8354297s)
	I0501 03:13:48.864685    6476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:13:48.892848    6476 status.go:257] ha-136200-m04 status: &{Name:ha-136200-m04 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-windows-amd64.exe -p ha-136200 status -v=7 --alsologtostderr" : exit status 2
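minikube status exits nonzero when any node's components are not running, which is why the test treats exit status 2 as a failure even though all four hosts report Running. For scripting the same check, status also offers machine-readable output (a sketch using the documented --output flag):

	out/minikube-windows-amd64.exe -p ha-136200 status --output json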
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-136200 -n ha-136200
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-136200 -n ha-136200: (12.364548s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-136200 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-136200 logs -n 25: (8.8882292s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| delete  | -p functional-869300                 | functional-869300 | minikube6\jenkins | v1.33.0 | 01 May 24 02:46 UTC | 01 May 24 02:47 UTC |
	| start   | -p ha-136200 --wait=true             | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:47 UTC | 01 May 24 02:58 UTC |
	|         | --memory=2200 --ha                   |                   |                   |         |                     |                     |
	|         | -v=7 --alsologtostderr               |                   |                   |         |                     |                     |
	|         | --driver=hyperv                      |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- apply -f             | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | ./testdata/ha/ha-pod-dns-test.yaml   |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- rollout status       | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | deployment/busybox                   |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- get pods -o          | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- get pods -o          | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-2gr4g --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-6mlkh --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-pc6wt --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-2gr4g --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-6mlkh --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-pc6wt --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-2gr4g -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-6mlkh -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-pc6wt -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- get pods -o          | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-2gr4g              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC |                     |
	|         | busybox-fc5497c4f-2gr4g -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.28.208.1            |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-6mlkh              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC |                     |
	|         | busybox-fc5497c4f-6mlkh -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.28.208.1            |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-pc6wt              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC |                     |
	|         | busybox-fc5497c4f-pc6wt -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.28.208.1            |                   |                   |         |                     |                     |
	| node    | add -p ha-136200 -v=7                | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 03:00 UTC |                     |
	|         | --alsologtostderr                    |                   |                   |         |                     |                     |
	| node    | ha-136200 node stop m02 -v=7         | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 03:06 UTC | 01 May 24 03:07 UTC |
	|         | --alsologtostderr                    |                   |                   |         |                     |                     |
	| node    | ha-136200 node start m02 -v=7        | ha-136200         | minikube6\jenkins | v1.33.0 | 01 May 24 03:09 UTC |                     |
	|         | --alsologtostderr                    |                   |                   |         |                     |                     |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/01 02:47:19
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0501 02:47:19.308853    4712 out.go:291] Setting OutFile to fd 968 ...
	I0501 02:47:19.308853    4712 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:47:19.308853    4712 out.go:304] Setting ErrFile to fd 940...
	I0501 02:47:19.308853    4712 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:47:19.335053    4712 out.go:298] Setting JSON to false
	I0501 02:47:19.338050    4712 start.go:129] hostinfo: {"hostname":"minikube6","uptime":104693,"bootTime":1714426945,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0501 02:47:19.338050    4712 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0501 02:47:19.343676    4712 out.go:177] * [ha-136200] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0501 02:47:19.347056    4712 notify.go:220] Checking for updates...
	I0501 02:47:19.349570    4712 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0501 02:47:19.352627    4712 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0501 02:47:19.356010    4712 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0501 02:47:19.359527    4712 out.go:177]   - MINIKUBE_LOCATION=18779
	I0501 02:47:19.364982    4712 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0501 02:47:19.368342    4712 driver.go:392] Setting default libvirt URI to qemu:///system
	I0501 02:47:24.771909    4712 out.go:177] * Using the hyperv driver based on user configuration
	I0501 02:47:24.777503    4712 start.go:297] selected driver: hyperv
	I0501 02:47:24.777503    4712 start.go:901] validating driver "hyperv" against <nil>
	I0501 02:47:24.777503    4712 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0501 02:47:24.830749    4712 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0501 02:47:24.832155    4712 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 02:47:24.832679    4712 cni.go:84] Creating CNI manager for ""
	I0501 02:47:24.832679    4712 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0501 02:47:24.832679    4712 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0501 02:47:24.832944    4712 start.go:340] cluster config:
	{Name:ha-136200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:47:24.832944    4712 iso.go:125] acquiring lock: {Name:mkc5178610d1c169635b8b232f2713c359020679 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 02:47:24.837439    4712 out.go:177] * Starting "ha-136200" primary control-plane node in "ha-136200" cluster
	I0501 02:47:24.839631    4712 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0501 02:47:24.839631    4712 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0501 02:47:24.839631    4712 cache.go:56] Caching tarball of preloaded images
	I0501 02:47:24.840411    4712 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0501 02:47:24.840411    4712 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0501 02:47:24.841147    4712 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json ...
	I0501 02:47:24.841147    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json: {Name:mk622c10e63d8ff69d285ce96c3e57bfbed6a54d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:47:24.842583    4712 start.go:360] acquireMachinesLock for ha-136200: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 02:47:24.842583    4712 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-136200"
	I0501 02:47:24.843334    4712 start.go:93] Provisioning new machine with config: &{Name:ha-136200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0501 02:47:24.843334    4712 start.go:125] createHost starting for "" (driver="hyperv")
	I0501 02:47:24.845982    4712 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0501 02:47:24.845982    4712 start.go:159] libmachine.API.Create for "ha-136200" (driver="hyperv")
	I0501 02:47:24.845982    4712 client.go:168] LocalClient.Create starting
	I0501 02:47:24.847217    4712 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0501 02:47:24.847395    4712 main.go:141] libmachine: Decoding PEM data...
	I0501 02:47:24.847395    4712 main.go:141] libmachine: Parsing certificate...
	I0501 02:47:24.847705    4712 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0501 02:47:24.848012    4712 main.go:141] libmachine: Decoding PEM data...
	I0501 02:47:24.848048    4712 main.go:141] libmachine: Parsing certificate...
	I0501 02:47:24.848190    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0501 02:47:27.058462    4712 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0501 02:47:27.058678    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:27.058786    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0501 02:47:28.892262    4712 main.go:141] libmachine: [stdout =====>] : False
	
	I0501 02:47:28.892262    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:28.892262    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0501 02:47:30.440921    4712 main.go:141] libmachine: [stdout =====>] : True
	
	I0501 02:47:30.440921    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:30.441172    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0501 02:47:34.074968    4712 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0501 02:47:34.075096    4712 main.go:141] libmachine: [stderr =====>] : 
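	
	A note on the pattern above, which recurs throughout this log: the hyperv driver shells out to powershell.exe with -NoProfile -NonInteractive and decodes whatever ConvertTo-Json prints. In Get-VMSwitch output, SwitchType 1 denotes an internal switch (here the NAT-backed "Default Switch") and 2 an external one. A minimal Go sketch of that pattern follows; it is illustrative only, not minikube's actual code.
	
	package main
	
	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)
	
	// vmSwitch mirrors the fields selected in the pipeline logged above.
	type vmSwitch struct {
		Id         string
		Name       string
		SwitchType int // 0=Private, 1=Internal, 2=External
	}
	
	func main() {
		out, err := exec.Command(
			`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
			"-NoProfile", "-NonInteractive",
			`ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType)`,
		).Output()
		if err != nil {
			panic(err)
		}
		var switches []vmSwitch
		if err := json.Unmarshal(out, &switches); err != nil {
			panic(err)
		}
		for _, s := range switches {
			fmt.Printf("switch %q (id %s, type %d)\n", s.Name, s.Id, s.SwitchType)
		}
	}
	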
	I0501 02:47:34.077782    4712 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso...
	I0501 02:47:34.612276    4712 main.go:141] libmachine: Creating SSH key...
	I0501 02:47:34.775454    4712 main.go:141] libmachine: Creating VM...
	I0501 02:47:34.775454    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0501 02:47:37.663991    4712 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0501 02:47:37.664390    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:37.664515    4712 main.go:141] libmachine: Using switch "Default Switch"
	I0501 02:47:37.664599    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0501 02:47:39.498071    4712 main.go:141] libmachine: [stdout =====>] : True
	
	I0501 02:47:39.498234    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:39.498234    4712 main.go:141] libmachine: Creating VHD
	I0501 02:47:39.498234    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\fixed.vhd' -SizeBytes 10MB -Fixed
	I0501 02:47:43.230384    4712 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 2B9E163F-083E-4714-9C44-9A52BE438E53
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0501 02:47:43.231369    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:43.231468    4712 main.go:141] libmachine: Writing magic tar header
	I0501 02:47:43.231550    4712 main.go:141] libmachine: Writing SSH key tar header
	I0501 02:47:43.241482    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\disk.vhd' -VHDType Dynamic -DeleteSource
	I0501 02:47:46.427724    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:47:46.427724    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:46.427724    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\disk.vhd' -SizeBytes 20000MB
	I0501 02:47:48.971690    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:47:48.971690    4712 main.go:141] libmachine: [stderr =====>] : 
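	
	The "Writing magic tar header" / "Writing SSH key tar header" lines above refer to the boot2docker-driver trick of writing a small tar archive containing the freshly generated SSH key directly into the fixed VHD's data area; the guest extracts it on first boot, after which Convert-VHD turns the disk dynamic and Resize-VHD grows it to the requested 20000MB. A rough Go sketch of the idea; the offset, entry names, and file paths here are assumptions for illustration, not the driver's exact format.
	
	package main
	
	import (
		"archive/tar"
		"os"
	)
	
	// writeKeyTar writes a tar stream containing an SSH public key at the
	// start of the raw disk image (assumed data area of a fixed VHD).
	func writeKeyTar(diskPath string, key []byte) error {
		f, err := os.OpenFile(diskPath, os.O_WRONLY, 0)
		if err != nil {
			return err
		}
		defer f.Close()
		tw := tar.NewWriter(f)
		if err := tw.WriteHeader(&tar.Header{
			Name: ".ssh/", Mode: 0700, Typeflag: tar.TypeDir,
		}); err != nil {
			return err
		}
		if err := tw.WriteHeader(&tar.Header{
			Name: ".ssh/authorized_keys", Mode: 0644,
			Size: int64(len(key)), Typeflag: tar.TypeReg,
		}); err != nil {
			return err
		}
		if _, err := tw.Write(key); err != nil {
			return err
		}
		return tw.Close()
	}
	
	func main() {
		key, err := os.ReadFile("id_rsa.pub") // hypothetical key path
		if err != nil {
			panic(err)
		}
		if err := writeKeyTar("fixed.vhd", key); err != nil {
			panic(err)
		}
	}
	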
	I0501 02:47:48.971981    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-136200 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0501 02:47:52.766292    4712 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-136200 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0501 02:47:52.766504    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:52.766592    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-136200 -DynamicMemoryEnabled $false
	I0501 02:47:54.972628    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:47:54.972799    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:54.972799    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-136200 -Count 2
	I0501 02:47:57.167635    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:47:57.168510    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:57.168510    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-136200 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\boot2docker.iso'
	I0501 02:47:59.728585    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:47:59.729288    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:59.729288    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-136200 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\disk.vhd'
	I0501 02:48:02.387014    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:48:02.387925    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:02.387925    4712 main.go:141] libmachine: Starting VM...
	I0501 02:48:02.387925    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-136200
	I0501 02:48:05.442902    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:48:05.442902    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:05.442902    4712 main.go:141] libmachine: Waiting for host to start...
	I0501 02:48:05.442902    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:07.690543    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:07.691267    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:07.691267    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:10.234874    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:48:10.234874    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:11.244005    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:13.447426    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:13.447426    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:13.447532    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:16.003794    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:48:16.003794    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:17.014251    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:19.230596    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:19.230596    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:19.231015    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:21.786798    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:48:21.786798    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:22.791035    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:24.970362    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:24.970583    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:24.970826    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:27.538082    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:48:27.539108    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:28.540044    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:30.691694    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:30.691694    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:30.692065    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:33.315166    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:48:33.315166    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:33.315400    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:35.453800    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:35.453800    4712 main.go:141] libmachine: [stderr =====>] : 
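	
	The wait loop above simply alternates two PowerShell queries, VM state and first adapter address, until the guest reports an IP (empty stdout means DHCP has not assigned one yet). A condensed Go sketch of that retry loop; the helper names and the one-second interval are illustrative, not the driver's exact code.
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)
	
	// psOut runs one PowerShell command and returns its trimmed stdout.
	func psOut(command string) (string, error) {
		out, err := exec.Command(
			`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
			"-NoProfile", "-NonInteractive", command,
		).Output()
		return strings.TrimSpace(string(out)), err
	}
	
	// waitForIP polls the VM until its first adapter reports an address,
	// mirroring the loop in the log (empty output means "not yet").
	func waitForIP(vm string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			ip, err := psOut(fmt.Sprintf(
				"(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
			if err == nil && ip != "" {
				return ip, nil
			}
			time.Sleep(time.Second)
		}
		return "", fmt.Errorf("timed out waiting for %s to get an IP", vm)
	}
	
	func main() {
		ip, err := waitForIP("ha-136200", 5*time.Minute)
		if err != nil {
			panic(err)
		}
		fmt.Println("VM IP:", ip)
	}
	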
	I0501 02:48:35.454723    4712 machine.go:94] provisionDockerMachine start ...
	I0501 02:48:35.454940    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:37.590850    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:37.591294    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:37.591378    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:40.152942    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:48:40.153017    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:40.158939    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:48:40.170076    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.217.218 22 <nil> <nil>}
	I0501 02:48:40.170076    4712 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 02:48:40.311850    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0501 02:48:40.311938    4712 buildroot.go:166] provisioning hostname "ha-136200"
	I0501 02:48:40.312011    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:42.387259    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:42.387259    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:42.388241    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:44.941487    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:48:44.942306    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:44.948681    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:48:44.949642    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.217.218 22 <nil> <nil>}
	I0501 02:48:44.949718    4712 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-136200 && echo "ha-136200" | sudo tee /etc/hostname
	I0501 02:48:45.123416    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-136200
	
	I0501 02:48:45.123490    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:47.247911    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:47.247911    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:47.248892    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:49.912733    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:48:49.912733    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:49.920164    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:48:49.920164    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.217.218 22 <nil> <nil>}
	I0501 02:48:49.920749    4712 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-136200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-136200/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-136200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 02:48:50.089597    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 02:48:50.089597    4712 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0501 02:48:50.089597    4712 buildroot.go:174] setting up certificates
	I0501 02:48:50.090153    4712 provision.go:84] configureAuth start
	I0501 02:48:50.090240    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:52.251893    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:52.251893    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:52.251893    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:54.810990    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:48:54.810990    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:54.811881    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:56.907196    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:56.907196    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:56.907196    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:59.487351    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:48:59.487402    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:59.487402    4712 provision.go:143] copyHostCerts
	I0501 02:48:59.487402    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0501 02:48:59.487402    4712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0501 02:48:59.487402    4712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0501 02:48:59.488365    4712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0501 02:48:59.489448    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0501 02:48:59.489632    4712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0501 02:48:59.489632    4712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0501 02:48:59.489632    4712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0501 02:48:59.490981    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0501 02:48:59.491187    4712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0501 02:48:59.491187    4712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0501 02:48:59.491187    4712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0501 02:48:59.492726    4712 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-136200 san=[127.0.0.1 172.28.217.218 ha-136200 localhost minikube]
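	
	configureAuth generates a server certificate signed by the local minikube CA, carrying the SANs listed above (loopback, the VM's DHCP address, the hostname, and friends). A self-contained Go sketch of producing that kind of SAN-bearing server certificate; the SAN values and the 26280h lifetime are taken from the log, while the PKCS#1 RSA CA key format, file names, and serial number are assumptions for the example.
	
	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func check(err error) {
		if err != nil {
			panic(err)
		}
	}
	
	// mustDecode reads a PEM file and returns the first block's DER bytes.
	func mustDecode(path string) []byte {
		raw, err := os.ReadFile(path)
		check(err)
		block, _ := pem.Decode(raw)
		if block == nil {
			panic("no PEM block in " + path)
		}
		return block.Bytes
	}
	
	func main() {
		caCert, err := x509.ParseCertificate(mustDecode("ca.pem"))
		check(err)
		caKey, err := x509.ParsePKCS1PrivateKey(mustDecode("ca-key.pem")) // assumes PKCS#1 RSA
		check(err)
	
		serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
		check(err)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-136200"}},
			// SANs copied from the provision.go line above.
			DNSNames:    []string{"ha-136200", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.28.217.218")},
			NotBefore:   time.Now(),
			NotAfter:    time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
		check(err)
		check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
	}
	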
	I0501 02:48:59.577887    4712 provision.go:177] copyRemoteCerts
	I0501 02:48:59.596375    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 02:48:59.597286    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:01.699383    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:01.699383    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:01.699540    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:04.258891    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:04.258891    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:04.259427    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:49:04.371852    4712 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7744315s)
	I0501 02:49:04.371852    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0501 02:49:04.371852    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 02:49:04.422302    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0501 02:49:04.422602    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0501 02:49:04.478176    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0501 02:49:04.478176    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0501 02:49:04.532091    4712 provision.go:87] duration metric: took 14.4416362s to configureAuth
	I0501 02:49:04.532091    4712 buildroot.go:189] setting minikube options for container-runtime
	I0501 02:49:04.532690    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:49:04.532690    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:06.623956    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:06.623956    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:06.624197    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:09.238280    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:09.238979    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:09.245381    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:49:09.246060    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.217.218 22 <nil> <nil>}
	I0501 02:49:09.246060    4712 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0501 02:49:09.397759    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0501 02:49:09.397835    4712 buildroot.go:70] root file system type: tmpfs
	I0501 02:49:09.398290    4712 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0501 02:49:09.398464    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:11.514026    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:11.514026    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:11.514582    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:14.050483    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:14.050483    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:14.057033    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:49:14.057033    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.217.218 22 <nil> <nil>}
	I0501 02:49:14.057589    4712 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0501 02:49:14.242724    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0501 02:49:14.242724    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:16.392645    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:16.392645    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:16.392749    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:18.993701    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:18.994302    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:19.000048    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:49:19.000537    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.217.218 22 <nil> <nil>}
	I0501 02:49:19.000616    4712 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0501 02:49:21.256124    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
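	
	The one-liner above is a change-detection idiom: diff exits non-zero when the files differ, and also when the installed unit does not exist yet (as the "can't stat" message shows here), so the || branch installs the new unit and (re)starts docker only when something actually changed; "Created symlink" is the output of systemctl enable. The same logic sketched in Go, with hardcoded paths and an assumed root context for illustration.
	
	package main
	
	import (
		"bytes"
		"os"
		"os/exec"
	)
	
	func main() {
		// The installed unit may not exist yet; a read error leaves cur nil.
		cur, _ := os.ReadFile("/lib/systemd/system/docker.service")
		next, err := os.ReadFile("/lib/systemd/system/docker.service.new")
		if err != nil {
			panic(err)
		}
		if bytes.Equal(cur, next) {
			return // nothing changed; leave the running daemon alone
		}
		if err := os.Rename("/lib/systemd/system/docker.service.new",
			"/lib/systemd/system/docker.service"); err != nil {
			panic(err)
		}
		for _, args := range [][]string{
			{"systemctl", "daemon-reload"},
			{"systemctl", "enable", "docker"},
			{"systemctl", "restart", "docker"},
		} {
			if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
				panic(string(out))
			}
		}
	}
	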
	
	I0501 02:49:21.256675    4712 machine.go:97] duration metric: took 45.8016127s to provisionDockerMachine
	I0501 02:49:21.256675    4712 client.go:171] duration metric: took 1m56.4098314s to LocalClient.Create
	I0501 02:49:21.256737    4712 start.go:167] duration metric: took 1m56.4098939s to libmachine.API.Create "ha-136200"
	I0501 02:49:21.256791    4712 start.go:293] postStartSetup for "ha-136200" (driver="hyperv")
	I0501 02:49:21.256828    4712 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 02:49:21.271031    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 02:49:21.271031    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:23.374454    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:23.374634    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:23.374716    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:25.918831    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:25.918831    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:25.919441    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:49:26.030251    4712 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.759185s)
	I0501 02:49:26.044496    4712 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 02:49:26.053026    4712 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 02:49:26.053160    4712 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0501 02:49:26.053600    4712 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0501 02:49:26.054397    4712 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> 142882.pem in /etc/ssl/certs
	I0501 02:49:26.054397    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /etc/ssl/certs/142882.pem
	I0501 02:49:26.070942    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 02:49:26.091568    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /etc/ssl/certs/142882.pem (1708 bytes)
	I0501 02:49:26.143252    4712 start.go:296] duration metric: took 4.8863885s for postStartSetup
	I0501 02:49:26.147080    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:28.257985    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:28.257985    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:28.257985    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:30.792456    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:30.792456    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:30.792456    4712 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json ...
	I0501 02:49:30.796310    4712 start.go:128] duration metric: took 2m5.952044s to createHost
	I0501 02:49:30.796483    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:32.879711    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:32.879711    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:32.880619    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:35.462032    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:35.462032    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:35.468747    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:49:35.469470    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.217.218 22 <nil> <nil>}
	I0501 02:49:35.469470    4712 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0501 02:49:35.611947    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714531775.614259884
	
	I0501 02:49:35.611947    4712 fix.go:216] guest clock: 1714531775.614259884
	I0501 02:49:35.611947    4712 fix.go:229] Guest: 2024-05-01 02:49:35.614259884 +0000 UTC Remote: 2024-05-01 02:49:30.7963907 +0000 UTC m=+131.677772001 (delta=4.817869184s)
	I0501 02:49:35.611947    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:37.726021    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:37.726021    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:37.726021    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:40.253738    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:40.254896    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:40.261655    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:49:40.262498    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.217.218 22 <nil> <nil>}
	I0501 02:49:40.262498    4712 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714531775
	I0501 02:49:40.415406    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed May  1 02:49:35 UTC 2024
	
	I0501 02:49:40.415406    4712 fix.go:236] clock set: Wed May  1 02:49:35 UTC 2024
	 (err=<nil>)
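	
	The clock fix-up above reads the guest clock with date +%s.%N, compares it against the host, and resets the guest with sudo date -s @<epoch> when the drift is noticeable (4.8s here). A small Go sketch of the delta computation; the 2-second threshold is an assumption for illustration, not minikube's value.
	
	package main
	
	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)
	
	// parseGuestClock turns "date +%s.%N" output (for example
	// "1714531775.614259884", as in the log) into a time.Time.
	func parseGuestClock(s string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}
	
	func main() {
		guest, err := parseGuestClock("1714531775.614259884")
		if err != nil {
			panic(err)
		}
		delta := guest.Sub(time.Now())
		fmt.Printf("guest/host clock delta: %v\n", delta)
		if delta > 2*time.Second || delta < -2*time.Second { // threshold is an assumption
			fmt.Printf("would run: sudo date -s @%d\n", guest.Unix())
		}
	}
	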
	I0501 02:49:40.415406    4712 start.go:83] releasing machines lock for "ha-136200", held for 2m15.5712031s
	I0501 02:49:40.416105    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:42.459145    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:42.459226    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:42.459226    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:45.033478    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:45.034063    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:45.038366    4712 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 02:49:45.038515    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:45.050350    4712 ssh_runner.go:195] Run: cat /version.json
	I0501 02:49:45.050350    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:47.229701    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:47.229701    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:47.230427    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:47.254252    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:47.254469    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:47.254558    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:49.922691    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:49.922867    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:49.923261    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:49:49.950446    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:49.950446    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:49.951021    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:49:50.022867    4712 ssh_runner.go:235] Completed: cat /version.json: (4.9724804s)
	I0501 02:49:50.037446    4712 ssh_runner.go:195] Run: systemctl --version
	I0501 02:49:50.123463    4712 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0850592s)
	I0501 02:49:50.137756    4712 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 02:49:50.147834    4712 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 02:49:50.164262    4712 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 02:49:50.197825    4712 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0501 02:49:50.197877    4712 start.go:494] detecting cgroup driver to use...
	I0501 02:49:50.197877    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 02:49:50.246918    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0501 02:49:50.281929    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0501 02:49:50.303725    4712 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0501 02:49:50.317480    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0501 02:49:50.354607    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 02:49:50.392927    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0501 02:49:50.426684    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 02:49:50.464924    4712 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 02:49:50.501540    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0501 02:49:50.541276    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0501 02:49:50.576278    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0501 02:49:50.614209    4712 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 02:49:50.653144    4712 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 02:49:50.688395    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:49:50.921067    4712 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0501 02:49:50.960389    4712 start.go:494] detecting cgroup driver to use...
	I0501 02:49:50.974435    4712 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0501 02:49:51.020319    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 02:49:51.063731    4712 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 02:49:51.113242    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 02:49:51.154151    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 02:49:51.196182    4712 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0501 02:49:51.267621    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 02:49:51.297018    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 02:49:51.359019    4712 ssh_runner.go:195] Run: which cri-dockerd
	I0501 02:49:51.382845    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0501 02:49:51.408532    4712 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0501 02:49:51.459482    4712 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0501 02:49:51.703156    4712 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0501 02:49:51.928842    4712 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0501 02:49:51.928842    4712 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0501 02:49:51.985157    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:49:52.205484    4712 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0501 02:49:54.768628    4712 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5631253s)
	I0501 02:49:54.782717    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0501 02:49:54.821909    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0501 02:49:54.861989    4712 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0501 02:49:55.097455    4712 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0501 02:49:55.325878    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:49:55.547674    4712 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0501 02:49:55.604800    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0501 02:49:55.648909    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:49:55.873886    4712 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0501 02:49:55.987252    4712 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0501 02:49:56.000254    4712 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0501 02:49:56.009412    4712 start.go:562] Will wait 60s for crictl version
	I0501 02:49:56.021925    4712 ssh_runner.go:195] Run: which crictl
	I0501 02:49:56.041055    4712 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 02:49:56.111426    4712 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0501 02:49:56.124879    4712 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0501 02:49:56.172644    4712 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0501 02:49:56.210144    4712 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0501 02:49:56.210144    4712 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0501 02:49:56.214663    4712 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0501 02:49:56.214663    4712 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0501 02:49:56.214663    4712 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0501 02:49:56.214663    4712 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:88:d7:f1 Flags:up|broadcast|multicast|running}
	I0501 02:49:56.218539    4712 ip.go:210] interface addr: fe80::916c:67e8:6e10:a19b/64
	I0501 02:49:56.218539    4712 ip.go:210] interface addr: 172.28.208.1/20
	I0501 02:49:56.231590    4712 ssh_runner.go:195] Run: grep 172.28.208.1	host.minikube.internal$ /etc/hosts
	I0501 02:49:56.237056    4712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
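
The one-liner above rewrites /etc/hosts without ever leaving it half-written: it filters out any stale host.minikube.internal entry, appends the gateway address, builds the result under a temporary name, and only then copies it into place. Broken out for readability:

    # The same /etc/hosts update, step by step:
    { grep -v $'\thost.minikube.internal$' /etc/hosts   # drop any stale entry
      printf '172.28.208.1\thost.minikube.internal\n'   # append the host gateway IP
    } > /tmp/h.$$                                       # build under a temp name first
    sudo cp /tmp/h.$$ /etc/hosts                        # then install it in one copy
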
	I0501 02:49:56.273064    4712 kubeadm.go:877] updating cluster {Name:ha-136200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP:172.28.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.217.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0501 02:49:56.273064    4712 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0501 02:49:56.283976    4712 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0501 02:49:56.305563    4712 docker.go:685] Got preloaded images: 
	I0501 02:49:56.305585    4712 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0501 02:49:56.319781    4712 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0501 02:49:56.352980    4712 ssh_runner.go:195] Run: which lz4
	I0501 02:49:56.361434    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0501 02:49:56.376111    4712 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0501 02:49:56.383203    4712 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0501 02:49:56.383203    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0501 02:49:58.545920    4712 docker.go:649] duration metric: took 2.1838816s to copy over tarball
	I0501 02:49:58.559153    4712 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0501 02:50:07.024882    4712 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.4656661s)
	I0501 02:50:07.024882    4712 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0501 02:50:07.091273    4712 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0501 02:50:07.117701    4712 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0501 02:50:07.169927    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:50:07.413870    4712 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0501 02:50:10.777827    4712 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.363932s)
	I0501 02:50:10.787955    4712 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0501 02:50:10.813130    4712 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0501 02:50:10.813237    4712 cache_images.go:84] Images are preloaded, skipping loading
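
A quick spot-check that the preload landed is to look for one of the listed images:

    # Inside the VM, confirm a preloaded control-plane image is present:
    docker images --format '{{.Repository}}:{{.Tag}}' | grep kube-apiserver
    # expected: registry.k8s.io/kube-apiserver:v1.30.0
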
	I0501 02:50:10.813237    4712 kubeadm.go:928] updating node { 172.28.217.218 8443 v1.30.0 docker true true} ...
	I0501 02:50:10.813471    4712 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-136200 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.217.218
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP:172.28.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 02:50:10.824528    4712 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0501 02:50:10.865306    4712 cni.go:84] Creating CNI manager for ""
	I0501 02:50:10.865306    4712 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0501 02:50:10.865306    4712 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0501 02:50:10.865306    4712 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.28.217.218 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-136200 NodeName:ha-136200 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.28.217.218"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.28.217.218 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0501 02:50:10.866013    4712 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.28.217.218
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-136200"
	  kubeletExtraArgs:
	    node-ip: 172.28.217.218
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.28.217.218"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
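
The rendered config can be sanity-checked before the real init. A sketch using the binary and config paths that appear later in this run; kubeadm's --dry-run renders the manifests without touching the node:

    # Dry-run the generated kubeadm config (sketch):
    sudo /var/lib/minikube/binaries/v1.30.0/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml --dry-run
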
	
	I0501 02:50:10.866164    4712 kube-vip.go:111] generating kube-vip config ...
	I0501 02:50:10.879856    4712 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0501 02:50:10.916330    4712 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0501 02:50:10.916590    4712 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.28.223.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
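
kube-vip runs as a static pod on each control-plane node and wins the VIP 172.28.223.254 through leader election, answering ARP for it on eth0 and load-balancing port 8443 across the apiservers. Once the control plane is up, both halves can be observed from inside a node:

    # On the elected leader, the VIP should be bound to eth0:
    ip addr show eth0 | grep 172.28.223.254
    # And the apiserver should answer through the VIP (healthz is typically
    # readable anonymously on kubeadm clusters):
    curl -k https://172.28.223.254:8443/healthz
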
	I0501 02:50:10.930144    4712 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 02:50:10.946847    4712 binaries.go:44] Found k8s binaries, skipping transfer
	I0501 02:50:10.960617    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0501 02:50:10.980126    4712 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0501 02:50:11.015010    4712 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 02:50:11.046356    4712 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0501 02:50:11.090122    4712 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0501 02:50:11.151082    4712 ssh_runner.go:195] Run: grep 172.28.223.254	control-plane.minikube.internal$ /etc/hosts
	I0501 02:50:11.158193    4712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.223.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 02:50:11.198290    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:50:11.421704    4712 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 02:50:11.457294    4712 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200 for IP: 172.28.217.218
	I0501 02:50:11.457383    4712 certs.go:194] generating shared ca certs ...
	I0501 02:50:11.457383    4712 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:50:11.458373    4712 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0501 02:50:11.458865    4712 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0501 02:50:11.459136    4712 certs.go:256] generating profile certs ...
	I0501 02:50:11.459821    4712 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\client.key
	I0501 02:50:11.459950    4712 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\client.crt with IP's: []
	I0501 02:50:11.600094    4712 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\client.crt ...
	I0501 02:50:11.600094    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\client.crt: {Name:mkd5e4d205a603f84158daca3df4537a47f4507f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:50:11.601407    4712 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\client.key ...
	I0501 02:50:11.601407    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\client.key: {Name:mk0f41aeab078751e43122e83e5a087fadc50acf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:50:11.602800    4712 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.b080b0c6
	I0501 02:50:11.602800    4712 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.b080b0c6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.217.218 172.28.223.254]
	I0501 02:50:11.735634    4712 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.b080b0c6 ...
	I0501 02:50:11.735634    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.b080b0c6: {Name:mk25daf93f931731761fc26133f1d14b1615ea6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:50:11.736662    4712 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.b080b0c6 ...
	I0501 02:50:11.736662    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.b080b0c6: {Name:mk2e8ec633a20ca6bf6f004cdd1aa2dc02923e7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:50:11.738036    4712 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.b080b0c6 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt
	I0501 02:50:11.750002    4712 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.b080b0c6 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key
	I0501 02:50:11.751999    4712 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key
	I0501 02:50:11.751999    4712 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.crt with IP's: []
	I0501 02:50:11.858892    4712 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.crt ...
	I0501 02:50:11.858892    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.crt: {Name:mk545c7bac57fe0475c68dabf35cf7726f7ba6e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:50:11.860058    4712 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key ...
	I0501 02:50:11.860058    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key: {Name:mk197c02f3ddea53477a395060c41fac8b486d54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:50:11.861502    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0501 02:50:11.862042    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0501 02:50:11.862321    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0501 02:50:11.862467    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0501 02:50:11.862467    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0501 02:50:11.862467    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0501 02:50:11.862467    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0501 02:50:11.872340    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0501 02:50:11.872340    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem (1338 bytes)
	W0501 02:50:11.873220    4712 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288_empty.pem, impossibly tiny 0 bytes
	I0501 02:50:11.873220    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0501 02:50:11.873220    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0501 02:50:11.873220    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0501 02:50:11.873220    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0501 02:50:11.874220    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem (1708 bytes)
	I0501 02:50:11.874220    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem -> /usr/share/ca-certificates/14288.pem
	I0501 02:50:11.874220    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /usr/share/ca-certificates/142882.pem
	I0501 02:50:11.875212    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:50:11.877013    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 02:50:11.928037    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 02:50:11.975033    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 02:50:12.024768    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0501 02:50:12.069813    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0501 02:50:12.117563    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0501 02:50:12.166940    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 02:50:12.214744    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 02:50:12.264780    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem --> /usr/share/ca-certificates/14288.pem (1338 bytes)
	I0501 02:50:12.314494    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /usr/share/ca-certificates/142882.pem (1708 bytes)
	I0501 02:50:12.357210    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 02:50:12.407402    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0501 02:50:12.460345    4712 ssh_runner.go:195] Run: openssl version
	I0501 02:50:12.486641    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14288.pem && ln -fs /usr/share/ca-certificates/14288.pem /etc/ssl/certs/14288.pem"
	I0501 02:50:12.524534    4712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14288.pem
	I0501 02:50:12.531940    4712 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:27 /usr/share/ca-certificates/14288.pem
	I0501 02:50:12.545887    4712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14288.pem
	I0501 02:50:12.569538    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14288.pem /etc/ssl/certs/51391683.0"
	I0501 02:50:12.603111    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142882.pem && ln -fs /usr/share/ca-certificates/142882.pem /etc/ssl/certs/142882.pem"
	I0501 02:50:12.640545    4712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142882.pem
	I0501 02:50:12.648489    4712 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:27 /usr/share/ca-certificates/142882.pem
	I0501 02:50:12.664745    4712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142882.pem
	I0501 02:50:12.689236    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142882.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 02:50:12.722220    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 02:50:12.763152    4712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:50:12.771274    4712 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:12 /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:50:12.785811    4712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:50:12.809601    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
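
The openssl/ln pairs above implement OpenSSL's hashed-directory lookup: each CA is linked under its subject hash plus a ".0" suffix, which is where names like b5213941.0 come from:

    # Recompute the symlink name used above for the minikube CA:
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    echo "$h"                                  # prints b5213941 for this CA
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$h.0"
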
	I0501 02:50:12.843815    4712 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 02:50:12.851182    4712 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0501 02:50:12.851596    4712 kubeadm.go:391] StartCluster: {Name:ha-136200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP:172.28.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.217.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:50:12.861439    4712 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0501 02:50:12.897822    4712 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0501 02:50:12.930863    4712 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 02:50:12.967142    4712 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 02:50:12.989079    4712 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 02:50:12.989174    4712 kubeadm.go:156] found existing configuration files:
	
	I0501 02:50:13.002144    4712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 02:50:13.022983    4712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 02:50:13.037263    4712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 02:50:13.070061    4712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 02:50:13.088170    4712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 02:50:13.104788    4712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 02:50:13.142331    4712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 02:50:13.161295    4712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 02:50:13.176372    4712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 02:50:13.217242    4712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 02:50:13.236623    4712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 02:50:13.250242    4712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0501 02:50:13.273719    4712 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0501 02:50:13.796086    4712 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0501 02:50:29.771938    4712 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0501 02:50:29.771938    4712 kubeadm.go:309] [preflight] Running pre-flight checks
	I0501 02:50:29.771938    4712 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0501 02:50:29.772562    4712 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0501 02:50:29.772731    4712 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0501 02:50:29.772731    4712 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0501 02:50:29.775841    4712 out.go:204]   - Generating certificates and keys ...
	I0501 02:50:29.775841    4712 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0501 02:50:29.776550    4712 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0501 02:50:29.776704    4712 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0501 02:50:29.776918    4712 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0501 02:50:29.777081    4712 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0501 02:50:29.777278    4712 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0501 02:50:29.777278    4712 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0501 02:50:29.777278    4712 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-136200 localhost] and IPs [172.28.217.218 127.0.0.1 ::1]
	I0501 02:50:29.777278    4712 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0501 02:50:29.777841    4712 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-136200 localhost] and IPs [172.28.217.218 127.0.0.1 ::1]
	I0501 02:50:29.778067    4712 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0501 02:50:29.778150    4712 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0501 02:50:29.778250    4712 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0501 02:50:29.778341    4712 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0501 02:50:29.778421    4712 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0501 02:50:29.778724    4712 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0501 02:50:29.778804    4712 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0501 02:50:29.778987    4712 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0501 02:50:29.779083    4712 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0501 02:50:29.779174    4712 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0501 02:50:29.779418    4712 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0501 02:50:29.781433    4712 out.go:204]   - Booting up control plane ...
	I0501 02:50:29.781433    4712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0501 02:50:29.781986    4712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0501 02:50:29.782154    4712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0501 02:50:29.782509    4712 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0501 02:50:29.782778    4712 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0501 02:50:29.782833    4712 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0501 02:50:29.783188    4712 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0501 02:50:29.783366    4712 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0501 02:50:29.783611    4712 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.012148578s
	I0501 02:50:29.783792    4712 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0501 02:50:29.783792    4712 kubeadm.go:309] [api-check] The API server is healthy after 9.161500426s
	I0501 02:50:29.783792    4712 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0501 02:50:29.784343    4712 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0501 02:50:29.784449    4712 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0501 02:50:29.784907    4712 kubeadm.go:309] [mark-control-plane] Marking the node ha-136200 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0501 02:50:29.785014    4712 kubeadm.go:309] [bootstrap-token] Using token: bebbcj.jj3pub0bsd9tcu95
	I0501 02:50:29.789897    4712 out.go:204]   - Configuring RBAC rules ...
	I0501 02:50:29.789897    4712 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0501 02:50:29.790579    4712 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0501 02:50:29.790579    4712 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0501 02:50:29.791324    4712 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0501 02:50:29.791589    4712 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0501 02:50:29.791711    4712 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0501 02:50:29.791958    4712 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0501 02:50:29.791958    4712 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0501 02:50:29.791958    4712 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0501 02:50:29.791958    4712 kubeadm.go:309] 
	I0501 02:50:29.791958    4712 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0501 02:50:29.791958    4712 kubeadm.go:309] 
	I0501 02:50:29.792580    4712 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0501 02:50:29.792580    4712 kubeadm.go:309] 
	I0501 02:50:29.792580    4712 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0501 02:50:29.792580    4712 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0501 02:50:29.792580    4712 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0501 02:50:29.792580    4712 kubeadm.go:309] 
	I0501 02:50:29.792580    4712 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0501 02:50:29.793244    4712 kubeadm.go:309] 
	I0501 02:50:29.793244    4712 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0501 02:50:29.793244    4712 kubeadm.go:309] 
	I0501 02:50:29.793244    4712 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0501 02:50:29.793244    4712 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0501 02:50:29.793244    4712 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0501 02:50:29.793868    4712 kubeadm.go:309] 
	I0501 02:50:29.794174    4712 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0501 02:50:29.794395    4712 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0501 02:50:29.794428    4712 kubeadm.go:309] 
	I0501 02:50:29.794531    4712 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token bebbcj.jj3pub0bsd9tcu95 \
	I0501 02:50:29.794720    4712 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:4798dcaffc6298c78af5ad06745de18c1231853c65c9db4cd09ab1be96f5e875 \
	I0501 02:50:29.794720    4712 kubeadm.go:309] 	--control-plane 
	I0501 02:50:29.794720    4712 kubeadm.go:309] 
	I0501 02:50:29.794720    4712 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0501 02:50:29.794720    4712 kubeadm.go:309] 
	I0501 02:50:29.794720    4712 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token bebbcj.jj3pub0bsd9tcu95 \
	I0501 02:50:29.795522    4712 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:4798dcaffc6298c78af5ad06745de18c1231853c65c9db4cd09ab1be96f5e875 
	I0501 02:50:29.795582    4712 cni.go:84] Creating CNI manager for ""
	I0501 02:50:29.795642    4712 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0501 02:50:29.798321    4712 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0501 02:50:29.815739    4712 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0501 02:50:29.823882    4712 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0501 02:50:29.823882    4712 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0501 02:50:29.880076    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
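
The applied manifest is minikube's kindnet CNI. One way to confirm it rolled out, assuming the DaemonSet name kindnet that minikube's manifest uses:

    # Check the CNI DaemonSet created by the apply above (name is an assumption):
    sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get daemonset kindnet
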
	I0501 02:50:30.703674    4712 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0501 02:50:30.720641    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-136200 minikube.k8s.io/updated_at=2024_05_01T02_50_30_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e minikube.k8s.io/name=ha-136200 minikube.k8s.io/primary=true
	I0501 02:50:30.720641    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:30.736553    4712 ops.go:34] apiserver oom_adj: -16
	I0501 02:50:30.914646    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:31.422356    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:31.924569    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:32.422489    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:32.916374    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:33.419951    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:33.922300    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:34.426730    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:34.915815    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:35.415601    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:35.917473    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:36.419572    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:36.923752    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:37.424859    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:37.926096    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:38.415957    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:38.915894    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:39.417286    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:39.917110    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:40.418538    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:40.919363    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:41.420336    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:41.914423    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:42.068730    4712 kubeadm.go:1107] duration metric: took 11.364941s to wait for elevateKubeSystemPrivileges
	W0501 02:50:42.068870    4712 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0501 02:50:42.068934    4712 kubeadm.go:393] duration metric: took 29.2171223s to StartCluster
	I0501 02:50:42.069035    4712 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:50:42.069065    4712 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0501 02:50:42.070934    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:50:42.072021    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0501 02:50:42.072021    4712 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.28.217.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0501 02:50:42.072021    4712 start.go:240] waiting for startup goroutines ...
	I0501 02:50:42.072021    4712 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0501 02:50:42.072021    4712 addons.go:69] Setting storage-provisioner=true in profile "ha-136200"
	I0501 02:50:42.072578    4712 addons.go:234] Setting addon storage-provisioner=true in "ha-136200"
	I0501 02:50:42.072715    4712 host.go:66] Checking if "ha-136200" exists ...
	I0501 02:50:42.072765    4712 addons.go:69] Setting default-storageclass=true in profile "ha-136200"
	I0501 02:50:42.072820    4712 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-136200"
	I0501 02:50:42.073003    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:50:42.073773    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:50:42.074332    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:50:42.237653    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.28.208.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0501 02:50:42.682536    4712 start.go:946] {"host.minikube.internal": 172.28.208.1} host record injected into CoreDNS's ConfigMap
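
The sed pipeline above splices a hosts block (and a log directive) into CoreDNS's Corefile ahead of the forward plugin, so pods resolve host.minikube.internal to the Hyper-V gateway. The patched ConfigMap can be inspected directly:

    # Show the hosts block injected into the Corefile above:
    sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o yaml | grep -A 3 'hosts {'
    # expected:
    #        hosts {
    #           172.28.208.1 host.minikube.internal
    #           fallthrough
    #        }
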
	I0501 02:50:44.322881    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:50:44.322881    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:44.325924    4712 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 02:50:44.323327    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:50:44.325924    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:44.328653    4712 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 02:50:44.328653    4712 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0501 02:50:44.328653    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:50:44.329300    4712 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0501 02:50:44.330211    4712 kapi.go:59] client config for ha-136200: &rest.Config{Host:"https://172.28.223.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-136200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-136200\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1b95ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0501 02:50:44.331266    4712 cert_rotation.go:137] Starting client certificate rotation controller
	I0501 02:50:44.331692    4712 addons.go:234] Setting addon default-storageclass=true in "ha-136200"
	I0501 02:50:44.331692    4712 host.go:66] Checking if "ha-136200" exists ...
	I0501 02:50:44.332839    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:50:46.572964    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:50:46.572964    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:46.573962    4712 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0501 02:50:46.573962    4712 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0501 02:50:46.573962    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:50:46.693061    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:50:46.693131    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:46.693256    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:50:48.834494    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:50:48.834494    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:48.834701    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:50:49.380882    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:50:49.380882    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:49.381777    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:50:49.540602    4712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 02:50:51.474264    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:50:51.474264    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:51.475208    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:50:51.629340    4712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0501 02:50:51.811276    4712 round_trippers.go:463] GET https://172.28.223.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0501 02:50:51.811902    4712 round_trippers.go:469] Request Headers:
	I0501 02:50:51.811902    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:50:51.811902    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:50:51.826458    4712 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0501 02:50:51.827458    4712 round_trippers.go:463] PUT https://172.28.223.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0501 02:50:51.827458    4712 round_trippers.go:469] Request Headers:
	I0501 02:50:51.827458    4712 round_trippers.go:473]     Content-Type: application/json
	I0501 02:50:51.827458    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:50:51.827458    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:50:51.831221    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
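
	[editor's note] The GET/PUT pair above is the default-storageclass addon listing the storage classes and updating "standard". A sketch of the equivalent reconciliation with client-go; the kubeconfig path and the is-default-class annotation are the conventional Kubernetes approach, assumed here rather than copied from minikube's code:

	    package main

	    import (
	        "context"
	        "fmt"

	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	        if err != nil {
	            panic(err)
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        ctx := context.Background()

	        // GET /apis/storage.k8s.io/v1/storageclasses
	        list, err := cs.StorageV1().StorageClasses().List(ctx, metav1.ListOptions{})
	        if err != nil {
	            panic(err)
	        }
	        for i := range list.Items {
	            sc := &list.Items[i]
	            if sc.Name != "standard" {
	                continue
	            }
	            if sc.Annotations == nil {
	                sc.Annotations = map[string]string{}
	            }
	            sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
	            // PUT /apis/storage.k8s.io/v1/storageclasses/standard
	            if _, err := cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
	                panic(err)
	            }
	            fmt.Println("marked", sc.Name, "as default")
	        }
	    }
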
	I0501 02:50:51.834740    4712 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0501 02:50:51.838052    4712 addons.go:505] duration metric: took 9.7659586s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0501 02:50:51.838052    4712 start.go:245] waiting for cluster config update ...
	I0501 02:50:51.838052    4712 start.go:254] writing updated cluster config ...
	I0501 02:50:51.841165    4712 out.go:177] 
	I0501 02:50:51.854479    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:50:51.854479    4712 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json ...
	I0501 02:50:51.861940    4712 out.go:177] * Starting "ha-136200-m02" control-plane node in "ha-136200" cluster
	I0501 02:50:51.865640    4712 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0501 02:50:51.865640    4712 cache.go:56] Caching tarball of preloaded images
	I0501 02:50:51.865640    4712 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0501 02:50:51.866174    4712 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0501 02:50:51.866462    4712 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json ...
	I0501 02:50:51.868358    4712 start.go:360] acquireMachinesLock for ha-136200-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 02:50:51.868358    4712 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-136200-m02"
	I0501 02:50:51.869005    4712 start.go:93] Provisioning new machine with config: &{Name:ha-136200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP:172.28.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.217.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0501 02:50:51.869070    4712 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0501 02:50:51.871919    4712 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0501 02:50:51.872184    4712 start.go:159] libmachine.API.Create for "ha-136200" (driver="hyperv")
	I0501 02:50:51.872184    4712 client.go:168] LocalClient.Create starting
	I0501 02:50:51.872730    4712 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0501 02:50:51.872991    4712 main.go:141] libmachine: Decoding PEM data...
	I0501 02:50:51.872991    4712 main.go:141] libmachine: Parsing certificate...
	I0501 02:50:51.872991    4712 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0501 02:50:51.872991    4712 main.go:141] libmachine: Decoding PEM data...
	I0501 02:50:51.872991    4712 main.go:141] libmachine: Parsing certificate...
	I0501 02:50:51.872991    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0501 02:50:53.846039    4712 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0501 02:50:53.846039    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:53.846893    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0501 02:50:55.665592    4712 main.go:141] libmachine: [stdout =====>] : False
	
	I0501 02:50:55.665592    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:55.665592    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0501 02:50:57.208535    4712 main.go:141] libmachine: [stdout =====>] : True
	
	I0501 02:50:57.208535    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:57.208630    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0501 02:51:00.945176    4712 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0501 02:51:00.945176    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:00.949038    4712 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso...
	I0501 02:51:01.496342    4712 main.go:141] libmachine: Creating SSH key...
	I0501 02:51:02.272582    4712 main.go:141] libmachine: Creating VM...
	I0501 02:51:02.272582    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0501 02:51:05.181880    4712 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0501 02:51:05.181880    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:05.182621    4712 main.go:141] libmachine: Using switch "Default Switch"
	I0501 02:51:05.182621    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0501 02:51:07.021151    4712 main.go:141] libmachine: [stdout =====>] : True
	
	I0501 02:51:07.022208    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:07.022208    4712 main.go:141] libmachine: Creating VHD
	I0501 02:51:07.022261    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0501 02:51:10.800515    4712 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : F5C7D5B1-6A19-4B92-8073-0E024A878A77
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0501 02:51:10.800843    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:10.800925    4712 main.go:141] libmachine: Writing magic tar header
	I0501 02:51:10.800925    4712 main.go:141] libmachine: Writing SSH key tar header
	I0501 02:51:10.813657    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0501 02:51:14.013099    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:14.013099    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:14.013713    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\disk.vhd' -SizeBytes 20000MB
	I0501 02:51:16.613734    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:16.613973    4712 main.go:141] libmachine: [stderr =====>] : 
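
	[editor's note] "Writing magic tar header" above is the docker-machine trick of writing a small tar stream containing the SSH key directly at the start of the fixed VHD (a fixed VHD stores raw disk data first and its footer last), which the guest unpacks on first boot; the disk is then converted to dynamic and resized. A hedged sketch of that write; the exact archive layout is an assumption:

	    package main

	    import (
	        "archive/tar"
	        "os"
	    )

	    // writeKeyToVHD writes a tiny tar stream at byte 0 of a *fixed* VHD so
	    // the archive lands at the start of the virtual disk for the guest to find.
	    func writeKeyToVHD(vhdPath, pubKeyPath string) error {
	        key, err := os.ReadFile(pubKeyPath)
	        if err != nil {
	            return err
	        }
	        f, err := os.OpenFile(vhdPath, os.O_WRONLY, 0644) // overwrite in place, no truncate
	        if err != nil {
	            return err
	        }
	        defer f.Close()

	        tw := tar.NewWriter(f)
	        hdr := &tar.Header{Name: ".ssh/authorized_keys", Mode: 0644, Size: int64(len(key))}
	        if err := tw.WriteHeader(hdr); err != nil {
	            return err
	        }
	        if _, err := tw.Write(key); err != nil {
	            return err
	        }
	        return tw.Close()
	    }

	    func main() {
	        if err := writeKeyToVHD("fixed.vhd", "id_rsa.pub"); err != nil {
	            panic(err)
	        }
	    }
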
	I0501 02:51:16.614122    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-136200-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0501 02:51:20.349642    4712 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-136200-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0501 02:51:20.349642    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:20.349642    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-136200-m02 -DynamicMemoryEnabled $false
	I0501 02:51:22.595804    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:22.595804    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:22.596839    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-136200-m02 -Count 2
	I0501 02:51:24.783891    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:24.783891    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:24.783891    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-136200-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\boot2docker.iso'
	I0501 02:51:27.309419    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:27.309419    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:27.310044    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-136200-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\disk.vhd'
	I0501 02:51:29.998833    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:29.998833    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:29.998833    4712 main.go:141] libmachine: Starting VM...
	I0501 02:51:29.998833    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-136200-m02
	I0501 02:51:33.080959    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:33.080959    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:33.080959    4712 main.go:141] libmachine: Waiting for host to start...
	I0501 02:51:33.080959    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:51:35.347158    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:51:35.348049    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:35.348049    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:51:37.880551    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:37.881580    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:38.889792    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:51:41.091102    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:51:41.091102    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:41.091533    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:51:43.621201    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:43.621813    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:44.622350    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:51:46.859140    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:51:46.859140    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:46.859140    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:51:49.413174    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:49.413174    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:50.423751    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:51:52.633336    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:51:52.633336    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:52.634051    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:51:55.225142    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:55.225142    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:56.229253    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:51:58.424704    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:51:58.424704    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:58.425395    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:01.088984    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:01.088984    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:01.089224    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:03.247035    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:03.247253    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:03.247291    4712 machine.go:94] provisionDockerMachine start ...
	I0501 02:52:03.247449    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:05.493082    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:05.493179    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:05.493179    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:08.078374    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:08.078374    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:08.085777    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:52:08.101463    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.142 22 <nil> <nil>}
	I0501 02:52:08.101463    4712 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 02:52:08.244557    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0501 02:52:08.244557    4712 buildroot.go:166] provisioning hostname "ha-136200-m02"
	I0501 02:52:08.244557    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:10.395193    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:10.395193    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:10.396068    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:12.968300    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:12.968300    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:12.975111    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:52:12.975111    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.142 22 <nil> <nil>}
	I0501 02:52:12.975111    4712 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-136200-m02 && echo "ha-136200-m02" | sudo tee /etc/hostname
	I0501 02:52:13.142328    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-136200-m02
	
	I0501 02:52:13.142479    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:15.318537    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:15.318537    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:15.318537    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:17.993085    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:17.993267    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:18.000242    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:52:18.000687    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.142 22 <nil> <nil>}
	I0501 02:52:18.000687    4712 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-136200-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-136200-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-136200-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 02:52:18.164116    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 02:52:18.164116    4712 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0501 02:52:18.164235    4712 buildroot.go:174] setting up certificates
	I0501 02:52:18.164235    4712 provision.go:84] configureAuth start
	I0501 02:52:18.164235    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:20.323803    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:20.324816    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:20.324954    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:22.884982    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:22.884982    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:22.884982    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:25.037258    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:25.038231    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:25.038262    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:27.637529    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:27.638462    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:27.638462    4712 provision.go:143] copyHostCerts
	I0501 02:52:27.638661    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0501 02:52:27.638979    4712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0501 02:52:27.639093    4712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0501 02:52:27.639613    4712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0501 02:52:27.640827    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0501 02:52:27.641053    4712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0501 02:52:27.641053    4712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0501 02:52:27.641053    4712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0501 02:52:27.642372    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0501 02:52:27.642643    4712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0501 02:52:27.642762    4712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0501 02:52:27.643264    4712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0501 02:52:27.644242    4712 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-136200-m02 san=[127.0.0.1 172.28.213.142 ha-136200-m02 localhost minikube]
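
	[editor's note] The server cert above is signed by the local CA with SANs covering the VM IP and host names shown in the log line. A condensed sketch of SAN-bearing certificate generation with crypto/x509; a throwaway CA stands in for ca.pem/ca-key.pem so the sketch runs standalone, and error handling is trimmed for brevity:

	    package main

	    import (
	        "crypto/rand"
	        "crypto/rsa"
	        "crypto/x509"
	        "crypto/x509/pkix"
	        "encoding/pem"
	        "math/big"
	        "net"
	        "os"
	        "time"
	    )

	    func main() {
	        // Throwaway CA; minikube would load the existing ca.pem/ca-key.pem instead.
	        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	        caTmpl := &x509.Certificate{
	            SerialNumber:          big.NewInt(1),
	            Subject:               pkix.Name{CommonName: "minikubeCA"},
	            NotBefore:             time.Now(),
	            NotAfter:              time.Now().Add(24 * time.Hour),
	            IsCA:                  true,
	            KeyUsage:              x509.KeyUsageCertSign,
	            BasicConstraintsValid: true,
	        }
	        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	        caCert, _ := x509.ParseCertificate(caDER)

	        // Server cert with the SANs seen in the log: IPs plus host names.
	        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	        srvTmpl := &x509.Certificate{
	            SerialNumber: big.NewInt(2),
	            Subject:      pkix.Name{Organization: []string{"jenkins.ha-136200-m02"}},
	            NotBefore:    time.Now(),
	            NotAfter:     time.Now().Add(24 * time.Hour),
	            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.28.213.142")},
	            DNSNames:     []string{"ha-136200-m02", "localhost", "minikube"},
	            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	        }
	        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	    }
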
	I0501 02:52:27.843189    4712 provision.go:177] copyRemoteCerts
	I0501 02:52:27.855361    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 02:52:27.855361    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:29.952775    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:29.952775    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:29.953607    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:32.549323    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:32.549356    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:32.549913    4712 sshutil.go:53] new ssh client: &{IP:172.28.213.142 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\id_rsa Username:docker}
	I0501 02:52:32.667202    4712 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8118058s)
	I0501 02:52:32.667353    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0501 02:52:32.667882    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0501 02:52:32.721631    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0501 02:52:32.721631    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 02:52:32.771533    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0501 02:52:32.772177    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0501 02:52:32.825532    4712 provision.go:87] duration metric: took 14.6610374s to configureAuth
	I0501 02:52:32.825532    4712 buildroot.go:189] setting minikube options for container-runtime
	I0501 02:52:32.826094    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:52:32.826229    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:34.944371    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:34.945326    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:34.945326    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:37.500533    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:37.500590    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:37.506891    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:52:37.507395    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.142 22 <nil> <nil>}
	I0501 02:52:37.507476    4712 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0501 02:52:37.655757    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0501 02:52:37.655757    4712 buildroot.go:70] root file system type: tmpfs
	I0501 02:52:37.655757    4712 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0501 02:52:37.656297    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:39.802845    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:39.802845    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:39.803012    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:42.365445    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:42.366335    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:42.372086    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:52:42.372086    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.142 22 <nil> <nil>}
	I0501 02:52:42.372086    4712 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.28.217.218"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0501 02:52:42.560633    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.28.217.218
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0501 02:52:42.560698    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:44.723552    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:44.723552    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:44.724351    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:47.350624    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:47.350694    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:47.356560    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:52:47.356887    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.142 22 <nil> <nil>}
	I0501 02:52:47.356887    4712 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0501 02:52:49.658910    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0501 02:52:49.658910    4712 machine.go:97] duration metric: took 46.4112065s to provisionDockerMachine
	I0501 02:52:49.659442    4712 client.go:171] duration metric: took 1m57.7858628s to LocalClient.Create
	I0501 02:52:49.659442    4712 start.go:167] duration metric: took 1m57.786395s to libmachine.API.Create "ha-136200"
	I0501 02:52:49.659503    4712 start.go:293] postStartSetup for "ha-136200-m02" (driver="hyperv")
	I0501 02:52:49.659537    4712 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 02:52:49.675636    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 02:52:49.675636    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:51.837386    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:51.837492    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:51.837492    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:54.474409    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:54.475041    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:54.475353    4712 sshutil.go:53] new ssh client: &{IP:172.28.213.142 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\id_rsa Username:docker}
	I0501 02:52:54.588525    4712 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9128536s)
	I0501 02:52:54.605879    4712 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 02:52:54.614578    4712 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 02:52:54.614578    4712 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0501 02:52:54.615019    4712 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0501 02:52:54.615983    4712 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> 142882.pem in /etc/ssl/certs
	I0501 02:52:54.616061    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /etc/ssl/certs/142882.pem
	I0501 02:52:54.630716    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 02:52:54.652380    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /etc/ssl/certs/142882.pem (1708 bytes)
	I0501 02:52:54.707179    4712 start.go:296] duration metric: took 5.0475618s for postStartSetup
	I0501 02:52:54.709671    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:56.857631    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:56.857631    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:56.858662    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:59.468337    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:59.468783    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:59.468965    4712 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json ...
	I0501 02:52:59.470910    4712 start.go:128] duration metric: took 2m7.6009059s to createHost
	I0501 02:52:59.471488    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:53:01.642267    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:53:01.642267    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:01.642528    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:53:04.217977    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:53:04.217977    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:04.224906    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:53:04.225471    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.142 22 <nil> <nil>}
	I0501 02:53:04.225635    4712 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0501 02:53:04.373720    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714531984.377348123
	
	I0501 02:53:04.373720    4712 fix.go:216] guest clock: 1714531984.377348123
	I0501 02:53:04.373720    4712 fix.go:229] Guest: 2024-05-01 02:53:04.377348123 +0000 UTC Remote: 2024-05-01 02:52:59.4709109 +0000 UTC m=+340.350757801 (delta=4.906437223s)
	I0501 02:53:04.373851    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:53:06.539924    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:53:06.539924    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:06.540324    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:53:09.204905    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:53:09.204905    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:09.211685    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:53:09.212163    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.142 22 <nil> <nil>}
	I0501 02:53:09.212163    4712 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714531984
	I0501 02:53:09.386381    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed May  1 02:53:04 UTC 2024
	
	I0501 02:53:09.386381    4712 fix.go:236] clock set: Wed May  1 02:53:04 UTC 2024
	 (err=<nil>)
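
	[editor's note] The guest clock check above reads "date +%s.%N" over SSH, compares it to the host's wall clock (delta=4.906437223s here), and then normalizes the guest clock with "date -s @<epoch>". A sketch of the parse-and-compare step; the two-second threshold is an assumption:

	    package main

	    import (
	        "fmt"
	        "strconv"
	        "strings"
	        "time"
	    )

	    // parseGuestClock turns "1714531984.377348123" (date +%s.%N output,
	    // where %N is always nine digits) into a time.Time.
	    func parseGuestClock(s string) (time.Time, error) {
	        parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	        sec, err := strconv.ParseInt(parts[0], 10, 64)
	        if err != nil {
	            return time.Time{}, err
	        }
	        var nsec int64
	        if len(parts) == 2 {
	            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
	                return time.Time{}, err
	            }
	        }
	        return time.Unix(sec, nsec), nil
	    }

	    func main() {
	        guest, err := parseGuestClock("1714531984.377348123")
	        if err != nil {
	            panic(err)
	        }
	        delta := guest.Sub(time.Now())
	        fmt.Printf("guest=%s delta=%s\n", guest.UTC(), delta)
	        if delta > 2*time.Second || delta < -2*time.Second {
	            // a real fixer would now run: sudo date -s @<epoch seconds>
	            fmt.Println("would reset guest clock")
	        }
	    }
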
	I0501 02:53:09.386381    4712 start.go:83] releasing machines lock for "ha-136200-m02", held for 2m17.5170158s
	I0501 02:53:09.386381    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:53:11.545475    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:53:11.545475    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:11.545938    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:53:14.171918    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:53:14.171918    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:14.175393    4712 out.go:177] * Found network options:
	I0501 02:53:14.178428    4712 out.go:177]   - NO_PROXY=172.28.217.218
	W0501 02:53:14.181305    4712 proxy.go:119] fail to check proxy env: Error ip not in block
	I0501 02:53:14.183961    4712 out.go:177]   - NO_PROXY=172.28.217.218
	W0501 02:53:14.186016    4712 proxy.go:119] fail to check proxy env: Error ip not in block
	W0501 02:53:14.186987    4712 proxy.go:119] fail to check proxy env: Error ip not in block
	I0501 02:53:14.190185    4712 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 02:53:14.190185    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:53:14.201210    4712 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0501 02:53:14.201210    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:53:16.402596    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:53:16.402596    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:16.402596    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:53:16.404382    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:53:16.404922    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:16.404922    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:53:19.202467    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:53:19.202936    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:19.203019    4712 sshutil.go:53] new ssh client: &{IP:172.28.213.142 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\id_rsa Username:docker}
	I0501 02:53:19.238045    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:53:19.238494    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:19.238494    4712 sshutil.go:53] new ssh client: &{IP:172.28.213.142 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\id_rsa Username:docker}
	I0501 02:53:19.303673    4712 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.1023631s)
	W0501 02:53:19.303730    4712 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 02:53:19.322303    4712 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 02:53:19.425813    4712 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.234512s)
	I0501 02:53:19.425813    4712 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0501 02:53:19.425869    4712 start.go:494] detecting cgroup driver to use...
	I0501 02:53:19.426179    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 02:53:19.480110    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0501 02:53:19.516304    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0501 02:53:19.540429    4712 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0501 02:53:19.554725    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0501 02:53:19.592793    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 02:53:19.638122    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0501 02:53:19.676636    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 02:53:19.716798    4712 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 02:53:19.755079    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0501 02:53:19.792962    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0501 02:53:19.828507    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0501 02:53:19.864630    4712 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 02:53:19.900003    4712 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 02:53:19.933687    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:53:20.164043    4712 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0501 02:53:20.200981    4712 start.go:494] detecting cgroup driver to use...
	I0501 02:53:20.214486    4712 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0501 02:53:20.252522    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 02:53:20.291404    4712 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 02:53:20.342446    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 02:53:20.384719    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 02:53:20.433485    4712 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0501 02:53:20.493558    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 02:53:20.521863    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 02:53:20.572266    4712 ssh_runner.go:195] Run: which cri-dockerd
	I0501 02:53:20.592650    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0501 02:53:20.612894    4712 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0501 02:53:20.662972    4712 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0501 02:53:20.893661    4712 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0501 02:53:21.103995    4712 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0501 02:53:21.104092    4712 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0501 02:53:21.153897    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:53:21.367769    4712 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0501 02:53:23.926290    4712 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5584356s)
	I0501 02:53:23.942886    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0501 02:53:23.985733    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0501 02:53:24.029327    4712 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0501 02:53:24.262777    4712 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0501 02:53:24.474412    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:53:24.701708    4712 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0501 02:53:24.747995    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0501 02:53:24.789968    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:53:25.013627    4712 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0501 02:53:25.132301    4712 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0501 02:53:25.147412    4712 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0501 02:53:25.161719    4712 start.go:562] Will wait 60s for crictl version
	I0501 02:53:25.177972    4712 ssh_runner.go:195] Run: which crictl
	I0501 02:53:25.198441    4712 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 02:53:25.257309    4712 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0501 02:53:25.270183    4712 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0501 02:53:25.317675    4712 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0501 02:53:25.366446    4712 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0501 02:53:25.369267    4712 out.go:177]   - env NO_PROXY=172.28.217.218
	I0501 02:53:25.371205    4712 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0501 02:53:25.375182    4712 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0501 02:53:25.375182    4712 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0501 02:53:25.375182    4712 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0501 02:53:25.375182    4712 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:88:d7:f1 Flags:up|broadcast|multicast|running}
	I0501 02:53:25.380319    4712 ip.go:210] interface addr: fe80::916c:67e8:6e10:a19b/64
	I0501 02:53:25.380407    4712 ip.go:210] interface addr: 172.28.208.1/20
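getIPForInterface above walks the host NICs, rejects "Ethernet 2" and the loopback, matches "vEthernet (Default Switch)" by name prefix, and reads its addresses (an IPv6 link-local plus 172.28.208.1/20, the Hyper-V host-side address). A rough Go equivalent of that name-prefix scan (the prefix string is from the log; error handling trimmed for brevity):

    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    // firstIPv4ByPrefix returns the first IPv4 address of the first interface
    // whose name starts with prefix, mimicking ip.go's getIPForInterface.
    func firstIPv4ByPrefix(prefix string) (net.IP, error) {
        ifaces, err := net.Interfaces()
        if err != nil {
            return nil, err
        }
        for _, ifc := range ifaces {
            if !strings.HasPrefix(ifc.Name, prefix) {
                continue // e.g. "Ethernet 2", "Loopback Pseudo-Interface 1"
            }
            addrs, err := ifc.Addrs()
            if err != nil {
                return nil, err
            }
            for _, a := range addrs {
                if ipnet, ok := a.(*net.IPNet); ok {
                    if v4 := ipnet.IP.To4(); v4 != nil {
                        return v4, nil // 172.28.208.1 in the run above
                    }
                }
            }
        }
        return nil, fmt.Errorf("no interface matching %q", prefix)
    }

    func main() {
        ip, err := firstIPv4ByPrefix("vEthernet (Default Switch)")
        if err != nil {
            panic(err)
        }
        fmt.Println(ip)
    }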
	I0501 02:53:25.393209    4712 ssh_runner.go:195] Run: grep 172.28.208.1	host.minikube.internal$ /etc/hosts
	I0501 02:53:25.400057    4712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 02:53:25.423648    4712 mustload.go:65] Loading cluster: ha-136200
	I0501 02:53:25.424611    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:53:25.425544    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:53:27.528822    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:53:27.528822    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:27.528822    4712 host.go:66] Checking if "ha-136200" exists ...
	I0501 02:53:27.530295    4712 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200 for IP: 172.28.213.142
	I0501 02:53:27.530371    4712 certs.go:194] generating shared ca certs ...
	I0501 02:53:27.530371    4712 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:53:27.531276    4712 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0501 02:53:27.531739    4712 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0501 02:53:27.531846    4712 certs.go:256] generating profile certs ...
	I0501 02:53:27.532594    4712 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\client.key
	I0501 02:53:27.532748    4712 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.e4130e12
	I0501 02:53:27.532985    4712 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.e4130e12 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.217.218 172.28.213.142 172.28.223.254]
	I0501 02:53:27.709722    4712 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.e4130e12 ...
	I0501 02:53:27.709722    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.e4130e12: {Name:mk2a82749362965014fb3e2d8d662f7a4e7e9cdd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:53:27.711740    4712 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.e4130e12 ...
	I0501 02:53:27.711740    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.e4130e12: {Name:mkb73c4ed44f1dd1b8f82d46a1302578ac6f367b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:53:27.712120    4712 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.e4130e12 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt
	I0501 02:53:27.726267    4712 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.e4130e12 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key
	I0501 02:53:27.727349    4712 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key
	I0501 02:53:27.727349    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0501 02:53:27.727349    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0501 02:53:27.728383    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0501 02:53:27.728582    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0501 02:53:27.728825    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0501 02:53:27.729015    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0501 02:53:27.729253    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0501 02:53:27.729653    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0501 02:53:27.729899    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem (1338 bytes)
	W0501 02:53:27.730538    4712 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288_empty.pem, impossibly tiny 0 bytes
	I0501 02:53:27.730538    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0501 02:53:27.730927    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0501 02:53:27.731437    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0501 02:53:27.731866    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0501 02:53:27.732310    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem (1708 bytes)
	I0501 02:53:27.732905    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:53:27.733131    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem -> /usr/share/ca-certificates/14288.pem
	I0501 02:53:27.733384    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /usr/share/ca-certificates/142882.pem
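certs.go generates one serving certificate for the apiserver whose IP SANs span the in-cluster service VIP (10.96.0.1), loopback, both control-plane node IPs, and the kube-vip address (172.28.223.254), signs it with the shared minikubeCA, and then stages each key pair as a file asset for upload. A compact Go sketch of issuing such a SAN certificate from a CA (key size, validity, and file names are illustrative; minikube reuses its existing CA rather than generating one, as the "skipping valid ... ca cert" lines show):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Stand-in CA (errors ignored for brevity in this sketch).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * 365 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Serving cert carrying the IP SANs listed in the log above.
        leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        leafTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * 365 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
                net.ParseIP("172.28.217.218"), net.ParseIP("172.28.213.142"), net.ParseIP("172.28.223.254"),
            },
        }
        leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)

        _ = os.WriteFile("apiserver.crt",
            pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: leafDER}), 0644)
        _ = os.WriteFile("apiserver.key",
            pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY",
                Bytes: x509.MarshalPKCS1PrivateKey(leafKey)}), 0600)
    }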
	I0501 02:53:27.733671    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:53:29.906327    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:53:29.906327    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:29.906678    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:53:32.469869    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:53:32.469869    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:32.470407    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
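sshutil.go builds an SSH client from the machine's id_rsa key and the IP Hyper-V just reported; every subsequent "Run:" line executes over that connection. (The "%!s(MISSING)" noise in the stat lines below is an artifact of the command string itself containing %s/%y and being passed through a printf-style logger; the command actually run is stat -c "%s %y" <path>.) A bare-bones equivalent using golang.org/x/crypto/ssh, with host, user, and key path mirroring the log; InsecureIgnoreHostKey is for illustration only:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        keyPEM, err := os.ReadFile(`C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa`)
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(keyPEM)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fine for a throwaway VM, never for production
        }
        client, err := ssh.Dial("tcp", "172.28.217.218:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()

        // One session per command, as ssh_runner does for each "Run:" line.
        out, err := sess.CombinedOutput(`stat -c "%s %y" /var/lib/minikube/certs/sa.pub`)
        fmt.Printf("%s err=%v\n", out, err)
    }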
	I0501 02:53:32.580880    4712 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0501 02:53:32.588963    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0501 02:53:32.624993    4712 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0501 02:53:32.635801    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0501 02:53:32.670832    4712 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0501 02:53:32.678812    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0501 02:53:32.713791    4712 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0501 02:53:32.721308    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0501 02:53:32.760244    4712 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0501 02:53:32.767565    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0501 02:53:32.804387    4712 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0501 02:53:32.811905    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0501 02:53:32.832394    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 02:53:32.885891    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 02:53:32.936137    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 02:53:32.994824    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0501 02:53:33.054042    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0501 02:53:33.105998    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0501 02:53:33.156026    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 02:53:33.205426    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 02:53:33.264385    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 02:53:33.316776    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem --> /usr/share/ca-certificates/14288.pem (1338 bytes)
	I0501 02:53:33.368293    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /usr/share/ca-certificates/142882.pem (1708 bytes)
	I0501 02:53:33.420965    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0501 02:53:33.458001    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0501 02:53:33.499072    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0501 02:53:33.534603    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0501 02:53:33.570373    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0501 02:53:33.602430    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0501 02:53:33.635495    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0501 02:53:33.684802    4712 ssh_runner.go:195] Run: openssl version
	I0501 02:53:33.709070    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 02:53:33.743711    4712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:53:33.750970    4712 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:12 /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:53:33.765746    4712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:53:33.787709    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 02:53:33.828429    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14288.pem && ln -fs /usr/share/ca-certificates/14288.pem /etc/ssl/certs/14288.pem"
	I0501 02:53:33.866546    4712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14288.pem
	I0501 02:53:33.874255    4712 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:27 /usr/share/ca-certificates/14288.pem
	I0501 02:53:33.888580    4712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14288.pem
	I0501 02:53:33.910501    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14288.pem /etc/ssl/certs/51391683.0"
	I0501 02:53:33.948720    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142882.pem && ln -fs /usr/share/ca-certificates/142882.pem /etc/ssl/certs/142882.pem"
	I0501 02:53:33.993042    4712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142882.pem
	I0501 02:53:34.001989    4712 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:27 /usr/share/ca-certificates/142882.pem
	I0501 02:53:34.015762    4712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142882.pem
	I0501 02:53:34.040058    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142882.pem /etc/ssl/certs/3ec20f2e.0"
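The ls/openssl/ln sequence above installs each PEM into OpenSSL's trust directory: `openssl x509 -hash -noout` prints the subject-name hash, and the certificate is then symlinked as /etc/ssl/certs/<hash>.0 (the b5213941.0, 51391683.0, and 3ec20f2e.0 names in the log), which is how OpenSSL's lookup-by-hash finds CA files. A small Go sketch of that convention, shelling out to openssl for the hash (paths are assumptions):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // trustCert symlinks pemPath into dir under OpenSSL's <subject-hash>.0 name.
    func trustCert(pemPath, dir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        link := fmt.Sprintf("%s/%s.0", dir, strings.TrimSpace(string(out)))
        _ = os.Remove(link) // replace a stale link, mirroring `ln -fs`
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            panic(err)
        }
    }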
	I0501 02:53:34.077501    4712 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 02:53:34.086036    4712 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0501 02:53:34.086573    4712 kubeadm.go:928] updating node {m02 172.28.213.142 8443 v1.30.0 docker true true} ...
	I0501 02:53:34.086726    4712 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-136200-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.213.142
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP:172.28.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 02:53:34.086726    4712 kube-vip.go:111] generating kube-vip config ...
	I0501 02:53:34.101653    4712 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0501 02:53:34.130866    4712 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0501 02:53:34.131029    4712 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.28.223.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
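Per the env block above, kube-vip runs lease-based leader election in kube-system (lease plndr-cp-lock, 5s lease duration, 3s renew deadline, 1s retry) and the elected control-plane node answers ARP for 172.28.223.254, with lb_enable spreading :8443 across the apiservers. A trivial Go probe of that VIP endpoint (address and port come from the config above; certificate verification is skipped because this sketch only checks reachability):

    package main

    import (
        "crypto/tls"
        "fmt"
    )

    func main() {
        conn, err := tls.Dial("tcp", "172.28.223.254:8443",
            &tls.Config{InsecureSkipVerify: true}) // reachability check only
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        cs := conn.ConnectionState()
        fmt.Println("VIP answered; server cert CN =", cs.PeerCertificates[0].Subject.CommonName)
    }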
	I0501 02:53:34.145238    4712 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 02:53:34.165400    4712 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0501 02:53:34.180369    4712 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0501 02:53:34.204849    4712 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet
	I0501 02:53:34.204849    4712 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm
	I0501 02:53:34.204849    4712 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl
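Each binary above is fetched with a ?checksum=file:<url>.sha256 query, i.e. the downloader pulls the published sha256 file alongside the binary and verifies it before install. A stripped-down Go version of that verify step (URLs are from the log; this is just the underlying idea, not minikube's download.go):

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
        "strings"
    )

    func fetch(url string) ([]byte, error) {
        resp, err := http.Get(url)
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
        }
        return io.ReadAll(resp.Body)
    }

    func main() {
        const base = "https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet"

        bin, err := fetch(base)
        if err != nil {
            panic(err)
        }
        sum, err := fetch(base + ".sha256")
        if err != nil {
            panic(err)
        }

        want := strings.Fields(string(sum))[0] // file may be "<hex>" or "<hex>  name"
        got := sha256.Sum256(bin)
        if hex.EncodeToString(got[:]) != want {
            panic("checksum mismatch")
        }
        if err := os.WriteFile("kubelet", bin, 0755); err != nil {
            panic(err)
        }
    }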
	I0501 02:53:35.468257    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0501 02:53:35.481254    4712 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0501 02:53:35.488247    4712 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0501 02:53:35.489247    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0501 02:53:35.546630    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0501 02:53:35.559624    4712 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0501 02:53:35.626048    4712 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0501 02:53:35.627145    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0501 02:53:36.028150    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:53:36.077312    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0501 02:53:36.090870    4712 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0501 02:53:36.109939    4712 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0501 02:53:36.111871    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
	I0501 02:53:36.821139    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0501 02:53:36.843821    4712 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0501 02:53:36.878070    4712 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 02:53:36.917707    4712 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0501 02:53:36.971960    4712 ssh_runner.go:195] Run: grep 172.28.223.254	control-plane.minikube.internal$ /etc/hosts
	I0501 02:53:36.979482    4712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.223.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
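The bash one-liner above is the idempotent /etc/hosts update minikube uses twice in this run (host.minikube.internal earlier, control-plane.minikube.internal here): strip any existing line for the name, append the fresh mapping, and cp the temp file back into place. The same logic as a Go sketch (tab-separated format as in the log):

    package main

    import (
        "os"
        "strings"
    )

    // pinHost rewrites /etc/hosts so that exactly one line maps name to ip.
    func pinHost(ip, name string) error {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            return err
        }
        var out strings.Builder
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // drop any stale mapping, like `grep -v $'\t<name>$'`
            }
            out.WriteString(line + "\n")
        }
        out.WriteString(ip + "\t" + name + "\n") // append the fresh entry
        return os.WriteFile("/etc/hosts", []byte(out.String()), 0644)
    }

    func main() {
        if err := pinHost("172.28.223.254", "control-plane.minikube.internal"); err != nil {
            panic(err)
        }
    }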
	I0501 02:53:37.020702    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:53:37.250249    4712 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 02:53:37.282989    4712 host.go:66] Checking if "ha-136200" exists ...
	I0501 02:53:37.299000    4712 start.go:316] joinCluster: &{Name:ha-136200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP:172.28.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.217.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.213.142 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:53:37.299000    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0501 02:53:37.299000    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:53:39.432833    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:53:39.432833    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:39.433070    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:53:42.011853    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:53:42.011853    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:42.012855    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:53:42.240815    4712 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0": (4.9416996s)
	I0501 02:53:42.240889    4712 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.28.213.142 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0501 02:53:42.240889    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ig07su.dw1rkx9dngecbwrb --discovery-token-ca-cert-hash sha256:4798dcaffc6298c78af5ad06745de18c1231853c65c9db4cd09ab1be96f5e875 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-136200-m02 --control-plane --apiserver-advertise-address=172.28.213.142 --apiserver-bind-port=8443"
	I0501 02:54:27.807891    4712 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ig07su.dw1rkx9dngecbwrb --discovery-token-ca-cert-hash sha256:4798dcaffc6298c78af5ad06745de18c1231853c65c9db4cd09ab1be96f5e875 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-136200-m02 --control-plane --apiserver-advertise-address=172.28.213.142 --apiserver-bind-port=8443": (45.5666728s)
	I0501 02:54:27.808014    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0501 02:54:28.660694    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-136200-m02 minikube.k8s.io/updated_at=2024_05_01T02_54_28_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e minikube.k8s.io/name=ha-136200 minikube.k8s.io/primary=false
	I0501 02:54:28.861404    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-136200-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0501 02:54:29.035785    4712 start.go:318] duration metric: took 51.7364106s to joinCluster
	I0501 02:54:29.035979    4712 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.28.213.142 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0501 02:54:29.038999    4712 out.go:177] * Verifying Kubernetes components...
	I0501 02:54:29.036818    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:54:29.055991    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:54:29.482004    4712 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 02:54:29.532870    4712 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0501 02:54:29.534181    4712 kapi.go:59] client config for ha-136200: &rest.Config{Host:"https://172.28.223.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-136200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-136200\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1b95ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0501 02:54:29.534386    4712 kubeadm.go:477] Overriding stale ClientConfig host https://172.28.223.254:8443 with https://172.28.217.218:8443
	I0501 02:54:29.535958    4712 node_ready.go:35] waiting up to 6m0s for node "ha-136200-m02" to be "Ready" ...
	I0501 02:54:29.536236    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:29.536236    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:29.536236    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:29.536353    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:29.609745    4712 round_trippers.go:574] Response Status: 200 OK in 73 milliseconds
	I0501 02:54:30.045557    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:30.045557    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:30.045557    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:30.045557    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:30.051535    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:30.542020    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:30.542083    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:30.542148    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:30.542148    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:30.549047    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:31.050630    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:31.050694    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:31.050694    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:31.050694    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:31.063209    4712 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0501 02:54:31.542025    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:31.542098    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:31.542098    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:31.542098    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:31.548667    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:31.549663    4712 node_ready.go:53] node "ha-136200-m02" has status "Ready":"False"
	I0501 02:54:32.050097    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:32.050097    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:32.050174    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:32.050174    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:32.054568    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:32.542017    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:32.542017    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:32.542017    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:32.542017    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:32.546488    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:33.050866    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:33.050866    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:33.050866    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:33.050866    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:33.056451    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:33.538033    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:33.538033    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:33.538033    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:33.538033    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:33.713541    4712 round_trippers.go:574] Response Status: 200 OK in 175 milliseconds
	I0501 02:54:33.714615    4712 node_ready.go:53] node "ha-136200-m02" has status "Ready":"False"
	I0501 02:54:34.041226    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:34.041226    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:34.041226    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:34.041226    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:34.047903    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:34.547454    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:34.547454    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:34.547757    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:34.547757    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:34.552099    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:35.046877    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:35.046877    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.046877    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.046877    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.052278    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:35.548463    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:35.548463    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.548740    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.548740    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.558660    4712 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0501 02:54:35.560213    4712 node_ready.go:49] node "ha-136200-m02" has status "Ready":"True"
	I0501 02:54:35.560213    4712 node_ready.go:38] duration metric: took 6.0241453s for node "ha-136200-m02" to be "Ready" ...
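The node_ready.go loop above is a ~500ms poll of GET /api/v1/nodes/<name> until status.conditions reports type=Ready, status=True (6.02s in this run). A minimal client-side equivalent using only net/http and encoding/json (assumption: the http.Client already carries the cluster's client certificates, as in the kapi.go dump above; TLS setup is elided):

    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
        "time"
    )

    // nodeReady reports whether the node's Ready condition is True.
    func nodeReady(client *http.Client, apiServer, node string) (bool, error) {
        resp, err := client.Get(apiServer + "/api/v1/nodes/" + node)
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()
        var obj struct {
            Status struct {
                Conditions []struct {
                    Type   string `json:"type"`
                    Status string `json:"status"`
                } `json:"conditions"`
            } `json:"status"`
        }
        if err := json.NewDecoder(resp.Body).Decode(&obj); err != nil {
            return false, err
        }
        for _, c := range obj.Status.Conditions {
            if c.Type == "Ready" {
                return c.Status == "True", nil
            }
        }
        return false, nil
    }

    func main() {
        client := &http.Client{Timeout: 10 * time.Second} // plus TLS client-cert config in real use
        deadline := time.Now().Add(6 * time.Minute)       // the 6m0s budget from the log
        for time.Now().Before(deadline) {
            if ok, err := nodeReady(client, "https://172.28.217.218:8443", "ha-136200-m02"); err == nil && ok {
                fmt.Println("node Ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        panic("timed out waiting for node to become Ready")
    }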
	I0501 02:54:35.560332    4712 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 02:54:35.560422    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:54:35.560422    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.560422    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.560422    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.572050    4712 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0501 02:54:35.581777    4712 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2j8mj" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:35.581924    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2j8mj
	I0501 02:54:35.581924    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.581924    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.581924    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.585770    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:54:35.587608    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:54:35.587685    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.587685    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.587685    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.591867    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:35.591867    4712 pod_ready.go:92] pod "coredns-7db6d8ff4d-2j8mj" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:35.591867    4712 pod_ready.go:81] duration metric: took 10.0903ms for pod "coredns-7db6d8ff4d-2j8mj" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:35.591867    4712 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rm4gm" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:35.591867    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rm4gm
	I0501 02:54:35.591867    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.591867    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.591867    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.596249    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:35.597880    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:54:35.597964    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.597964    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.597964    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.602327    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:35.603007    4712 pod_ready.go:92] pod "coredns-7db6d8ff4d-rm4gm" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:35.603007    4712 pod_ready.go:81] duration metric: took 11.1397ms for pod "coredns-7db6d8ff4d-rm4gm" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:35.603007    4712 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:35.604166    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200
	I0501 02:54:35.604211    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.604211    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.604211    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.610508    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:35.611831    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:54:35.611831    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.611831    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.611831    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.627921    4712 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0501 02:54:35.629498    4712 pod_ready.go:92] pod "etcd-ha-136200" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:35.629498    4712 pod_ready.go:81] duration metric: took 26.4906ms for pod "etcd-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:35.629498    4712 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:35.629498    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:35.629498    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.629498    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.629498    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.638393    4712 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0501 02:54:35.638911    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:35.638911    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.638911    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.639550    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.643473    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:54:36.140037    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:36.140167    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:36.140167    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:36.140167    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:36.148123    4712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:54:36.149580    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:36.149580    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:36.149659    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:36.149659    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:36.155530    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:36.644340    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:36.644340    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:36.644340    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:36.644340    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:36.651321    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:36.652588    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:36.653128    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:36.653128    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:36.653128    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:36.660377    4712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:54:37.144534    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:37.144656    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:37.144656    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:37.144656    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:37.150598    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:37.152092    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:37.152665    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:37.152665    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:37.152665    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:37.160441    4712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:54:37.644104    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:37.644239    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:37.644239    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:37.644239    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:37.649836    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:37.650604    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:37.650671    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:37.650671    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:37.650671    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:37.654947    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:37.656164    4712 pod_ready.go:102] pod "etcd-ha-136200-m02" in "kube-system" namespace has status "Ready":"False"
	I0501 02:54:38.142505    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:38.142505    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:38.142505    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:38.142505    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:38.149100    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:38.151258    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:38.151347    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:38.151347    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:38.151347    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:38.155677    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:38.643186    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:38.643241    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:38.643241    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:38.643241    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:38.650578    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:38.651873    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:38.651902    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:38.651902    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:38.651902    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:38.655946    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:39.142681    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:39.142681    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:39.142681    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:39.142681    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:39.148315    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:39.149953    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:39.150203    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:39.150203    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:39.150203    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:39.154771    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:39.643364    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:39.643364    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:39.643364    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:39.643364    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:39.649703    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:39.650947    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:39.650947    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:39.651009    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:39.651009    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:39.654949    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:54:39.656190    4712 pod_ready.go:102] pod "etcd-ha-136200-m02" in "kube-system" namespace has status "Ready":"False"
	I0501 02:54:40.142428    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:40.142428    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:40.142676    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:40.142676    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:40.148562    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:40.149792    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:40.149792    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:40.149792    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:40.149792    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:40.154808    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:40.644095    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:40.644095    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:40.644095    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:40.644095    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:40.650441    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:40.651544    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:40.651598    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:40.651598    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:40.651598    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:40.662172    4712 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0501 02:54:41.143094    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:41.143187    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:41.143187    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:41.143187    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:41.148870    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:41.150018    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:41.150018    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:41.150018    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:41.150018    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:41.156810    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:41.640508    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:41.640624    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:41.640624    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:41.640624    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:41.646018    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:41.646730    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:41.647318    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:41.647318    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:41.647318    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:41.652880    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:42.139900    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:42.139985    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:42.139985    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:42.139985    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:42.145577    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:42.146383    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:42.146383    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:42.146448    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:42.146448    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:42.151141    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:54:42.151862    4712 pod_ready.go:102] pod "etcd-ha-136200-m02" in "kube-system" namespace has status "Ready":"False"
	I0501 02:54:42.639271    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:42.639271    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:42.639271    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:42.639271    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:42.642318    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:54:42.646671    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:42.646671    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:42.646671    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:42.646671    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:42.651360    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:43.137151    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:43.137496    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.137496    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.137496    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.141750    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:43.142959    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:43.142959    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.142959    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.142959    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.147560    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:43.641950    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:43.641985    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.641985    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.641985    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.647599    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:43.649299    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:43.649350    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.649350    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.649350    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.657034    4712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:54:43.658043    4712 pod_ready.go:92] pod "etcd-ha-136200-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:43.658043    4712 pod_ready.go:81] duration metric: took 8.0284866s for pod "etcd-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
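[editor's note] The loop above is minikube's pod readiness wait: it re-fetches the pod and its node roughly every 500ms until the Ready condition flips to True, under a 6m0s budget. A minimal client-go sketch of the same check, not minikube's actual pod_ready implementation (the kubeconfig path is a placeholder):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", `C:\path\to\kubeconfig`) // hypothetical path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	deadline := time.Now().Add(6 * time.Minute) // same 6m0s budget as the log
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-ha-136200-m02", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence above
	}
	fmt.Println("timed out waiting for pod")
}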
	I0501 02:54:43.658043    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:43.658043    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-136200
	I0501 02:54:43.658043    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.658043    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.658043    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.664394    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:43.664394    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:54:43.664394    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.664394    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.664394    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.668848    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:43.669848    4712 pod_ready.go:92] pod "kube-apiserver-ha-136200" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:43.669848    4712 pod_ready.go:81] duration metric: took 11.805ms for pod "kube-apiserver-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:43.669848    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:43.669848    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-136200-m02
	I0501 02:54:43.669848    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.669848    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.670830    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.674754    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:54:43.676699    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:43.676699    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.676699    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.676699    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.681632    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:43.683231    4712 pod_ready.go:92] pod "kube-apiserver-ha-136200-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:43.683231    4712 pod_ready.go:81] duration metric: took 13.3825ms for pod "kube-apiserver-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:43.683231    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:43.683412    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-136200
	I0501 02:54:43.683412    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.683412    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.683412    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.688589    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:43.690255    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:54:43.690255    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.690325    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.690325    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.695853    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:43.696818    4712 pod_ready.go:92] pod "kube-controller-manager-ha-136200" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:43.696860    4712 pod_ready.go:81] duration metric: took 13.6296ms for pod "kube-controller-manager-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:43.696912    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:43.696993    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-136200-m02
	I0501 02:54:43.697029    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.697029    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.697029    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.701912    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:43.703032    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:43.703736    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.703736    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.703736    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.706383    4712 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:54:43.707734    4712 pod_ready.go:92] pod "kube-controller-manager-ha-136200-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:43.707824    4712 pod_ready.go:81] duration metric: took 10.9115ms for pod "kube-controller-manager-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:43.707824    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8f67k" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:43.845210    4712 request.go:629] Waited for 137.1807ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8f67k
	I0501 02:54:43.845681    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8f67k
	I0501 02:54:43.845681    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.845681    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.845681    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.851000    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:44.047046    4712 request.go:629] Waited for 194.7517ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:54:44.047200    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:54:44.047200    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:44.047200    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:44.047200    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:44.052548    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:44.053735    4712 pod_ready.go:92] pod "kube-proxy-8f67k" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:44.053735    4712 pod_ready.go:81] duration metric: took 345.9086ms for pod "kube-proxy-8f67k" in "kube-system" namespace to be "Ready" ...
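[editor's note] The "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's local token-bucket rate limiter, not from the apiserver. A minimal sketch of raising those limits on rest.Config (values are illustrative; client-go's defaults have historically been 5 QPS with a burst of 10):

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", `C:\path\to\kubeconfig`) // hypothetical path
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // steady-state requests per second
	cfg.Burst = 100 // short-term burst allowance
	_ = kubernetes.NewForConfigOrDie(cfg) // clients built from cfg now throttle far less
}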
	I0501 02:54:44.053735    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zj5jv" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:44.250128    4712 request.go:629] Waited for 196.1147ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zj5jv
	I0501 02:54:44.250128    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zj5jv
	I0501 02:54:44.250128    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:44.250128    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:44.250128    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:44.254761    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:44.456435    4712 request.go:629] Waited for 200.6839ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:44.456435    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:44.456435    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:44.456435    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:44.456435    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:44.461480    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:44.462518    4712 pod_ready.go:92] pod "kube-proxy-zj5jv" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:44.462578    4712 pod_ready.go:81] duration metric: took 408.7057ms for pod "kube-proxy-zj5jv" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:44.462578    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:44.648779    4712 request.go:629] Waited for 185.8104ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200
	I0501 02:54:44.648953    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200
	I0501 02:54:44.648953    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:44.648953    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:44.649128    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:44.654457    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:44.855621    4712 request.go:629] Waited for 199.4812ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:54:44.855706    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:54:44.855706    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:44.855706    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:44.855706    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:44.861147    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:44.861147    4712 pod_ready.go:92] pod "kube-scheduler-ha-136200" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:44.861699    4712 pod_ready.go:81] duration metric: took 399.1179ms for pod "kube-scheduler-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:44.861778    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:45.042766    4712 request.go:629] Waited for 180.9309ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200-m02
	I0501 02:54:45.042766    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200-m02
	I0501 02:54:45.042766    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:45.042766    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:45.042766    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:45.047379    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:45.244553    4712 request.go:629] Waited for 197.0101ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:45.244553    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:45.244553    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:45.244553    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:45.244553    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:45.250870    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:45.252485    4712 pod_ready.go:92] pod "kube-scheduler-ha-136200-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:45.252485    4712 pod_ready.go:81] duration metric: took 390.7033ms for pod "kube-scheduler-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:45.252547    4712 pod_ready.go:38] duration metric: took 9.6921442s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 02:54:45.252619    4712 api_server.go:52] waiting for apiserver process to appear ...
	I0501 02:54:45.266607    4712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:54:45.298538    4712 api_server.go:72] duration metric: took 16.2624407s to wait for apiserver process to appear ...
	I0501 02:54:45.298538    4712 api_server.go:88] waiting for apiserver healthz status ...
	I0501 02:54:45.298642    4712 api_server.go:253] Checking apiserver healthz at https://172.28.217.218:8443/healthz ...
	I0501 02:54:45.308804    4712 api_server.go:279] https://172.28.217.218:8443/healthz returned 200:
	ok
	I0501 02:54:45.308804    4712 round_trippers.go:463] GET https://172.28.217.218:8443/version
	I0501 02:54:45.308804    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:45.308804    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:45.308804    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:45.310764    4712 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0501 02:54:45.311165    4712 api_server.go:141] control plane version: v1.30.0
	I0501 02:54:45.311238    4712 api_server.go:131] duration metric: took 12.7003ms to wait for apiserver health ...
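[editor's note] The healthz probe above is a plain HTTPS GET that expects a 200 response with body "ok". A minimal sketch; TLS verification is skipped here purely for brevity, whereas minikube validates against the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
	}}
	resp, err := client.Get("https://172.28.217.218:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%d body=%q\n", resp.StatusCode, body) // expect 200 and "ok"
}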
	I0501 02:54:45.311238    4712 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 02:54:45.446869    4712 request.go:629] Waited for 135.3903ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:54:45.446869    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:54:45.446869    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:45.446869    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:45.446869    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:45.455463    4712 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0501 02:54:45.466055    4712 system_pods.go:59] 17 kube-system pods found
	I0501 02:54:45.466055    4712 system_pods.go:61] "coredns-7db6d8ff4d-2j8mj" [f945c979-ae51-4c8e-acf9-105adc3c83bc] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "coredns-7db6d8ff4d-rm4gm" [87b284b3-e8e1-452a-8c72-41a8bec62505] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "etcd-ha-136200" [509a726d-e9a1-4922-8e7e-f3d91ddef75f] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "etcd-ha-136200-m02" [8122eb28-1fdf-4ddf-ab30-c29e8bcb83c0] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kindnet-kb2x4" [6e660648-3dce-469f-a2c2-c99f445ceb20] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kindnet-sj2rc" [c0e605a0-1182-4977-a8ba-fabe9617bd3c] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-apiserver-ha-136200" [53ea7d41-7132-4c89-9dbd-bedb2267b55f] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-apiserver-ha-136200-m02" [fc4036e1-5cc9-4f27-8299-97ee4a29e8b4] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-controller-manager-ha-136200" [4c988ab2-e056-4a0e-88c9-b62839c84d9f] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-controller-manager-ha-136200-m02" [7a617a7e-7413-4f42-bfe2-763b7ace71ca] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-proxy-8f67k" [9dedea03-3066-4852-98e2-10190699b2c5] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-proxy-zj5jv" [1802b341-6ac6-46b0-99a3-db02ae5d8e46] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-scheduler-ha-136200" [6be37365-544a-4367-9852-6eaa5b60e6ad] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-scheduler-ha-136200-m02" [b2ae6bb2-989b-4598-99e3-f8494b006f3e] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-vip-ha-136200" [f6f631ac-0ba9-413a-8810-8a80e4be81b8] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-vip-ha-136200-m02" [598e76fa-0703-40eb-a62c-f3947f06d0e0] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "storage-provisioner" [ca2bdb41-e7f0-4aa4-b343-0e3a85a4c04e] Running
	I0501 02:54:45.466055    4712 system_pods.go:74] duration metric: took 154.8157ms to wait for pod list to return data ...
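[editor's note] The 17-pod inventory above is a single List call against the kube-system namespace. A compact sketch in the same vein, reusing a client built as in the earlier readiness example:

package podcheck

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// countRunning lists kube-system pods and counts those in phase Running,
// mirroring the system_pods check in the log.
func countRunning(client kubernetes.Interface) (int, error) {
	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return 0, err
	}
	running := 0
	for _, p := range pods.Items {
		if p.Status.Phase == corev1.PodRunning {
			running++
		}
	}
	return running, nil
}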
	I0501 02:54:45.466055    4712 default_sa.go:34] waiting for default service account to be created ...
	I0501 02:54:45.650374    4712 request.go:629] Waited for 183.8749ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/default/serviceaccounts
	I0501 02:54:45.650461    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/default/serviceaccounts
	I0501 02:54:45.650461    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:45.650566    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:45.650566    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:45.661544    4712 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0501 02:54:45.662734    4712 default_sa.go:45] found service account: "default"
	I0501 02:54:45.662869    4712 default_sa.go:55] duration metric: took 196.812ms for default service account to be created ...
	I0501 02:54:45.662869    4712 system_pods.go:116] waiting for k8s-apps to be running ...
	I0501 02:54:45.853192    4712 request.go:629] Waited for 189.9269ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:54:45.853192    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:54:45.853192    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:45.853419    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:45.853419    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:45.865601    4712 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0501 02:54:45.872777    4712 system_pods.go:86] 17 kube-system pods found
	I0501 02:54:45.872777    4712 system_pods.go:89] "coredns-7db6d8ff4d-2j8mj" [f945c979-ae51-4c8e-acf9-105adc3c83bc] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "coredns-7db6d8ff4d-rm4gm" [87b284b3-e8e1-452a-8c72-41a8bec62505] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "etcd-ha-136200" [509a726d-e9a1-4922-8e7e-f3d91ddef75f] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "etcd-ha-136200-m02" [8122eb28-1fdf-4ddf-ab30-c29e8bcb83c0] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kindnet-kb2x4" [6e660648-3dce-469f-a2c2-c99f445ceb20] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kindnet-sj2rc" [c0e605a0-1182-4977-a8ba-fabe9617bd3c] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kube-apiserver-ha-136200" [53ea7d41-7132-4c89-9dbd-bedb2267b55f] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kube-apiserver-ha-136200-m02" [fc4036e1-5cc9-4f27-8299-97ee4a29e8b4] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kube-controller-manager-ha-136200" [4c988ab2-e056-4a0e-88c9-b62839c84d9f] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kube-controller-manager-ha-136200-m02" [7a617a7e-7413-4f42-bfe2-763b7ace71ca] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kube-proxy-8f67k" [9dedea03-3066-4852-98e2-10190699b2c5] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kube-proxy-zj5jv" [1802b341-6ac6-46b0-99a3-db02ae5d8e46] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kube-scheduler-ha-136200" [6be37365-544a-4367-9852-6eaa5b60e6ad] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kube-scheduler-ha-136200-m02" [b2ae6bb2-989b-4598-99e3-f8494b006f3e] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kube-vip-ha-136200" [f6f631ac-0ba9-413a-8810-8a80e4be81b8] Running
	I0501 02:54:45.873359    4712 system_pods.go:89] "kube-vip-ha-136200-m02" [598e76fa-0703-40eb-a62c-f3947f06d0e0] Running
	I0501 02:54:45.873359    4712 system_pods.go:89] "storage-provisioner" [ca2bdb41-e7f0-4aa4-b343-0e3a85a4c04e] Running
	I0501 02:54:45.873383    4712 system_pods.go:126] duration metric: took 210.5126ms to wait for k8s-apps to be running ...
	I0501 02:54:45.873383    4712 system_svc.go:44] waiting for kubelet service to be running ....
	I0501 02:54:45.886040    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:54:45.914966    4712 system_svc.go:56] duration metric: took 41.5829ms WaitForService to wait for kubelet
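[editor's note] The kubelet check runs `sudo systemctl is-active --quiet service kubelet` through minikube's ssh_runner. A minimal sketch of the same round trip with golang.org/x/crypto/ssh (address and key path are placeholders):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile(`C:\path\to\machines\id_rsa`) // hypothetical key path
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // illustration only
	}
	client, err := ssh.Dial("tcp", "172.28.217.218:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	// Run returns a non-nil error for a non-zero exit status.
	if err := sess.Run("sudo systemctl is-active --quiet service kubelet"); err != nil {
		fmt.Println("kubelet not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}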
	I0501 02:54:45.915054    4712 kubeadm.go:576] duration metric: took 16.8789526s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 02:54:45.915054    4712 node_conditions.go:102] verifying NodePressure condition ...
	I0501 02:54:46.043164    4712 request.go:629] Waited for 127.8974ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes
	I0501 02:54:46.043164    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes
	I0501 02:54:46.043164    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:46.043164    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:46.043310    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:46.050320    4712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:54:46.051501    4712 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 02:54:46.051501    4712 node_conditions.go:123] node cpu capacity is 2
	I0501 02:54:46.051501    4712 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 02:54:46.051501    4712 node_conditions.go:123] node cpu capacity is 2
	I0501 02:54:46.051501    4712 node_conditions.go:105] duration metric: took 136.4457ms to run NodePressure ...
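[editor's note] The NodePressure verification above reads each node's capacity and pressure conditions from one Nodes().List call. A compact sketch in the same style (client construction as in the earlier examples):

package nodecheck

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeConditions mirrors the node_conditions output in the log:
// ephemeral-storage and cpu capacity, plus any active pressure conditions.
func printNodeConditions(client kubernetes.Interface) error {
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		fmt.Printf("%s storage=%s cpu=%s\n", n.Name,
			n.Status.Capacity.StorageEphemeral().String(), // e.g. 17734596Ki
			n.Status.Capacity.Cpu().String())              // e.g. 2
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				if c.Status == corev1.ConditionTrue {
					fmt.Printf("  pressure: %s\n", c.Type)
				}
			}
		}
	}
	return nil
}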
	I0501 02:54:46.051501    4712 start.go:240] waiting for startup goroutines ...
	I0501 02:54:46.051501    4712 start.go:254] writing updated cluster config ...
	I0501 02:54:46.055981    4712 out.go:177] 
	I0501 02:54:46.073210    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:54:46.073681    4712 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json ...
	I0501 02:54:46.079155    4712 out.go:177] * Starting "ha-136200-m03" control-plane node in "ha-136200" cluster
	I0501 02:54:46.082550    4712 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0501 02:54:46.082550    4712 cache.go:56] Caching tarball of preloaded images
	I0501 02:54:46.083028    4712 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0501 02:54:46.083223    4712 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0501 02:54:46.083384    4712 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json ...
	I0501 02:54:46.091748    4712 start.go:360] acquireMachinesLock for ha-136200-m03: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 02:54:46.091748    4712 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-136200-m03"
	I0501 02:54:46.091748    4712 start.go:93] Provisioning new machine with config: &{Name:ha-136200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP:172.28.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.217.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.213.142 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:fals
e ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bin
aryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0501 02:54:46.091748    4712 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0501 02:54:46.099863    4712 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0501 02:54:46.100178    4712 start.go:159] libmachine.API.Create for "ha-136200" (driver="hyperv")
	I0501 02:54:46.100178    4712 client.go:168] LocalClient.Create starting
	I0501 02:54:46.100178    4712 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0501 02:54:46.100824    4712 main.go:141] libmachine: Decoding PEM data...
	I0501 02:54:46.100824    4712 main.go:141] libmachine: Parsing certificate...
	I0501 02:54:46.101128    4712 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0501 02:54:46.101380    4712 main.go:141] libmachine: Decoding PEM data...
	I0501 02:54:46.101380    4712 main.go:141] libmachine: Parsing certificate...
	I0501 02:54:46.101380    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0501 02:54:48.122930    4712 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0501 02:54:48.122930    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:54:48.122930    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0501 02:54:49.970242    4712 main.go:141] libmachine: [stdout =====>] : False
	
	I0501 02:54:49.971128    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:54:49.971128    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0501 02:54:51.553112    4712 main.go:141] libmachine: [stdout =====>] : True
	
	I0501 02:54:51.553112    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:54:51.553966    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0501 02:54:55.355693    4712 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0501 02:54:55.355693    4712 main.go:141] libmachine: [stderr =====>] : 
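[editor's note] The libmachine Hyper-V driver pattern above is: shell out to powershell.exe and decode the ConvertTo-Json output. A minimal sketch of that pattern; the struct mirrors the Id/Name/SwitchType fields printed in the log:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type vmSwitch struct {
	Id         string
	Name       string
	SwitchType int // the log's "Default Switch" reports 1 (Internal)
}

func main() {
	script := `ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType)`
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
	if err != nil {
		panic(err)
	}
	var switches []vmSwitch
	if err := json.Unmarshal(out, &switches); err != nil {
		panic(err)
	}
	for _, s := range switches {
		fmt.Printf("%s (%s) type=%d\n", s.Name, s.Id, s.SwitchType)
	}
}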
	I0501 02:54:55.358013    4712 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso...
	I0501 02:54:55.879042    4712 main.go:141] libmachine: Creating SSH key...
	I0501 02:54:55.991258    4712 main.go:141] libmachine: Creating VM...
	I0501 02:54:55.991258    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0501 02:54:58.933270    4712 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0501 02:54:58.933270    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:54:58.933270    4712 main.go:141] libmachine: Using switch "Default Switch"
	I0501 02:54:58.933728    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0501 02:55:00.789675    4712 main.go:141] libmachine: [stdout =====>] : True
	
	I0501 02:55:00.789938    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:00.789938    4712 main.go:141] libmachine: Creating VHD
	I0501 02:55:00.789938    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0501 02:55:04.583967    4712 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : AAB86B48-3D75-4842-8FF8-3BDEC4AB86C2
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0501 02:55:04.584134    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:04.584192    4712 main.go:141] libmachine: Writing magic tar header
	I0501 02:55:04.584192    4712 main.go:141] libmachine: Writing SSH key tar header
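[editor's note] The "magic tar header" step appears to write a small tar stream into the head of the fixed VHD (a fixed VHD is raw data plus a trailing footer), so the guest can extract the SSH key on first boot before the disk is converted to dynamic below. A sketch under that assumption, following the docker-machine convention of an .ssh/authorized_keys entry (path and key material are placeholders; error handling elided for brevity):

package main

import (
	"archive/tar"
	"os"
)

func main() {
	// Open the fixed-size VHD created by New-VHD above and write a tar
	// archive at offset 0; the VHD footer at the end stays untouched.
	f, err := os.OpenFile(`C:\path\to\fixed.vhd`, os.O_WRONLY, 0644) // hypothetical path
	if err != nil {
		panic(err)
	}
	defer f.Close()
	key := []byte("ssh-rsa AAAA... docker@minikube\n") // placeholder key material
	tw := tar.NewWriter(f)
	tw.WriteHeader(&tar.Header{Name: ".ssh/", Typeflag: tar.TypeDir, Mode: 0700})
	tw.WriteHeader(&tar.Header{Name: ".ssh/authorized_keys", Mode: 0600, Size: int64(len(key))})
	tw.Write(key)
	tw.Close() // flushes the final tar padding blocks
}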
	I0501 02:55:04.594277    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0501 02:55:07.812902    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:07.812902    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:07.812902    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\disk.vhd' -SizeBytes 20000MB
	I0501 02:55:10.391210    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:10.391245    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:10.391352    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-136200-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0501 02:55:14.151278    4712 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-136200-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0501 02:55:14.151278    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:14.151882    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-136200-m03 -DynamicMemoryEnabled $false
	I0501 02:55:16.476957    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:16.476957    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:16.478022    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-136200-m03 -Count 2
	I0501 02:55:18.717259    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:18.717259    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:18.717850    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-136200-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\boot2docker.iso'
	I0501 02:55:21.310252    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:21.310252    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:21.310252    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-136200-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\disk.vhd'
	I0501 02:55:24.025209    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:24.025209    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:24.025533    4712 main.go:141] libmachine: Starting VM...
	I0501 02:55:24.025533    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-136200-m03
	I0501 02:55:27.131510    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:27.131510    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:27.131722    4712 main.go:141] libmachine: Waiting for host to start...
	I0501 02:55:27.131722    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:55:29.452098    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:55:29.453035    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:29.453089    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:55:32.025441    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:32.026234    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:33.036612    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:55:35.273538    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:55:35.273538    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:35.273538    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:55:37.849230    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:37.849355    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:38.854379    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:55:41.083466    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:55:41.083466    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:41.083466    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:55:43.607622    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:43.607622    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:44.621333    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:55:46.858272    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:55:46.858272    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:46.858272    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:55:49.475402    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:49.476316    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:50.480573    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:55:52.723494    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:55:52.723494    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:52.724713    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:55:55.378897    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:55:55.378897    4712 main.go:141] libmachine: [stderr =====>] : 
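[editor's note] The "Waiting for host to start" loop above keeps polling the VM's state and its first NIC until an IPv4 address appears, sleeping about a second between probes (each PowerShell round trip itself takes 2-3s here). A minimal sketch:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// vmIP asks Hyper-V for the first IP address of the VM's first adapter,
// the same PowerShell expression used in the log.
func vmIP(name string) (string, error) {
	script := fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, name)
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	for {
		ip, err := vmIP("ha-136200-m03")
		if err == nil && ip != "" {
			fmt.Println("host is up at", ip)
			return
		}
		time.Sleep(time.Second) // the log sleeps ~1s between probes
	}
}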
	I0501 02:55:55.379189    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:55:57.536029    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:55:57.536029    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:57.536246    4712 machine.go:94] provisionDockerMachine start ...
	I0501 02:55:57.536246    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:55:59.681292    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:55:59.681842    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:59.682021    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:02.296390    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:02.296390    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:02.302435    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:56:02.303031    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.216.62 22 <nil> <nil>}
	I0501 02:56:02.303031    4712 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 02:56:02.440858    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0501 02:56:02.440919    4712 buildroot.go:166] provisioning hostname "ha-136200-m03"
	I0501 02:56:02.440919    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:04.540210    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:04.540210    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:04.541126    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:07.111624    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:07.111624    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:07.118513    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:56:07.119097    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.216.62 22 <nil> <nil>}
	I0501 02:56:07.119097    4712 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-136200-m03 && echo "ha-136200-m03" | sudo tee /etc/hostname
	I0501 02:56:07.274395    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-136200-m03
	
	I0501 02:56:07.274395    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:09.427222    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:09.427413    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:09.427413    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:12.066151    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:12.066558    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:12.072701    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:56:12.073263    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.216.62 22 <nil> <nil>}
	I0501 02:56:12.073263    4712 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-136200-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-136200-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-136200-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 02:56:12.226572    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 02:56:12.226572    4712 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0501 02:56:12.226572    4712 buildroot.go:174] setting up certificates
	I0501 02:56:12.226572    4712 provision.go:84] configureAuth start
	I0501 02:56:12.226572    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:14.383697    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:14.383832    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:14.383916    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:17.017056    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:17.017236    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:17.017236    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:19.246383    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:19.247269    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:19.247269    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:21.887343    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:21.887343    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:21.887343    4712 provision.go:143] copyHostCerts
	I0501 02:56:21.887688    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0501 02:56:21.887688    4712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0501 02:56:21.887688    4712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0501 02:56:21.888470    4712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0501 02:56:21.889606    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0501 02:56:21.890069    4712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0501 02:56:21.890132    4712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0501 02:56:21.890553    4712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0501 02:56:21.891611    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0501 02:56:21.891800    4712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0501 02:56:21.891800    4712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0501 02:56:21.892337    4712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0501 02:56:21.893162    4712 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-136200-m03 san=[127.0.0.1 172.28.216.62 ha-136200-m03 localhost minikube]
	I0501 02:56:21.973101    4712 provision.go:177] copyRemoteCerts
	I0501 02:56:21.993116    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 02:56:21.993116    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:24.169668    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:24.169668    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:24.170031    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:26.830749    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:26.831099    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:26.831162    4712 sshutil.go:53] new ssh client: &{IP:172.28.216.62 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\id_rsa Username:docker}
	I0501 02:56:26.935784    4712 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9426327s)
	I0501 02:56:26.935784    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0501 02:56:26.936266    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 02:56:26.985792    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0501 02:56:26.986191    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0501 02:56:27.035460    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0501 02:56:27.036450    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0501 02:56:27.092775    4712 provision.go:87] duration metric: took 14.8660953s to configureAuth
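configureAuth copies the shared CA and a freshly minted server certificate (SANs: 127.0.0.1, 172.28.216.62, ha-136200-m03, localhost, minikube) into /etc/docker on the guest so dockerd can serve TLS on tcp://0.0.0.0:2376. The endpoint can then be checked from the client side with the stock docker CLI; a sketch assuming a POSIX host, with the cert paths shortened from the Windows layout shown in this log:

    docker --tlsverify \
      --tlscacert certs/ca.pem --tlscert certs/cert.pem --tlskey certs/key.pem \
      -H tcp://172.28.216.62:2376 version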
	I0501 02:56:27.092775    4712 buildroot.go:189] setting minikube options for container-runtime
	I0501 02:56:27.093873    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:56:27.094011    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:29.214442    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:29.214910    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:29.214910    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:31.848020    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:31.848124    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:31.859047    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:56:31.859047    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.216.62 22 <nil> <nil>}
	I0501 02:56:31.859047    4712 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0501 02:56:31.983811    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0501 02:56:31.983936    4712 buildroot.go:70] root file system type: tmpfs
	I0501 02:56:31.984160    4712 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0501 02:56:31.984160    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:34.146679    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:34.146679    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:34.146837    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:36.793925    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:36.794747    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:36.801153    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:56:36.801782    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.216.62 22 <nil> <nil>}
	I0501 02:56:36.801782    4712 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.28.217.218"
	Environment="NO_PROXY=172.28.217.218,172.28.213.142"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0501 02:56:36.960579    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.28.217.218
	Environment=NO_PROXY=172.28.217.218,172.28.213.142
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0501 02:56:36.960579    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:39.141157    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:39.141157    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:39.141800    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:41.765565    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:41.766216    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:41.774239    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:56:41.774411    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.216.62 22 <nil> <nil>}
	I0501 02:56:41.774411    4712 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0501 02:56:43.994230    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0501 02:56:43.994313    4712 machine.go:97] duration metric: took 46.4577313s to provisionDockerMachine
	I0501 02:56:43.994313    4712 client.go:171] duration metric: took 1m57.8932783s to LocalClient.Create
	I0501 02:56:43.994313    4712 start.go:167] duration metric: took 1m57.8932783s to libmachine.API.Create "ha-136200"
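The docker.service install a few lines up relies on a small idempotency idiom: `diff -u current staged` exits 0 when the unit is unchanged, so the `|| { mv; daemon-reload; enable; restart; }` branch only fires when the file actually differs (here diff fails outright because no docker.service existed yet, hence the "Created symlink" output). One detail worth noting in the rendered unit: systemd keeps both `Environment=NO_PROXY=...` lines and, for the same variable, the later assignment wins, so the effective value is the two-address list. That can be confirmed on the guest with:

    systemctl show docker --property=Environment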
	I0501 02:56:43.994428    4712 start.go:293] postStartSetup for "ha-136200-m03" (driver="hyperv")
	I0501 02:56:43.994473    4712 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 02:56:44.010383    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 02:56:44.010383    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:46.225048    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:46.225772    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:46.225844    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:48.918999    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:48.918999    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:48.919679    4712 sshutil.go:53] new ssh client: &{IP:172.28.216.62 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\id_rsa Username:docker}
	I0501 02:56:49.032380    4712 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0219067s)
	I0501 02:56:49.045700    4712 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 02:56:49.054180    4712 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 02:56:49.054180    4712 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0501 02:56:49.054700    4712 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0501 02:56:49.055002    4712 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> 142882.pem in /etc/ssl/certs
	I0501 02:56:49.055574    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /etc/ssl/certs/142882.pem
	I0501 02:56:49.071048    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 02:56:49.092423    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /etc/ssl/certs/142882.pem (1708 bytes)
	I0501 02:56:49.143151    4712 start.go:296] duration metric: took 5.1486851s for postStartSetup
	I0501 02:56:49.146034    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:51.349851    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:51.350067    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:51.350153    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:54.016657    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:54.017149    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:54.017380    4712 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json ...
	I0501 02:56:54.019460    4712 start.go:128] duration metric: took 2m7.9267809s to createHost
	I0501 02:56:54.019460    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:56.211168    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:56.211168    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:56.211168    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:58.811673    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:58.811673    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:58.818618    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:56:58.819348    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.216.62 22 <nil> <nil>}
	I0501 02:56:58.819348    4712 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0501 02:56:58.949732    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714532218.937413126
	
	I0501 02:56:58.949732    4712 fix.go:216] guest clock: 1714532218.937413126
	I0501 02:56:58.949732    4712 fix.go:229] Guest: 2024-05-01 02:56:58.937413126 +0000 UTC Remote: 2024-05-01 02:56:54.0194605 +0000 UTC m=+574.897601601 (delta=4.917952626s)
	I0501 02:56:58.949941    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:57:01.095786    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:57:01.095786    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:01.096436    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:57:03.649884    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:57:03.649884    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:03.657161    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:57:03.657803    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.216.62 22 <nil> <nil>}
	I0501 02:57:03.657803    4712 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714532218
	I0501 02:57:03.807080    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed May  1 02:56:58 UTC 2024
	
	I0501 02:57:03.807174    4712 fix.go:236] clock set: Wed May  1 02:56:58 UTC 2024
	 (err=<nil>)
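Guest clock handling: minikube reads the guest's epoch time (`date +%s.%N`, rendered above through Go's %!s(MISSING) logging artifact), compares it with the host's, and resets the guest via `date -s @<epoch>` when the drift is too large; here a ~4.9s delta was corrected. A minimal sketch of the same check, assuming direct SSH access from a POSIX host:

    # compare guest and host epoch seconds, then reset the guest clock
    guest=$(ssh docker@172.28.216.62 'date +%s'); host=$(date +%s)
    echo "drift: $((host - guest))s"
    ssh docker@172.28.216.62 "sudo date -s @${host}"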
	I0501 02:57:03.807174    4712 start.go:83] releasing machines lock for "ha-136200-m03", held for 2m17.7144231s
	I0501 02:57:03.807438    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:57:05.979339    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:57:05.979339    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:05.979339    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:57:08.602379    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:57:08.602379    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:08.605250    4712 out.go:177] * Found network options:
	I0501 02:57:08.607292    4712 out.go:177]   - NO_PROXY=172.28.217.218,172.28.213.142
	W0501 02:57:08.610080    4712 proxy.go:119] fail to check proxy env: Error ip not in block
	W0501 02:57:08.610080    4712 proxy.go:119] fail to check proxy env: Error ip not in block
	I0501 02:57:08.612307    4712 out.go:177]   - NO_PROXY=172.28.217.218,172.28.213.142
	W0501 02:57:08.614962    4712 proxy.go:119] fail to check proxy env: Error ip not in block
	W0501 02:57:08.614962    4712 proxy.go:119] fail to check proxy env: Error ip not in block
	W0501 02:57:08.616207    4712 proxy.go:119] fail to check proxy env: Error ip not in block
	W0501 02:57:08.616207    4712 proxy.go:119] fail to check proxy env: Error ip not in block
	I0501 02:57:08.619160    4712 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 02:57:08.619160    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:57:08.631565    4712 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0501 02:57:08.631565    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:57:10.838360    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:57:10.838735    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:10.838874    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:57:10.838874    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:57:10.838934    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:10.838934    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:57:13.624235    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:57:13.624235    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:13.624235    4712 sshutil.go:53] new ssh client: &{IP:172.28.216.62 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\id_rsa Username:docker}
	I0501 02:57:13.648439    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:57:13.648490    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:13.648768    4712 sshutil.go:53] new ssh client: &{IP:172.28.216.62 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\id_rsa Username:docker}
	I0501 02:57:13.732596    4712 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.1009937s)
	W0501 02:57:13.732596    4712 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 02:57:13.748662    4712 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 02:57:13.811529    4712 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0501 02:57:13.811529    4712 start.go:494] detecting cgroup driver to use...
	I0501 02:57:13.811529    4712 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1923313s)
	I0501 02:57:13.812665    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 02:57:13.867675    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0501 02:57:13.906069    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0501 02:57:13.929632    4712 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0501 02:57:13.947027    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0501 02:57:13.986248    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 02:57:14.024920    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0501 02:57:14.061978    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 02:57:14.099821    4712 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 02:57:14.138543    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0501 02:57:14.181270    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0501 02:57:14.217808    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0501 02:57:14.261794    4712 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 02:57:14.297051    4712 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 02:57:14.332222    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:57:14.558529    4712 ssh_runner.go:195] Run: sudo systemctl restart containerd
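Even though Docker is the selected runtime, the containerd config is normalized first: the sed passes above pin the sandbox (pause) image, force `SystemdCgroup = false` so containerd uses the cgroupfs driver, migrate the legacy `io.containerd.runtime.v1.linux` and `runc.v1` names to `io.containerd.runc.v2`, and point the CNI conf_dir at /etc/cni/net.d. One representative edit, runnable on its own:

    # force the cgroupfs driver in containerd's config (same sed as in the log)
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
    sudo systemctl daemon-reload && sudo systemctl restart containerd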
	I0501 02:57:14.595594    4712 start.go:494] detecting cgroup driver to use...
	I0501 02:57:14.610122    4712 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0501 02:57:14.650440    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 02:57:14.689246    4712 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 02:57:14.740013    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 02:57:14.780524    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 02:57:14.822987    4712 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0501 02:57:14.889904    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 02:57:14.919061    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 02:57:14.983590    4712 ssh_runner.go:195] Run: which cri-dockerd
	I0501 02:57:15.008856    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0501 02:57:15.032703    4712 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0501 02:57:15.086991    4712 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0501 02:57:15.324922    4712 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0501 02:57:15.542551    4712 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0501 02:57:15.542551    4712 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0501 02:57:15.594658    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:57:15.826063    4712 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0501 02:57:18.399291    4712 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5732092s)
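The 130-byte /etc/docker/daemon.json written just above carries the matching Docker-side cgroup setting ("configuring docker to use cgroupfs"). Its exact contents are not shown in the log; a daemon.json selecting the same driver would plausibly look like this (assumed shape, not the verbatim file):

    # assumed shape of the generated /etc/docker/daemon.json
    cat <<'EOF' | sudo tee /etc/docker/daemon.json
    { "exec-opts": ["native.cgroupdriver=cgroupfs"] }
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart docker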
	I0501 02:57:18.412657    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0501 02:57:18.452282    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0501 02:57:18.491033    4712 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0501 02:57:18.702768    4712 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0501 02:57:18.928695    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:57:19.145438    4712 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0501 02:57:19.199070    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0501 02:57:19.242280    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:57:19.475811    4712 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0501 02:57:19.598548    4712 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0501 02:57:19.612590    4712 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0501 02:57:19.624279    4712 start.go:562] Will wait 60s for crictl version
	I0501 02:57:19.637235    4712 ssh_runner.go:195] Run: which crictl
	I0501 02:57:19.657683    4712 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 02:57:19.721351    4712 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
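With Docker fronted by cri-dockerd, kubelet and crictl speak CRI over /var/run/cri-dockerd.sock, which is what the /etc/crictl.yaml written earlier points at; that is why RuntimeName reports docker through a v1 CRI API. The same probe by hand, with the endpoint spelled out explicitly:

    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version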
	I0501 02:57:19.734095    4712 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0501 02:57:19.784976    4712 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0501 02:57:19.822576    4712 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0501 02:57:19.826041    4712 out.go:177]   - env NO_PROXY=172.28.217.218
	I0501 02:57:19.827741    4712 out.go:177]   - env NO_PROXY=172.28.217.218,172.28.213.142
	I0501 02:57:19.831635    4712 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0501 02:57:19.835639    4712 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0501 02:57:19.835639    4712 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0501 02:57:19.835639    4712 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0501 02:57:19.835639    4712 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:88:d7:f1 Flags:up|broadcast|multicast|running}
	I0501 02:57:19.838638    4712 ip.go:210] interface addr: fe80::916c:67e8:6e10:a19b/64
	I0501 02:57:19.838638    4712 ip.go:210] interface addr: 172.28.208.1/20
	I0501 02:57:19.851676    4712 ssh_runner.go:195] Run: grep 172.28.208.1	host.minikube.internal$ /etc/hosts
	I0501 02:57:19.858242    4712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
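The host.minikube.internal mapping uses a strip-then-append rewrite: filter out any previous entry for the name, append the fresh one, write to a temp file, and install it with a single cp, so repeated runs never duplicate the line. The generic form of the idiom:

    # replace-or-add a pinned hosts entry without duplicates (same idiom as the log)
    IP=172.28.208.1 HOST=host.minikube.internal
    { grep -v $'\t'"${HOST}"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$HOST"; } > "/tmp/h.$$"
    sudo cp "/tmp/h.$$" /etc/hosts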
	I0501 02:57:19.883254    4712 mustload.go:65] Loading cluster: ha-136200
	I0501 02:57:19.883656    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:57:19.884140    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:57:22.018331    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:57:22.018592    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:22.018658    4712 host.go:66] Checking if "ha-136200" exists ...
	I0501 02:57:22.019393    4712 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200 for IP: 172.28.216.62
	I0501 02:57:22.019393    4712 certs.go:194] generating shared ca certs ...
	I0501 02:57:22.019393    4712 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:57:22.020318    4712 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0501 02:57:22.020786    4712 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0501 02:57:22.021028    4712 certs.go:256] generating profile certs ...
	I0501 02:57:22.021028    4712 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\client.key
	I0501 02:57:22.021606    4712 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.cbcfb2e9
	I0501 02:57:22.021767    4712 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.cbcfb2e9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.217.218 172.28.213.142 172.28.216.62 172.28.223.254]
	I0501 02:57:22.149544    4712 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.cbcfb2e9 ...
	I0501 02:57:22.149544    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.cbcfb2e9: {Name:mk4837fbdb29e34df2c44991c600cda784a93d5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:57:22.150373    4712 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.cbcfb2e9 ...
	I0501 02:57:22.150373    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.cbcfb2e9: {Name:mkcff5432d26e17c25cf2a9709eb4553a31e99c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:57:22.152472    4712 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.cbcfb2e9 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt
	I0501 02:57:22.165924    4712 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.cbcfb2e9 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key
	I0501 02:57:22.166444    4712 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key
	I0501 02:57:22.166444    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0501 02:57:22.167623    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0501 02:57:22.167772    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0501 02:57:22.167772    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0501 02:57:22.168122    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0501 02:57:22.168280    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0501 02:57:22.168464    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0501 02:57:22.168464    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0501 02:57:22.169490    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem (1338 bytes)
	W0501 02:57:22.169490    4712 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288_empty.pem, impossibly tiny 0 bytes
	I0501 02:57:22.170595    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0501 02:57:22.170869    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0501 02:57:22.171164    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0501 02:57:22.171434    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0501 02:57:22.171670    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem (1708 bytes)
	I0501 02:57:22.172286    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /usr/share/ca-certificates/142882.pem
	I0501 02:57:22.172286    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:57:22.172286    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem -> /usr/share/ca-certificates/14288.pem
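The regenerated apiserver certificate above is issued for every endpoint a client might use to reach this HA control plane: the in-cluster service IPs (10.96.0.1, 10.0.0.1), loopback, all three control-plane node IPs, and the kube-vip virtual IP 172.28.223.254. The SAN list can be inspected after the fact (requires OpenSSL 1.1.1+ for -ext; path shortened from the Windows layout above):

    openssl x509 -noout -ext subjectAltName \
      -in .minikube/profiles/ha-136200/apiserver.crt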
	I0501 02:57:22.172911    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:57:24.374168    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:57:24.374168    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:24.374904    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:57:26.980450    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:57:26.980450    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:26.980450    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:57:27.093857    4712 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0501 02:57:27.102183    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0501 02:57:27.141690    4712 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0501 02:57:27.150194    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0501 02:57:27.193806    4712 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0501 02:57:27.202957    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0501 02:57:27.254044    4712 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0501 02:57:27.262605    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0501 02:57:27.303214    4712 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0501 02:57:27.310453    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0501 02:57:27.348966    4712 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0501 02:57:27.356382    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0501 02:57:27.383468    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 02:57:27.437872    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 02:57:27.494095    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 02:57:27.544977    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0501 02:57:27.599083    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0501 02:57:27.652123    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0501 02:57:27.710792    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 02:57:27.766379    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 02:57:27.817284    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /usr/share/ca-certificates/142882.pem (1708 bytes)
	I0501 02:57:27.867949    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 02:57:27.930560    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem --> /usr/share/ca-certificates/14288.pem (1338 bytes)
	I0501 02:57:27.987875    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0501 02:57:28.025174    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0501 02:57:28.061492    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0501 02:57:28.099323    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0501 02:57:28.133169    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0501 02:57:28.168585    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0501 02:57:28.223450    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0501 02:57:28.292690    4712 ssh_runner.go:195] Run: openssl version
	I0501 02:57:28.315882    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142882.pem && ln -fs /usr/share/ca-certificates/142882.pem /etc/ssl/certs/142882.pem"
	I0501 02:57:28.353000    4712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142882.pem
	I0501 02:57:28.365096    4712 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:27 /usr/share/ca-certificates/142882.pem
	I0501 02:57:28.379858    4712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142882.pem
	I0501 02:57:28.406814    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142882.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 02:57:28.445706    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 02:57:28.482484    4712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:57:28.491120    4712 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:12 /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:57:28.507367    4712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:57:28.535421    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 02:57:28.574647    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14288.pem && ln -fs /usr/share/ca-certificates/14288.pem /etc/ssl/certs/14288.pem"
	I0501 02:57:28.616757    4712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14288.pem
	I0501 02:57:28.624484    4712 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:27 /usr/share/ca-certificates/14288.pem
	I0501 02:57:28.642485    4712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14288.pem
	I0501 02:57:28.665148    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14288.pem /etc/ssl/certs/51391683.0"
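The three ln -fs calls above create OpenSSL-style hash links: `openssl x509 -hash -noout` prints the subject-name hash that OpenSSL expects as the link name (<hash>.0) under /etc/ssl/certs, e.g. b5213941.0 for minikubeCA.pem. Reproducing one link by hand:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"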
	I0501 02:57:28.706630    4712 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 02:57:28.714508    4712 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0501 02:57:28.714998    4712 kubeadm.go:928] updating node {m03 172.28.216.62 8443 v1.30.0 docker true true} ...
	I0501 02:57:28.715189    4712 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-136200-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.216.62
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP:172.28.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 02:57:28.715218    4712 kube-vip.go:111] generating kube-vip config ...
	I0501 02:57:28.727524    4712 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0501 02:57:28.767475    4712 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0501 02:57:28.767631    4712 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.28.223.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
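The generated kube-vip static pod runs leader election over the plndr-cp-lock Lease in kube-system; the elected member announces the HA address 172.28.223.254 via ARP on eth0 and, with lb_enable/lb_port set, load-balances API traffic across the control planes on 8443. Once the cluster is up, the current VIP holder can be read with:

    kubectl -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}'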
	I0501 02:57:28.783398    4712 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 02:57:28.801741    4712 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0501 02:57:28.815792    4712 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0501 02:57:28.837983    4712 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0501 02:57:28.838101    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0501 02:57:28.837983    4712 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256
	I0501 02:57:28.838226    4712 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256
	I0501 02:57:28.838396    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0501 02:57:28.855124    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:57:28.856182    4712 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0501 02:57:28.858128    4712 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0501 02:57:28.881905    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0501 02:57:28.881905    4712 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0501 02:57:28.882027    4712 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0501 02:57:28.882165    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0501 02:57:28.882277    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0501 02:57:28.898781    4712 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0501 02:57:28.959439    4712 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0501 02:57:28.959688    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
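Kubernetes binaries are not baked into the ISO: kubectl, kubeadm, and kubelet are fetched from dl.k8s.io with the published .sha256 file used as the checksum (the checksum=file: query above) and copied into /var/lib/minikube/binaries/v1.30.0. The same fetch-and-verify by hand:

    # download kubelet v1.30.0 and verify it against the published checksum
    curl -LO https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet
    curl -LO https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check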
	I0501 02:57:30.251192    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0501 02:57:30.272192    4712 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0501 02:57:30.311119    4712 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 02:57:30.353248    4712 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0501 02:57:30.407414    4712 ssh_runner.go:195] Run: grep 172.28.223.254	control-plane.minikube.internal$ /etc/hosts
	I0501 02:57:30.415360    4712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.223.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 02:57:30.454450    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:57:30.696464    4712 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 02:57:30.737201    4712 host.go:66] Checking if "ha-136200" exists ...
	I0501 02:57:30.801844    4712 start.go:316] joinCluster: &{Name:ha-136200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP:172.28.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.217.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.213.142 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.28.216.62 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:57:30.802126    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0501 02:57:30.802234    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:57:32.961923    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:57:32.961923    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:32.962279    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:57:35.600191    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:57:35.600191    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:35.601356    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
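The pair of PowerShell invocations above is how the hyperv driver locates a machine before dialing SSH: Hyper-V\Get-VM for the VM state, then the first address of the first network adapter for the IP. A sketch of the IP half (command string copied from the log; vmIP is a hypothetical helper, and trimming matters because PowerShell stdout ends with a newline):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

const ps = `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`

// vmIP asks Hyper-V for the first IP address on the VM's first NIC, exactly
// as the log lines above do.
func vmIP(vm string) (string, error) {
	cmd := exec.Command(ps, "-NoProfile", "-NonInteractive",
		fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
	out, err := cmd.Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	ip, err := vmIP("ha-136200")
	fmt.Println(ip, err)
}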
	I0501 02:57:35.838006    4712 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0": (5.0358438s)
	I0501 02:57:35.838006    4712 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.28.216.62 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0501 02:57:35.838006    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 3455nt.3c342oggoxvi06jc --discovery-token-ca-cert-hash sha256:4798dcaffc6298c78af5ad06745de18c1231853c65c9db4cd09ab1be96f5e875 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-136200-m03 --control-plane --apiserver-advertise-address=172.28.216.62 --apiserver-bind-port=8443"
	I0501 02:58:21.819619    4712 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 3455nt.3c342oggoxvi06jc --discovery-token-ca-cert-hash sha256:4798dcaffc6298c78af5ad06745de18c1231853c65c9db4cd09ab1be96f5e875 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-136200-m03 --control-plane --apiserver-advertise-address=172.28.216.62 --apiserver-bind-port=8443": (45.981274s)
	I0501 02:58:21.819619    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0501 02:58:22.593318    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-136200-m03 minikube.k8s.io/updated_at=2024_05_01T02_58_22_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e minikube.k8s.io/name=ha-136200 minikube.k8s.io/primary=false
	I0501 02:58:22.788566    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-136200-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0501 02:58:22.987611    4712 start.go:318] duration metric: took 52.1853822s to joinCluster
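The two kubectl runs just before this are the post-join finishing touches: label the node with minikube metadata, then delete the control-plane NoSchedule taint, where the trailing "-" on the taint name means "remove". Deleting the taint is what lets the new control-plane node also act as a worker (ControlPlane:true Worker:true in the config above). A sketch of the same pair of calls (finishJoin is hypothetical; minikube actually issues these over SSH through its ssh_runner):

package main

import "os/exec"

const kubectl = "/var/lib/minikube/binaries/v1.30.0/kubectl"

// finishJoin replays the two post-join commands from the log on the new node.
func finishJoin(node string) error {
	label := exec.Command("sudo", kubectl, "--kubeconfig=/var/lib/minikube/kubeconfig",
		"label", "--overwrite", "nodes", node, "minikube.k8s.io/primary=false")
	if err := label.Run(); err != nil {
		return err
	}
	// Trailing "-" deletes the taint so ordinary pods can schedule here too.
	taint := exec.Command("sudo", kubectl, "--kubeconfig=/var/lib/minikube/kubeconfig",
		"taint", "nodes", node, "node-role.kubernetes.io/control-plane:NoSchedule-")
	return taint.Run()
}

func main() { _ = finishJoin("ha-136200-m03") }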
	I0501 02:58:22.987895    4712 start.go:234] Will wait 6m0s for node &{Name:m03 IP:172.28.216.62 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0501 02:58:23.012496    4712 out.go:177] * Verifying Kubernetes components...
	I0501 02:58:22.988142    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:58:23.031751    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:58:23.569395    4712 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 02:58:23.619961    4712 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0501 02:58:23.620228    4712 kapi.go:59] client config for ha-136200: &rest.Config{Host:"https://172.28.223.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-136200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-136200\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1b95ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0501 02:58:23.620770    4712 kubeadm.go:477] Overriding stale ClientConfig host https://172.28.223.254:8443 with https://172.28.217.218:8443
	I0501 02:58:23.621670    4712 node_ready.go:35] waiting up to 6m0s for node "ha-136200-m03" to be "Ready" ...
	I0501 02:58:23.621910    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:23.621910    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:23.621982    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:23.621982    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:23.637444    4712 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0501 02:58:24.133658    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:24.133658    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:24.133658    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:24.133658    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:24.139465    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:24.622867    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:24.622867    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:24.622867    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:24.622867    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:24.629524    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:58:25.129429    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:25.129429    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:25.129510    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:25.129510    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:25.135754    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:58:25.633954    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:25.633954    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:25.633954    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:25.633954    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:25.638650    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:58:25.639656    4712 node_ready.go:53] node "ha-136200-m03" has status "Ready":"False"
	I0501 02:58:26.123894    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:26.123894    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:26.123894    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:26.123894    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:26.129103    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:26.629161    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:26.629161    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:26.629161    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:26.629161    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:26.648167    4712 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0501 02:58:27.136028    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:27.136028    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:27.136028    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:27.136028    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:27.326021    4712 round_trippers.go:574] Response Status: 200 OK in 189 milliseconds
	I0501 02:58:27.623480    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:27.623600    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:27.623600    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:27.623600    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:27.629035    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:28.136433    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:28.136433    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:28.136626    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:28.136626    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:28.203923    4712 round_trippers.go:574] Response Status: 200 OK in 67 milliseconds
	I0501 02:58:28.205553    4712 node_ready.go:53] node "ha-136200-m03" has status "Ready":"False"
	I0501 02:58:28.636021    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:28.636185    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:28.636185    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:28.636185    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:28.646735    4712 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0501 02:58:29.122451    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:29.122515    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:29.122515    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:29.122515    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:29.140826    4712 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0501 02:58:29.629756    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:29.629756    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:29.629756    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:29.629756    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:29.637588    4712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:58:30.132174    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:30.132269    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:30.132269    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:30.132269    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:30.136921    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:58:30.632939    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:30.633022    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:30.633022    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:30.633022    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:30.638815    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:30.640044    4712 node_ready.go:53] node "ha-136200-m03" has status "Ready":"False"
	I0501 02:58:31.133378    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:31.133378    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:31.133378    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:31.133378    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:31.138754    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:31.633444    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:31.633511    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:31.633511    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:31.633511    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:31.639686    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:58:32.131317    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:32.131317    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:32.131317    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:32.131317    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:32.136414    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:32.629649    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:32.629649    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:32.629649    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:32.629649    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:32.634980    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:33.129878    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:33.129878    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:33.129878    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:33.129878    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:33.155125    4712 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I0501 02:58:33.156557    4712 node_ready.go:53] node "ha-136200-m03" has status "Ready":"False"
	I0501 02:58:33.629865    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:33.630060    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:33.630060    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:33.630060    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:33.636368    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:58:34.128412    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:34.128412    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:34.128412    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:34.128412    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:34.133022    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:58:34.629333    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:34.629333    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:34.629333    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:34.629333    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:34.635358    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:58:35.129272    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:35.129376    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.129376    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.129376    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.136662    4712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:58:35.137446    4712 node_ready.go:49] node "ha-136200-m03" has status "Ready":"True"
	I0501 02:58:35.137492    4712 node_ready.go:38] duration metric: took 11.5157372s for node "ha-136200-m03" to be "Ready" ...
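The burst of GETs above is a plain readiness poll: one request roughly every 500 ms against /api/v1/nodes/ha-136200-m03 until the Ready condition turns True, which took 11.5 s here. Written against client-go, the same check might look like this (a sketch assuming a configured clientset; minikube's real loop in node_ready.go adds logging and retry nuances):

package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls every 500 ms (the cadence visible in the log) until
// the node's Ready condition is True or the timeout (6m0s above) expires.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient API errors as "not ready yet"
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {} // sketch only: wire up a clientset to use waitNodeReady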
	I0501 02:58:35.137492    4712 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 02:58:35.137635    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:58:35.137635    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.137635    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.137635    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.149133    4712 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0501 02:58:35.158917    4712 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2j8mj" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.159445    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2j8mj
	I0501 02:58:35.159565    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.159565    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.159651    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.170650    4712 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0501 02:58:35.172026    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:35.172026    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.172026    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.172026    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.180770    4712 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0501 02:58:35.180770    4712 pod_ready.go:92] pod "coredns-7db6d8ff4d-2j8mj" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:35.180770    4712 pod_ready.go:81] duration metric: took 21.3241ms for pod "coredns-7db6d8ff4d-2j8mj" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.180770    4712 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rm4gm" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.180770    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rm4gm
	I0501 02:58:35.180770    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.180770    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.180770    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.185805    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:35.187056    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:35.187056    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.187056    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.187056    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.191361    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:58:35.192405    4712 pod_ready.go:92] pod "coredns-7db6d8ff4d-rm4gm" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:35.192405    4712 pod_ready.go:81] duration metric: took 11.6358ms for pod "coredns-7db6d8ff4d-rm4gm" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.192405    4712 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.192405    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200
	I0501 02:58:35.192405    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.192405    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.192405    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.196117    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:58:35.197312    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:35.197312    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.197389    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.197389    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.201195    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:58:35.201924    4712 pod_ready.go:92] pod "etcd-ha-136200" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:35.201924    4712 pod_ready.go:81] duration metric: took 9.5188ms for pod "etcd-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.201924    4712 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.202054    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:58:35.202195    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.202195    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.202195    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.208450    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:58:35.209323    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:58:35.209323    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.209323    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.209323    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.212935    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:58:35.214190    4712 pod_ready.go:92] pod "etcd-ha-136200-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:35.214190    4712 pod_ready.go:81] duration metric: took 12.2652ms for pod "etcd-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.214190    4712 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-136200-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.330301    4712 request.go:629] Waited for 115.8713ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m03
	I0501 02:58:35.330574    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m03
	I0501 02:58:35.330574    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.330574    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.330574    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.338021    4712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:58:35.534070    4712 request.go:629] Waited for 194.5208ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:35.534353    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:35.534353    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.534353    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.534353    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.540932    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:58:35.541927    4712 pod_ready.go:92] pod "etcd-ha-136200-m03" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:35.541927    4712 pod_ready.go:81] duration metric: took 327.673ms for pod "etcd-ha-136200-m03" in "kube-system" namespace to be "Ready" ...
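The recurring "Waited ... due to client-side throttling, not priority and fairness" lines come from client-go's own token-bucket limiter, not from the API server. The rest.Config dumped earlier shows QPS:0 and Burst:0, which means the client defaults (5 requests/s, burst 10) apply, so the alternating pod and node GETs queue for roughly 200 ms apiece. If a poller like this needed to go faster, the limits can be raised on the config before building the clientset, sketched here with the kubeconfig path taken from the log:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("",
		`C:\Users\jenkins.minikube6\minikube-integration\kubeconfig`)
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // client-go default is 5 when left at 0
	cfg.Burst = 100 // client-go default is 10 when left at 0
	cs, err := kubernetes.NewForConfig(cfg)
	fmt.Println(cs != nil, err)
}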
	I0501 02:58:35.541927    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.737879    4712 request.go:629] Waited for 195.951ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-136200
	I0501 02:58:35.738683    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-136200
	I0501 02:58:35.738683    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.738683    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.738683    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.743861    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:58:35.940254    4712 request.go:629] Waited for 195.0246ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:35.940254    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:35.940254    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.940254    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.940254    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.943091    4712 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:58:35.949355    4712 pod_ready.go:92] pod "kube-apiserver-ha-136200" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:35.949355    4712 pod_ready.go:81] duration metric: took 407.425ms for pod "kube-apiserver-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.949355    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:36.143537    4712 request.go:629] Waited for 193.9374ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-136200-m02
	I0501 02:58:36.143801    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-136200-m02
	I0501 02:58:36.143835    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:36.143835    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:36.143835    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:36.149992    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:36.331653    4712 request.go:629] Waited for 180.2785ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:58:36.331653    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:58:36.331653    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:36.331653    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:36.331653    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:36.337290    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:36.338458    4712 pod_ready.go:92] pod "kube-apiserver-ha-136200-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:36.338521    4712 pod_ready.go:81] duration metric: took 389.1629ms for pod "kube-apiserver-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:36.338521    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-136200-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:36.533514    4712 request.go:629] Waited for 194.8709ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-136200-m03
	I0501 02:58:36.533967    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-136200-m03
	I0501 02:58:36.534181    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:36.534181    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:36.534181    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:36.548236    4712 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0501 02:58:36.737561    4712 request.go:629] Waited for 188.1304ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:36.737864    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:36.737942    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:36.737942    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:36.738002    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:36.742410    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:58:36.743400    4712 pod_ready.go:92] pod "kube-apiserver-ha-136200-m03" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:36.743400    4712 pod_ready.go:81] duration metric: took 404.8131ms for pod "kube-apiserver-ha-136200-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:36.743400    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:36.942223    4712 request.go:629] Waited for 198.605ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-136200
	I0501 02:58:36.942445    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-136200
	I0501 02:58:36.942445    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:36.942445    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:36.942445    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:36.947749    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:37.131974    4712 request.go:629] Waited for 183.3149ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:37.132232    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:37.132323    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:37.132323    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:37.132323    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:37.137476    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:37.138446    4712 pod_ready.go:92] pod "kube-controller-manager-ha-136200" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:37.138446    4712 pod_ready.go:81] duration metric: took 395.044ms for pod "kube-controller-manager-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:37.138446    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:37.333778    4712 request.go:629] Waited for 195.2258ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-136200-m02
	I0501 02:58:37.334044    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-136200-m02
	I0501 02:58:37.334044    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:37.334044    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:37.334044    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:37.338704    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:58:37.538179    4712 request.go:629] Waited for 197.0874ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:58:37.538437    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:58:37.538500    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:37.538500    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:37.538500    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:37.544773    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:58:37.544773    4712 pod_ready.go:92] pod "kube-controller-manager-ha-136200-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:37.544773    4712 pod_ready.go:81] duration metric: took 406.3235ms for pod "kube-controller-manager-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:37.544773    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-136200-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:37.743876    4712 request.go:629] Waited for 199.1018ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-136200-m03
	I0501 02:58:37.744106    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-136200-m03
	I0501 02:58:37.744106    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:37.744106    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:37.744106    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:37.749628    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:37.931954    4712 request.go:629] Waited for 180.0772ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:37.932054    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:37.932132    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:37.932132    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:37.932132    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:37.937302    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:37.937875    4712 pod_ready.go:92] pod "kube-controller-manager-ha-136200-m03" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:37.937875    4712 pod_ready.go:81] duration metric: took 393.0991ms for pod "kube-controller-manager-ha-136200-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:37.937875    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8f67k" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:38.134928    4712 request.go:629] Waited for 196.7268ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8f67k
	I0501 02:58:38.134928    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8f67k
	I0501 02:58:38.135164    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:38.135164    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:38.135164    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:38.151320    4712 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0501 02:58:38.340422    4712 request.go:629] Waited for 186.7144ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:38.340523    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:38.340523    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:38.340523    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:38.340523    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:38.344835    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:58:38.346933    4712 pod_ready.go:92] pod "kube-proxy-8f67k" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:38.347124    4712 pod_ready.go:81] duration metric: took 409.2461ms for pod "kube-proxy-8f67k" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:38.347124    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9ml9x" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:38.529397    4712 request.go:629] Waited for 182.0139ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9ml9x
	I0501 02:58:38.529683    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9ml9x
	I0501 02:58:38.529776    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:38.529776    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:38.529776    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:38.535530    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:38.733704    4712 request.go:629] Waited for 197.3369ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:38.733854    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:38.733854    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:38.733854    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:38.733854    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:38.739456    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:38.741035    4712 pod_ready.go:92] pod "kube-proxy-9ml9x" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:38.741035    4712 pod_ready.go:81] duration metric: took 393.9082ms for pod "kube-proxy-9ml9x" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:38.741141    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zj5jv" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:38.936294    4712 request.go:629] Waited for 194.9804ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zj5jv
	I0501 02:58:38.936492    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zj5jv
	I0501 02:58:38.936492    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:38.936492    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:38.936492    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:38.941904    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:39.139076    4712 request.go:629] Waited for 195.5675ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:58:39.139516    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:58:39.139516    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:39.139516    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:39.139590    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:39.146156    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:58:39.146839    4712 pod_ready.go:92] pod "kube-proxy-zj5jv" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:39.147389    4712 pod_ready.go:81] duration metric: took 406.2452ms for pod "kube-proxy-zj5jv" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:39.147389    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:39.331771    4712 request.go:629] Waited for 183.3466ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200
	I0501 02:58:39.331839    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200
	I0501 02:58:39.331839    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:39.331839    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:39.331839    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:39.338962    4712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:58:39.529638    4712 request.go:629] Waited for 189.8551ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:39.529880    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:39.529880    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:39.529880    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:39.529880    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:39.535423    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:39.536281    4712 pod_ready.go:92] pod "kube-scheduler-ha-136200" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:39.536496    4712 pod_ready.go:81] duration metric: took 389.1041ms for pod "kube-scheduler-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:39.536496    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:39.733532    4712 request.go:629] Waited for 196.8225ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200-m02
	I0501 02:58:39.733532    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200-m02
	I0501 02:58:39.733755    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:39.733755    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:39.733755    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:39.738768    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:39.936556    4712 request.go:629] Waited for 196.8464ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:58:39.936957    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:58:39.936957    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:39.936957    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:39.937066    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:39.942275    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:39.942447    4712 pod_ready.go:92] pod "kube-scheduler-ha-136200-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:39.943009    4712 pod_ready.go:81] duration metric: took 406.5101ms for pod "kube-scheduler-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:39.943009    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-136200-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:40.137743    4712 request.go:629] Waited for 194.2926ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200-m03
	I0501 02:58:40.137974    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200-m03
	I0501 02:58:40.137974    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:40.138045    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:40.138045    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:40.143795    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:40.340161    4712 request.go:629] Waited for 194.6485ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:40.340307    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:40.340307    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:40.340368    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:40.340368    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:40.346127    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:40.347243    4712 pod_ready.go:92] pod "kube-scheduler-ha-136200-m03" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:40.347243    4712 pod_ready.go:81] duration metric: took 404.2307ms for pod "kube-scheduler-ha-136200-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:40.347243    4712 pod_ready.go:38] duration metric: took 5.2097122s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 02:58:40.347243    4712 api_server.go:52] waiting for apiserver process to appear ...
	I0501 02:58:40.361809    4712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:58:40.399669    4712 api_server.go:72] duration metric: took 17.4115847s to wait for apiserver process to appear ...
	I0501 02:58:40.399766    4712 api_server.go:88] waiting for apiserver healthz status ...
	I0501 02:58:40.399822    4712 api_server.go:253] Checking apiserver healthz at https://172.28.217.218:8443/healthz ...
	I0501 02:58:40.410080    4712 api_server.go:279] https://172.28.217.218:8443/healthz returned 200:
	ok
	I0501 02:58:40.410375    4712 round_trippers.go:463] GET https://172.28.217.218:8443/version
	I0501 02:58:40.410503    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:40.410503    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:40.410503    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:40.412638    4712 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:58:40.413725    4712 api_server.go:141] control plane version: v1.30.0
	I0501 02:58:40.413725    4712 api_server.go:131] duration metric: took 13.9591ms to wait for apiserver health ...
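The health gate above is two plain HTTPS calls through the kubeconfig's TLS transport: GET /healthz, healthy iff the response is 200 with body "ok", then GET /version to record the control-plane version (v1.30.0). A minimal sketch of the healthz half (healthz is a hypothetical helper; assumes a *rest.Config is already loaded, and the retry/backoff that api_server.go performs is omitted):

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"

	"k8s.io/client-go/rest"
)

// healthz probes <apiserver>/healthz using the kubeconfig's client certs.
func healthz(cfg *rest.Config) (bool, error) {
	rt, err := rest.TransportFor(cfg)
	if err != nil {
		return false, err
	}
	client := &http.Client{Transport: rt, Timeout: 5 * time.Second}
	resp, err := client.Get(cfg.Host + "/healthz")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() { fmt.Println("sketch only; build a *rest.Config to call healthz") }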
	I0501 02:58:40.413725    4712 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 02:58:40.543767    4712 request.go:629] Waited for 129.9651ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:58:40.543975    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:58:40.543975    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:40.543975    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:40.543975    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:40.554206    4712 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0501 02:58:40.565423    4712 system_pods.go:59] 24 kube-system pods found
	I0501 02:58:40.565423    4712 system_pods.go:61] "coredns-7db6d8ff4d-2j8mj" [f945c979-ae51-4c8e-acf9-105adc3c83bc] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "coredns-7db6d8ff4d-rm4gm" [87b284b3-e8e1-452a-8c72-41a8bec62505] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "etcd-ha-136200" [509a726d-e9a1-4922-8e7e-f3d91ddef75f] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "etcd-ha-136200-m02" [8122eb28-1fdf-4ddf-ab30-c29e8bcb83c0] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "etcd-ha-136200-m03" [5f77fdbc-d14d-4d42-9880-fc7e5b2c58b8] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kindnet-kb2x4" [6e660648-3dce-469f-a2c2-c99f445ceb20] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kindnet-rlfkk" [ae08f4b9-98a8-4faf-ab4a-f04e900375bf] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kindnet-sj2rc" [c0e605a0-1182-4977-a8ba-fabe9617bd3c] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kube-apiserver-ha-136200" [53ea7d41-7132-4c89-9dbd-bedb2267b55f] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kube-apiserver-ha-136200-m02" [fc4036e1-5cc9-4f27-8299-97ee4a29e8b4] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kube-apiserver-ha-136200-m03" [cf2822d7-29da-4727-b4c1-19b593abbce8] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kube-controller-manager-ha-136200" [4c988ab2-e056-4a0e-88c9-b62839c84d9f] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kube-controller-manager-ha-136200-m02" [7a617a7e-7413-4f42-bfe2-763b7ace71ca] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kube-controller-manager-ha-136200-m03" [f72989a2-322b-4b6d-884f-8888b7fb6e36] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kube-proxy-8f67k" [9dedea03-3066-4852-98e2-10190699b2c5] Running
	I0501 02:58:40.566039    4712 system_pods.go:61] "kube-proxy-9ml9x" [c36f2b4f-ad90-4070-adf1-1ac165f86fdd] Running
	I0501 02:58:40.566039    4712 system_pods.go:61] "kube-proxy-zj5jv" [1802b341-6ac6-46b0-99a3-db02ae5d8e46] Running
	I0501 02:58:40.566039    4712 system_pods.go:61] "kube-scheduler-ha-136200" [6be37365-544a-4367-9852-6eaa5b60e6ad] Running
	I0501 02:58:40.566039    4712 system_pods.go:61] "kube-scheduler-ha-136200-m02" [b2ae6bb2-989b-4598-99e3-f8494b006f3e] Running
	I0501 02:58:40.566039    4712 system_pods.go:61] "kube-scheduler-ha-136200-m03" [79e48699-dd30-47da-8e29-685b9fb437b8] Running
	I0501 02:58:40.566039    4712 system_pods.go:61] "kube-vip-ha-136200" [f6f631ac-0ba9-413a-8810-8a80e4be81b8] Running
	I0501 02:58:40.566039    4712 system_pods.go:61] "kube-vip-ha-136200-m02" [598e76fa-0703-40eb-a62c-f3947f06d0e0] Running
	I0501 02:58:40.566039    4712 system_pods.go:61] "kube-vip-ha-136200-m03" [a1bd8449-1900-4366-86a5-49e758a44ebd] Running
	I0501 02:58:40.566039    4712 system_pods.go:61] "storage-provisioner" [ca2bdb41-e7f0-4aa4-b343-0e3a85a4c04e] Running
	I0501 02:58:40.566039    4712 system_pods.go:74] duration metric: took 152.3128ms to wait for pod list to return data ...
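The system_pods check above is a single List call against kube-system followed by a per-pod status scan; with all 24 pods Running it completes in one round trip. A sketch of the same verification (allSystemPodsRunning is hypothetical; minikube's version also matches pods against expected label sets):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// allSystemPodsRunning lists kube-system once and verifies every pod
// reports phase Running.
func allSystemPodsRunning(ctx context.Context, cs kubernetes.Interface) (bool, error) {
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return false, err
	}
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning {
			fmt.Printf("%q not running: %s\n", p.Name, p.Status.Phase)
			return false, nil
		}
	}
	return true, nil
}

func main() {} // sketch only: supply a clientset to call allSystemPodsRunning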
	I0501 02:58:40.566039    4712 default_sa.go:34] waiting for default service account to be created ...
	I0501 02:58:40.731110    4712 request.go:629] Waited for 164.8435ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/default/serviceaccounts
	I0501 02:58:40.731110    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/default/serviceaccounts
	I0501 02:58:40.731110    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:40.731110    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:40.731110    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:40.736937    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:40.737529    4712 default_sa.go:45] found service account: "default"
	I0501 02:58:40.737568    4712 default_sa.go:55] duration metric: took 171.5277ms for default service account to be created ...
	I0501 02:58:40.737568    4712 system_pods.go:116] waiting for k8s-apps to be running ...
	I0501 02:58:40.936328    4712 request.go:629] Waited for 198.4062ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:58:40.936390    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:58:40.936390    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:40.936390    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:40.936390    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:40.946796    4712 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0501 02:58:40.961809    4712 system_pods.go:86] 24 kube-system pods found
	I0501 02:58:40.961809    4712 system_pods.go:89] "coredns-7db6d8ff4d-2j8mj" [f945c979-ae51-4c8e-acf9-105adc3c83bc] Running
	I0501 02:58:40.961809    4712 system_pods.go:89] "coredns-7db6d8ff4d-rm4gm" [87b284b3-e8e1-452a-8c72-41a8bec62505] Running
	I0501 02:58:40.961809    4712 system_pods.go:89] "etcd-ha-136200" [509a726d-e9a1-4922-8e7e-f3d91ddef75f] Running
	I0501 02:58:40.961809    4712 system_pods.go:89] "etcd-ha-136200-m02" [8122eb28-1fdf-4ddf-ab30-c29e8bcb83c0] Running
	I0501 02:58:40.961809    4712 system_pods.go:89] "etcd-ha-136200-m03" [5f77fdbc-d14d-4d42-9880-fc7e5b2c58b8] Running
	I0501 02:58:40.961809    4712 system_pods.go:89] "kindnet-kb2x4" [6e660648-3dce-469f-a2c2-c99f445ceb20] Running
	I0501 02:58:40.961809    4712 system_pods.go:89] "kindnet-rlfkk" [ae08f4b9-98a8-4faf-ab4a-f04e900375bf] Running
	I0501 02:58:40.961809    4712 system_pods.go:89] "kindnet-sj2rc" [c0e605a0-1182-4977-a8ba-fabe9617bd3c] Running
	I0501 02:58:40.961809    4712 system_pods.go:89] "kube-apiserver-ha-136200" [53ea7d41-7132-4c89-9dbd-bedb2267b55f] Running
	I0501 02:58:40.961809    4712 system_pods.go:89] "kube-apiserver-ha-136200-m02" [fc4036e1-5cc9-4f27-8299-97ee4a29e8b4] Running
	I0501 02:58:40.962364    4712 system_pods.go:89] "kube-apiserver-ha-136200-m03" [cf2822d7-29da-4727-b4c1-19b593abbce8] Running
	I0501 02:58:40.962364    4712 system_pods.go:89] "kube-controller-manager-ha-136200" [4c988ab2-e056-4a0e-88c9-b62839c84d9f] Running
	I0501 02:58:40.962364    4712 system_pods.go:89] "kube-controller-manager-ha-136200-m02" [7a617a7e-7413-4f42-bfe2-763b7ace71ca] Running
	I0501 02:58:40.962364    4712 system_pods.go:89] "kube-controller-manager-ha-136200-m03" [f72989a2-322b-4b6d-884f-8888b7fb6e36] Running
	I0501 02:58:40.962364    4712 system_pods.go:89] "kube-proxy-8f67k" [9dedea03-3066-4852-98e2-10190699b2c5] Running
	I0501 02:58:40.962364    4712 system_pods.go:89] "kube-proxy-9ml9x" [c36f2b4f-ad90-4070-adf1-1ac165f86fdd] Running
	I0501 02:58:40.962434    4712 system_pods.go:89] "kube-proxy-zj5jv" [1802b341-6ac6-46b0-99a3-db02ae5d8e46] Running
	I0501 02:58:40.962434    4712 system_pods.go:89] "kube-scheduler-ha-136200" [6be37365-544a-4367-9852-6eaa5b60e6ad] Running
	I0501 02:58:40.962434    4712 system_pods.go:89] "kube-scheduler-ha-136200-m02" [b2ae6bb2-989b-4598-99e3-f8494b006f3e] Running
	I0501 02:58:40.962434    4712 system_pods.go:89] "kube-scheduler-ha-136200-m03" [79e48699-dd30-47da-8e29-685b9fb437b8] Running
	I0501 02:58:40.962434    4712 system_pods.go:89] "kube-vip-ha-136200" [f6f631ac-0ba9-413a-8810-8a80e4be81b8] Running
	I0501 02:58:40.962434    4712 system_pods.go:89] "kube-vip-ha-136200-m02" [598e76fa-0703-40eb-a62c-f3947f06d0e0] Running
	I0501 02:58:40.962434    4712 system_pods.go:89] "kube-vip-ha-136200-m03" [a1bd8449-1900-4366-86a5-49e758a44ebd] Running
	I0501 02:58:40.962497    4712 system_pods.go:89] "storage-provisioner" [ca2bdb41-e7f0-4aa4-b343-0e3a85a4c04e] Running
	I0501 02:58:40.962521    4712 system_pods.go:126] duration metric: took 224.9515ms to wait for k8s-apps to be running ...
	I0501 02:58:40.962521    4712 system_svc.go:44] waiting for kubelet service to be running ....
	I0501 02:58:40.975583    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:58:41.007354    4712 system_svc.go:56] duration metric: took 44.7618ms WaitForService to wait for kubelet
	I0501 02:58:41.007354    4712 kubeadm.go:576] duration metric: took 18.0193266s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 02:58:41.007354    4712 node_conditions.go:102] verifying NodePressure condition ...
	I0501 02:58:41.140806    4712 request.go:629] Waited for 133.382ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes
	I0501 02:58:41.140922    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes
	I0501 02:58:41.140964    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:41.140964    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:41.141046    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:41.151428    4712 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0501 02:58:41.153995    4712 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 02:58:41.154053    4712 node_conditions.go:123] node cpu capacity is 2
	I0501 02:58:41.154053    4712 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 02:58:41.154113    4712 node_conditions.go:123] node cpu capacity is 2
	I0501 02:58:41.154113    4712 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 02:58:41.154113    4712 node_conditions.go:123] node cpu capacity is 2
	I0501 02:58:41.154113    4712 node_conditions.go:105] duration metric: took 146.7575ms to run NodePressure ...
	I0501 02:58:41.154113    4712 start.go:240] waiting for startup goroutines ...
	I0501 02:58:41.154113    4712 start.go:254] writing updated cluster config ...
	I0501 02:58:41.168562    4712 ssh_runner.go:195] Run: rm -f paused
	I0501 02:58:41.321592    4712 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0501 02:58:41.326673    4712 out.go:177] * Done! kubectl is now configured to use "ha-136200" cluster and "default" namespace by default
	
	
	==> Docker <==
	May 01 03:00:25 ha-136200 dockerd[1329]: 2024/05/01 03:00:25 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:04:47 ha-136200 dockerd[1329]: 2024/05/01 03:04:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:04:47 ha-136200 dockerd[1329]: 2024/05/01 03:04:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:04:47 ha-136200 dockerd[1329]: 2024/05/01 03:04:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:04:47 ha-136200 dockerd[1329]: 2024/05/01 03:04:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:04:47 ha-136200 dockerd[1329]: 2024/05/01 03:04:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:04:48 ha-136200 dockerd[1329]: 2024/05/01 03:04:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:04:48 ha-136200 dockerd[1329]: 2024/05/01 03:04:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:04:48 ha-136200 dockerd[1329]: 2024/05/01 03:04:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:06:41 ha-136200 dockerd[1329]: 2024/05/01 03:06:41 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:06:41 ha-136200 dockerd[1329]: 2024/05/01 03:06:41 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:06:41 ha-136200 dockerd[1329]: 2024/05/01 03:06:41 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:06:42 ha-136200 dockerd[1329]: 2024/05/01 03:06:42 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:06:42 ha-136200 dockerd[1329]: 2024/05/01 03:06:42 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:06:42 ha-136200 dockerd[1329]: 2024/05/01 03:06:42 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:06:42 ha-136200 dockerd[1329]: 2024/05/01 03:06:42 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:06:42 ha-136200 dockerd[1329]: 2024/05/01 03:06:42 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:08:32 ha-136200 dockerd[1329]: 2024/05/01 03:08:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:08:32 ha-136200 dockerd[1329]: 2024/05/01 03:08:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:08:32 ha-136200 dockerd[1329]: 2024/05/01 03:08:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:08:32 ha-136200 dockerd[1329]: 2024/05/01 03:08:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:08:32 ha-136200 dockerd[1329]: 2024/05/01 03:08:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:08:32 ha-136200 dockerd[1329]: 2024/05/01 03:08:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:08:32 ha-136200 dockerd[1329]: 2024/05/01 03:08:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 03:08:32 ha-136200 dockerd[1329]: 2024/05/01 03:08:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bb23816e7b6b8       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   14 minutes ago      Running             busybox                   0                   c61d49828a30c       busybox-fc5497c4f-6mlkh
	229343dc7dba5       cbb01a7bd410d                                                                                         23 minutes ago      Running             coredns                   0                   54bbf0662d422       coredns-7db6d8ff4d-rm4gm
	247f815bf0531       6e38f40d628db                                                                                         23 minutes ago      Running             storage-provisioner       0                   aaa3d1f50041e       storage-provisioner
	822aaf8c270e3       cbb01a7bd410d                                                                                         23 minutes ago      Running             coredns                   0                   cadf8314e6ab7       coredns-7db6d8ff4d-2j8mj
	c09511b7df643       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              23 minutes ago      Running             kindnet-cni               0                   bdd01e6cca1ed       kindnet-sj2rc
	562cd55ab9702       a0bf559e280cf                                                                                         23 minutes ago      Running             kube-proxy                0                   579e0dba427c2       kube-proxy-8f67k
	1c063bfe224cd       ghcr.io/kube-vip/kube-vip@sha256:82698885b3b5f926cd940b7000549f3d43850cb6565a708162900c1475a83016     23 minutes ago      Running             kube-vip                  0                   7f28f99b3c8a8       kube-vip-ha-136200
	b6454ceb34cad       259c8277fcbbc                                                                                         23 minutes ago      Running             kube-scheduler            0                   e6cf1f3e651b3       kube-scheduler-ha-136200
	8ff4bf0570939       c42f13656d0b2                                                                                         23 minutes ago      Running             kube-apiserver            0                   2455e947d4906       kube-apiserver-ha-136200
	8fa3aa565b366       c7aad43836fa5                                                                                         23 minutes ago      Running             kube-controller-manager   0                   c7e42fd34e247       kube-controller-manager-ha-136200
	8b0d01885db55       3861cfcd7c04c                                                                                         23 minutes ago      Running             etcd                      0                   da46759fd8e15       etcd-ha-136200
	
	
	==> coredns [229343dc7dba] <==
	[INFO] 10.244.1.2:38893 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.138771945s
	[INFO] 10.244.1.2:42460 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000276501s
	[INFO] 10.244.1.2:46275 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.0000672s
	[INFO] 10.244.2.2:34687 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.040099987s
	[INFO] 10.244.2.2:56378 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000284202s
	[INFO] 10.244.2.2:56092 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000345802s
	[INFO] 10.244.2.2:52745 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000349302s
	[INFO] 10.244.2.2:34736 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000095201s
	[INFO] 10.244.0.4:51567 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000267102s
	[INFO] 10.244.0.4:33148 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000178701s
	[INFO] 10.244.1.2:43398 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000089301s
	[INFO] 10.244.1.2:52211 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001122s
	[INFO] 10.244.1.2:35470 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.013228661s
	[INFO] 10.244.1.2:40781 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000174701s
	[INFO] 10.244.1.2:45257 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000274201s
	[INFO] 10.244.1.2:36114 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000165601s
	[INFO] 10.244.2.2:56600 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000371102s
	[INFO] 10.244.2.2:39742 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000250502s
	[INFO] 10.244.0.4:45933 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116901s
	[INFO] 10.244.0.4:53681 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000082001s
	[INFO] 10.244.2.2:38830 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000232701s
	[INFO] 10.244.0.4:51196 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.001489507s
	[INFO] 10.244.0.4:58773 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000264301s
	[INFO] 10.244.0.4:43314 - 5 "PTR IN 1.208.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.013461063s
	[INFO] 10.244.1.2:41778 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000092301s
	
	
	==> coredns [822aaf8c270e] <==
	[INFO] 10.244.2.2:41813 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000217501s
	[INFO] 10.244.2.2:54888 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.032885853s
	[INFO] 10.244.0.4:49712 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126101s
	[INFO] 10.244.0.4:55974 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012564658s
	[INFO] 10.244.0.4:45253 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000139901s
	[INFO] 10.244.0.4:60045 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001515s
	[INFO] 10.244.0.4:39879 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000175501s
	[INFO] 10.244.0.4:42089 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000310501s
	[INFO] 10.244.1.2:53821 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111101s
	[INFO] 10.244.1.2:42651 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000116201s
	[INFO] 10.244.2.2:34505 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078s
	[INFO] 10.244.2.2:54873 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001606s
	[INFO] 10.244.0.4:60573 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001105s
	[INFO] 10.244.0.4:37086 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000727s
	[INFO] 10.244.1.2:52370 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123901s
	[INFO] 10.244.1.2:35190 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000277501s
	[INFO] 10.244.1.2:42611 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000158301s
	[INFO] 10.244.1.2:36993 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000280201s
	[INFO] 10.244.2.2:52181 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000206701s
	[INFO] 10.244.2.2:37229 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000092101s
	[INFO] 10.244.2.2:56027 - 5 "PTR IN 1.208.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001251s
	[INFO] 10.244.0.4:55246 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000211601s
	[INFO] 10.244.1.2:57784 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000270801s
	[INFO] 10.244.1.2:39482 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001183s
	[INFO] 10.244.1.2:53277 - 5 "PTR IN 1.208.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000078801s
	
	
	==> describe nodes <==
	Name:               ha-136200
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-136200
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	                    minikube.k8s.io/name=ha-136200
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_01T02_50_30_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 May 2024 02:50:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-136200
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 May 2024 03:14:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 May 2024 03:09:44 +0000   Wed, 01 May 2024 02:50:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 May 2024 03:09:44 +0000   Wed, 01 May 2024 02:50:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 May 2024 03:09:44 +0000   Wed, 01 May 2024 02:50:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 May 2024 03:09:44 +0000   Wed, 01 May 2024 02:50:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.217.218
	  Hostname:    ha-136200
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 bd5a02b3729c454c81fac1ddb77470ea
	  System UUID:                feb48805-7018-ee45-9dd1-70d50cb8dabe
	  Boot ID:                    f931e3ee-8c2d-4859-8d97-8671a4247530
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-6mlkh              0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 coredns-7db6d8ff4d-2j8mj             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	  kube-system                 coredns-7db6d8ff4d-rm4gm             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	  kube-system                 etcd-ha-136200                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         23m
	  kube-system                 kindnet-sj2rc                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	  kube-system                 kube-apiserver-ha-136200             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-ha-136200    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-8f67k                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-ha-136200             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-vip-ha-136200                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23m                kube-proxy       
	  Normal  NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node ha-136200 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 23m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node ha-136200 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node ha-136200 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 23m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23m                kubelet          Node ha-136200 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m                kubelet          Node ha-136200 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m                kubelet          Node ha-136200 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           23m                node-controller  Node ha-136200 event: Registered Node ha-136200 in Controller
	  Normal  NodeReady                23m                kubelet          Node ha-136200 status is now: NodeReady
	  Normal  RegisteredNode           19m                node-controller  Node ha-136200 event: Registered Node ha-136200 in Controller
	  Normal  RegisteredNode           15m                node-controller  Node ha-136200 event: Registered Node ha-136200 in Controller
	
	
	Name:               ha-136200-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-136200-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	                    minikube.k8s.io/name=ha-136200
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_01T02_54_28_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 May 2024 02:54:21 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-136200-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 May 2024 03:07:06 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 01 May 2024 03:04:35 +0000   Wed, 01 May 2024 03:07:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 01 May 2024 03:04:35 +0000   Wed, 01 May 2024 03:07:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 01 May 2024 03:04:35 +0000   Wed, 01 May 2024 03:07:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 01 May 2024 03:04:35 +0000   Wed, 01 May 2024 03:07:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.28.213.142
	  Hostname:    ha-136200-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 b20b8a63378b4be990a267d65bc5017b
	  System UUID:                f54ef658-ded9-8245-9d86-fec94474eff5
	  Boot ID:                    b6a9b4fd-1abd-4ef4-a3a8-bc0c39ab4624
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-pc6wt                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 etcd-ha-136200-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         19m
	  kube-system                 kindnet-kb2x4                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-apiserver-ha-136200-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-ha-136200-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-zj5jv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-ha-136200-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-vip-ha-136200-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  RegisteredNode           19m                node-controller  Node ha-136200-m02 event: Registered Node ha-136200-m02 in Controller
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node ha-136200-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node ha-136200-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node ha-136200-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node ha-136200-m02 event: Registered Node ha-136200-m02 in Controller
	  Normal  RegisteredNode           15m                node-controller  Node ha-136200-m02 event: Registered Node ha-136200-m02 in Controller
	  Normal  NodeNotReady             6m22s              node-controller  Node ha-136200-m02 status is now: NodeNotReady
	
	
	Name:               ha-136200-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-136200-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	                    minikube.k8s.io/name=ha-136200
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_01T02_58_22_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 May 2024 02:58:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-136200-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 May 2024 03:14:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 May 2024 03:09:57 +0000   Wed, 01 May 2024 02:58:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 May 2024 03:09:57 +0000   Wed, 01 May 2024 02:58:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 May 2024 03:09:57 +0000   Wed, 01 May 2024 02:58:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 May 2024 03:09:57 +0000   Wed, 01 May 2024 02:58:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.216.62
	  Hostname:    ha-136200-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 352997c1e27d48bb8dff5ae5f17e228a
	  System UUID:                0e4a669f-6d5f-be47-a143-5d2db1558741
	  Boot ID:                    8ce378d2-4a7e-40de-aab0-8bc599c3d157
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-2gr4g                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 etcd-ha-136200-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-rlfkk                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-136200-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-136200-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-9ml9x                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-136200-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-136200-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node ha-136200-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node ha-136200-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node ha-136200-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           15m                node-controller  Node ha-136200-m03 event: Registered Node ha-136200-m03 in Controller
	  Normal  RegisteredNode           15m                node-controller  Node ha-136200-m03 event: Registered Node ha-136200-m03 in Controller
	  Normal  RegisteredNode           15m                node-controller  Node ha-136200-m03 event: Registered Node ha-136200-m03 in Controller
	
	
	==> dmesg <==
	[  +7.445343] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[May 1 02:49] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.218573] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[ +31.318095] systemd-fstab-generator[950]: Ignoring "noauto" option for root device
	[  +0.121878] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.646066] systemd-fstab-generator[989]: Ignoring "noauto" option for root device
	[  +0.241331] systemd-fstab-generator[1001]: Ignoring "noauto" option for root device
	[  +0.276456] systemd-fstab-generator[1015]: Ignoring "noauto" option for root device
	[  +2.872310] systemd-fstab-generator[1184]: Ignoring "noauto" option for root device
	[  +0.245693] systemd-fstab-generator[1196]: Ignoring "noauto" option for root device
	[  +0.234055] systemd-fstab-generator[1209]: Ignoring "noauto" option for root device
	[  +0.318386] systemd-fstab-generator[1224]: Ignoring "noauto" option for root device
	[May 1 02:50] systemd-fstab-generator[1321]: Ignoring "noauto" option for root device
	[  +0.117675] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.894847] systemd-fstab-generator[1526]: Ignoring "noauto" option for root device
	[  +6.744854] systemd-fstab-generator[1728]: Ignoring "noauto" option for root device
	[  +0.118239] kauditd_printk_skb: 73 callbacks suppressed
	[  +6.246999] kauditd_printk_skb: 67 callbacks suppressed
	[  +4.464074] systemd-fstab-generator[2223]: Ignoring "noauto" option for root device
	[ +14.473066] kauditd_printk_skb: 17 callbacks suppressed
	[  +7.151247] kauditd_printk_skb: 29 callbacks suppressed
	[May 1 02:54] kauditd_printk_skb: 26 callbacks suppressed
	[May 1 03:02] hrtimer: interrupt took 2691714 ns
	
	
	==> etcd [8b0d01885db5] <==
	{"level":"warn","ts":"2024-05-01T03:14:09.363643Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d5cb0dbd3e937195","from":"d5cb0dbd3e937195","remote-peer-id":"e80b4c0e2412e141","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T03:14:09.369061Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d5cb0dbd3e937195","from":"d5cb0dbd3e937195","remote-peer-id":"e80b4c0e2412e141","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T03:14:09.387224Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://172.28.213.142:2380/version","remote-member-id":"e80b4c0e2412e141","error":"Get \"https://172.28.213.142:2380/version\": dial tcp 172.28.213.142:2380: i/o timeout"}
	{"level":"warn","ts":"2024-05-01T03:14:09.387281Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"e80b4c0e2412e141","error":"Get \"https://172.28.213.142:2380/version\": dial tcp 172.28.213.142:2380: i/o timeout"}
	{"level":"warn","ts":"2024-05-01T03:14:09.468658Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d5cb0dbd3e937195","from":"d5cb0dbd3e937195","remote-peer-id":"e80b4c0e2412e141","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T03:14:09.494183Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d5cb0dbd3e937195","from":"d5cb0dbd3e937195","remote-peer-id":"e80b4c0e2412e141","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T03:14:09.593487Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d5cb0dbd3e937195","from":"d5cb0dbd3e937195","remote-peer-id":"e80b4c0e2412e141","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T03:14:09.60109Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d5cb0dbd3e937195","from":"d5cb0dbd3e937195","remote-peer-id":"e80b4c0e2412e141","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T03:14:09.616934Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d5cb0dbd3e937195","from":"d5cb0dbd3e937195","remote-peer-id":"e80b4c0e2412e141","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T03:14:09.620911Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d5cb0dbd3e937195","from":"d5cb0dbd3e937195","remote-peer-id":"e80b4c0e2412e141","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T03:14:09.628153Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d5cb0dbd3e937195","from":"d5cb0dbd3e937195","remote-peer-id":"e80b4c0e2412e141","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T03:14:09.63954Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d5cb0dbd3e937195","from":"d5cb0dbd3e937195","remote-peer-id":"e80b4c0e2412e141","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T03:14:09.652293Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d5cb0dbd3e937195","from":"d5cb0dbd3e937195","remote-peer-id":"e80b4c0e2412e141","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T03:14:09.657529Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d5cb0dbd3e937195","from":"d5cb0dbd3e937195","remote-peer-id":"e80b4c0e2412e141","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T03:14:09.668597Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d5cb0dbd3e937195","from":"d5cb0dbd3e937195","remote-peer-id":"e80b4c0e2412e141","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T03:14:09.668929Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d5cb0dbd3e937195","from":"d5cb0dbd3e937195","remote-peer-id":"e80b4c0e2412e141","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T03:14:09.677773Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d5cb0dbd3e937195","from":"d5cb0dbd3e937195","remote-peer-id":"e80b4c0e2412e141","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T03:14:09.68853Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d5cb0dbd3e937195","from":"d5cb0dbd3e937195","remote-peer-id":"e80b4c0e2412e141","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T03:14:09.694871Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d5cb0dbd3e937195","from":"d5cb0dbd3e937195","remote-peer-id":"e80b4c0e2412e141","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T03:14:09.698666Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d5cb0dbd3e937195","from":"d5cb0dbd3e937195","remote-peer-id":"e80b4c0e2412e141","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T03:14:09.701212Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d5cb0dbd3e937195","from":"d5cb0dbd3e937195","remote-peer-id":"e80b4c0e2412e141","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T03:14:09.714566Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d5cb0dbd3e937195","from":"d5cb0dbd3e937195","remote-peer-id":"e80b4c0e2412e141","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T03:14:09.724499Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d5cb0dbd3e937195","from":"d5cb0dbd3e937195","remote-peer-id":"e80b4c0e2412e141","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T03:14:09.735214Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d5cb0dbd3e937195","from":"d5cb0dbd3e937195","remote-peer-id":"e80b4c0e2412e141","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-01T03:14:09.7687Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d5cb0dbd3e937195","from":"d5cb0dbd3e937195","remote-peer-id":"e80b4c0e2412e141","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 03:14:09 up 25 min,  0 users,  load average: 0.46, 0.56, 0.44
	Linux ha-136200 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [c09511b7df64] <==
	I0501 03:13:23.829167       1 main.go:250] Node ha-136200-m03 has CIDR [10.244.2.0/24] 
	I0501 03:13:33.840135       1 main.go:223] Handling node with IPs: map[172.28.217.218:{}]
	I0501 03:13:33.840184       1 main.go:227] handling current node
	I0501 03:13:33.840198       1 main.go:223] Handling node with IPs: map[172.28.213.142:{}]
	I0501 03:13:33.840205       1 main.go:250] Node ha-136200-m02 has CIDR [10.244.1.0/24] 
	I0501 03:13:33.840788       1 main.go:223] Handling node with IPs: map[172.28.216.62:{}]
	I0501 03:13:33.840892       1 main.go:250] Node ha-136200-m03 has CIDR [10.244.2.0/24] 
	I0501 03:13:43.859572       1 main.go:223] Handling node with IPs: map[172.28.217.218:{}]
	I0501 03:13:43.859835       1 main.go:227] handling current node
	I0501 03:13:43.859851       1 main.go:223] Handling node with IPs: map[172.28.213.142:{}]
	I0501 03:13:43.859860       1 main.go:250] Node ha-136200-m02 has CIDR [10.244.1.0/24] 
	I0501 03:13:43.860192       1 main.go:223] Handling node with IPs: map[172.28.216.62:{}]
	I0501 03:13:43.860363       1 main.go:250] Node ha-136200-m03 has CIDR [10.244.2.0/24] 
	I0501 03:13:53.876747       1 main.go:223] Handling node with IPs: map[172.28.217.218:{}]
	I0501 03:13:53.877027       1 main.go:227] handling current node
	I0501 03:13:53.877239       1 main.go:223] Handling node with IPs: map[172.28.213.142:{}]
	I0501 03:13:53.877372       1 main.go:250] Node ha-136200-m02 has CIDR [10.244.1.0/24] 
	I0501 03:13:53.877568       1 main.go:223] Handling node with IPs: map[172.28.216.62:{}]
	I0501 03:13:53.877715       1 main.go:250] Node ha-136200-m03 has CIDR [10.244.2.0/24] 
	I0501 03:14:03.895458       1 main.go:223] Handling node with IPs: map[172.28.217.218:{}]
	I0501 03:14:03.895507       1 main.go:227] handling current node
	I0501 03:14:03.895521       1 main.go:223] Handling node with IPs: map[172.28.213.142:{}]
	I0501 03:14:03.895528       1 main.go:250] Node ha-136200-m02 has CIDR [10.244.1.0/24] 
	I0501 03:14:03.896085       1 main.go:223] Handling node with IPs: map[172.28.216.62:{}]
	I0501 03:14:03.896188       1 main.go:250] Node ha-136200-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [8ff4bf057093] <==
	Trace[670363995]: [511.709143ms] [511.709143ms] END
	I0501 02:54:22.977601       1 trace.go:236] Trace[1452834138]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:f62db0d2-4e8e-4640-9a4d-0aa19a03aa34,client:172.28.213.142,api-group:storage.k8s.io,api-version:v1,name:,subresource:,namespace:,protocol:HTTP/2.0,resource:csinodes,scope:resource,url:/apis/storage.k8s.io/v1/csinodes,user-agent:kubelet/v1.30.0 (linux/amd64) kubernetes/7c48c2b,verb:POST (01-May-2024 02:54:22.472) (total time: 504ms):
	Trace[1452834138]: ["Create etcd3" audit-id:f62db0d2-4e8e-4640-9a4d-0aa19a03aa34,key:/csinodes/ha-136200-m02,type:*storage.CSINode,resource:csinodes.storage.k8s.io 504ms (02:54:22.473)
	Trace[1452834138]:  ---"Txn call succeeded" 503ms (02:54:22.977)]
	Trace[1452834138]: [504.731076ms] [504.731076ms] END
	E0501 02:58:15.730056       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0501 02:58:15.730169       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0501 02:58:15.730071       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 11.2µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0501 02:58:15.731583       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0501 02:58:15.732500       1 timeout.go:142] post-timeout activity - time-elapsed: 2.647619ms, PATCH "/api/v1/namespaces/default/events/ha-136200-m03.17cb3e09c56bb983" result: <nil>
	E0501 02:59:25.456065       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61414: use of closed network connection
	E0501 02:59:26.016855       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61416: use of closed network connection
	E0501 02:59:26.743048       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61418: use of closed network connection
	E0501 02:59:27.423392       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61421: use of closed network connection
	E0501 02:59:28.036056       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61423: use of closed network connection
	E0501 02:59:28.618704       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61425: use of closed network connection
	E0501 02:59:29.166283       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61427: use of closed network connection
	E0501 02:59:29.771114       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61429: use of closed network connection
	E0501 02:59:30.328866       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61431: use of closed network connection
	E0501 02:59:31.360058       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61434: use of closed network connection
	E0501 02:59:41.926438       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61436: use of closed network connection
	E0501 02:59:42.497809       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61439: use of closed network connection
	E0501 02:59:53.089743       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61441: use of closed network connection
	E0501 02:59:53.660135       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61443: use of closed network connection
	E0501 03:00:04.225188       1 conn.go:339] Error on socket receive: read tcp 172.28.223.254:8443->172.28.208.1:61445: use of closed network connection
	
	
	==> kube-controller-manager [8fa3aa565b36] <==
	I0501 02:58:14.901209       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-136200-m03\" does not exist"
	I0501 02:58:14.933592       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-136200-m03" podCIDRs=["10.244.2.0/24"]
	I0501 02:58:16.990389       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-136200-m03"
	I0501 02:59:18.914466       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="150.158562ms"
	I0501 02:59:19.095324       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="180.785078ms"
	I0501 02:59:19.461767       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="365.331283ms"
	I0501 02:59:19.490263       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.541695ms"
	I0501 02:59:19.490899       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="65.9µs"
	I0501 02:59:21.446166       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.9µs"
	I0501 02:59:21.996495       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.097772ms"
	I0501 02:59:21.997082       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="185.301µs"
	I0501 02:59:22.122170       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="78.415164ms"
	I0501 02:59:22.122332       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.3µs"
	I0501 02:59:22.485058       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="83.861489ms"
	I0501 02:59:22.485150       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.8µs"
	I0501 03:07:47.413030       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.476887ms"
	I0501 03:07:47.413260       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="132.901µs"
	I0501 03:12:48.241927       1 taint_eviction.go:113] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-fc5497c4f-pc6wt"
	I0501 03:12:48.286618       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="61.101µs"
	I0501 03:12:48.490423       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="132.793576ms"
	I0501 03:12:48.510724       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.163517ms"
	I0501 03:12:48.513112       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="200.501µs"
	I0501 03:12:48.529700       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.6µs"
	I0501 03:12:48.596343       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="21.278724ms"
	I0501 03:12:48.596783       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.5µs"
	
	
	==> kube-proxy [562cd55ab970] <==
	I0501 02:50:44.069527       1 server_linux.go:69] "Using iptables proxy"
	I0501 02:50:44.111745       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.28.217.218"]
	I0501 02:50:44.171562       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0501 02:50:44.171703       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0501 02:50:44.171730       1 server_linux.go:165] "Using iptables Proxier"
	I0501 02:50:44.178320       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0501 02:50:44.180232       1 server.go:872] "Version info" version="v1.30.0"
	I0501 02:50:44.180271       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 02:50:44.184544       1 config.go:192] "Starting service config controller"
	I0501 02:50:44.185913       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0501 02:50:44.186319       1 config.go:101] "Starting endpoint slice config controller"
	I0501 02:50:44.186555       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0501 02:50:44.189915       1 config.go:319] "Starting node config controller"
	I0501 02:50:44.190110       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0501 02:50:44.287624       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0501 02:50:44.287761       1 shared_informer.go:320] Caches are synced for service config
	I0501 02:50:44.290292       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [b6454ceb34ca] <==
	W0501 02:50:26.797411       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0501 02:50:26.797624       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0501 02:50:26.830216       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0501 02:50:26.830267       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0501 02:50:26.925545       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0501 02:50:26.925605       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0501 02:50:26.948130       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0501 02:50:26.948245       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0501 02:50:27.027771       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0501 02:50:27.028119       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0501 02:50:27.045542       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0501 02:50:27.045577       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0501 02:50:27.049002       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0501 02:50:27.049031       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0501 02:50:30.148462       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0501 02:59:18.858485       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-pc6wt\": pod busybox-fc5497c4f-pc6wt is already assigned to node \"ha-136200-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-pc6wt" node="ha-136200-m03"
	E0501 02:59:18.859227       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-pc6wt\": pod busybox-fc5497c4f-pc6wt is already assigned to node \"ha-136200-m02\"" pod="default/busybox-fc5497c4f-pc6wt"
	E0501 02:59:18.932248       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-6mlkh\": pod busybox-fc5497c4f-6mlkh is already assigned to node \"ha-136200\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-6mlkh" node="ha-136200"
	E0501 02:59:18.932355       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 10f52d20-5605-40b5-8875-ceb0cb5c2e53(default/busybox-fc5497c4f-6mlkh) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-6mlkh"
	E0501 02:59:18.932383       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-6mlkh\": pod busybox-fc5497c4f-6mlkh is already assigned to node \"ha-136200\"" pod="default/busybox-fc5497c4f-6mlkh"
	I0501 02:59:18.932412       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-6mlkh" node="ha-136200"
	E0501 02:59:18.934021       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-2gr4g\": pod busybox-fc5497c4f-2gr4g is already assigned to node \"ha-136200-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-2gr4g" node="ha-136200-m03"
	E0501 02:59:18.934194       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod b6febdff-c378-4d33-94ae-8b321071f921(default/busybox-fc5497c4f-2gr4g) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-2gr4g"
	E0501 02:59:18.934386       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-2gr4g\": pod busybox-fc5497c4f-2gr4g is already assigned to node \"ha-136200-m03\"" pod="default/busybox-fc5497c4f-2gr4g"
	I0501 02:59:18.937753       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-2gr4g" node="ha-136200-m03"
	
	
	==> kubelet <==
	May 01 03:09:29 ha-136200 kubelet[2230]: E0501 03:09:29.308386    2230 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 03:09:29 ha-136200 kubelet[2230]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 03:09:29 ha-136200 kubelet[2230]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 03:09:29 ha-136200 kubelet[2230]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 03:09:29 ha-136200 kubelet[2230]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 03:10:29 ha-136200 kubelet[2230]: E0501 03:10:29.309317    2230 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 03:10:29 ha-136200 kubelet[2230]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 03:10:29 ha-136200 kubelet[2230]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 03:10:29 ha-136200 kubelet[2230]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 03:10:29 ha-136200 kubelet[2230]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 03:11:29 ha-136200 kubelet[2230]: E0501 03:11:29.306238    2230 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 03:11:29 ha-136200 kubelet[2230]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 03:11:29 ha-136200 kubelet[2230]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 03:11:29 ha-136200 kubelet[2230]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 03:11:29 ha-136200 kubelet[2230]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 03:12:29 ha-136200 kubelet[2230]: E0501 03:12:29.308230    2230 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 03:12:29 ha-136200 kubelet[2230]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 03:12:29 ha-136200 kubelet[2230]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 03:12:29 ha-136200 kubelet[2230]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 03:12:29 ha-136200 kubelet[2230]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 03:13:29 ha-136200 kubelet[2230]: E0501 03:13:29.305587    2230 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 03:13:29 ha-136200 kubelet[2230]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 03:13:29 ha-136200 kubelet[2230]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 03:13:29 ha-136200 kubelet[2230]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 03:13:29 ha-136200 kubelet[2230]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0501 03:14:01.418081    5284 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
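Note: the kubelet section above shows the same "Could not set up iptables canary" error every minute because the guest kernel has no ip6tables nat table. A quick way to confirm against this profile (a sketch, not part of the test run; whether the ip6table_nat module is present depends on the guest image):

	out/minikube-windows-amd64.exe -p ha-136200 ssh "sudo modprobe ip6table_nat && sudo ip6tables -t nat -nL"

If the module cannot be loaded, the canary errors are expected noise on an IPv4-only cluster like this one (kube-proxy logged "No iptables support for family" ipFamily="IPv6" above).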
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-136200 -n ha-136200
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-136200 -n ha-136200: (12.351563s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-136200 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-88vn8
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-136200 describe pod busybox-fc5497c4f-88vn8
helpers_test.go:282: (dbg) kubectl --context ha-136200 describe pod busybox-fc5497c4f-88vn8:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-88vn8
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c7wvj (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-c7wvj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age   From               Message
	  ----     ------            ----  ----               -------
	  Warning  FailedScheduling  96s   default-scheduler  0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  96s   default-scheduler  0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (315.58s)
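Note: the describe output above shows why busybox-fc5497c4f-88vn8 stayed Pending: one node carried the untolerated node.kubernetes.io/unreachable taint and the other two already hosted anti-affine busybox replicas. To inspect which nodes are tainted after a run like this (hedged example; assumes the ha-136200 context is still reachable):

	kubectl --context ha-136200 get nodes -o jsonpath="{range .items[*]}{.metadata.name}{'\t'}{.spec.taints[*].key}{'\n'}{end}"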

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (223.22s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-windows-amd64.exe node list -p ha-136200 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-windows-amd64.exe stop -p ha-136200 -v=7 --alsologtostderr
E0501 03:16:34.972231   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt: The system cannot find the path specified.
ha_test.go:462: (dbg) Non-zero exit: out/minikube-windows-amd64.exe stop -p ha-136200 -v=7 --alsologtostderr: exit status 1 (2m25.4577973s)

                                                
                                                
-- stdout --
	* Stopping node "ha-136200-m04"  ...
	* Powering off "ha-136200-m04" via SSH ...
	* Stopping node "ha-136200-m03"  ...
	* Powering off "ha-136200-m03" via SSH ...
	* Stopping node "ha-136200-m02"  ...
	* Powering off "ha-136200-m02" via SSH ...

                                                
                                                
-- /stdout --
** stderr ** 
	W0501 03:14:53.827982    6840 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0501 03:14:53.915760    6840 out.go:291] Setting OutFile to fd 916 ...
	I0501 03:14:53.917004    6840 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:14:53.917004    6840 out.go:304] Setting ErrFile to fd 548...
	I0501 03:14:53.917063    6840 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:14:53.934964    6840 out.go:298] Setting JSON to false
	I0501 03:14:53.935728    6840 daemonize_windows.go:44] trying to kill existing schedule stop for profile ha-136200...
	I0501 03:14:53.949970    6840 ssh_runner.go:195] Run: systemctl --version
	I0501 03:14:53.949970    6840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 03:14:56.114643    6840 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:14:56.114712    6840 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:14:56.114822    6840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 03:14:58.728335    6840 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 03:14:58.728335    6840 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:14:58.728335    6840 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 03:14:58.840034    6840 ssh_runner.go:235] Completed: systemctl --version: (4.8900279s)
	I0501 03:14:58.854126    6840 ssh_runner.go:195] Run: sudo systemctl stop minikube-scheduled-stop
	I0501 03:14:58.881924    6840 mustload.go:65] Loading cluster: ha-136200
	I0501 03:14:58.883329    6840 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 03:14:58.883571    6840 stop.go:39] StopHost: ha-136200-m04
	I0501 03:14:58.893305    6840 out.go:177] * Stopping node "ha-136200-m04"  ...
	I0501 03:14:58.895820    6840 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0501 03:14:58.912032    6840 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0501 03:14:58.912032    6840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m04 ).state
	I0501 03:15:01.013619    6840 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:15:01.013657    6840 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:15:01.013776    6840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m04 ).networkadapters[0]).ipaddresses[0]
	I0501 03:15:03.581182    6840 main.go:141] libmachine: [stdout =====>] : 172.28.217.174
	
	I0501 03:15:03.581182    6840 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:15:03.581182    6840 sshutil.go:53] new ssh client: &{IP:172.28.217.174 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m04\id_rsa Username:docker}
	I0501 03:15:03.697311    6840 ssh_runner.go:235] Completed: sudo mkdir -p /var/lib/minikube/backup: (4.7851139s)
	I0501 03:15:03.711813    6840 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0501 03:15:03.801009    6840 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0501 03:15:03.867517    6840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m04 ).state
	I0501 03:15:05.990611    6840 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:15:05.990611    6840 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:15:05.996158    6840 out.go:177] * Powering off "ha-136200-m04" via SSH ...
	I0501 03:15:05.998959    6840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m04 ).state
	I0501 03:15:08.160609    6840 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:15:08.161214    6840 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:15:08.161317    6840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m04 ).networkadapters[0]).ipaddresses[0]
	I0501 03:15:10.761770    6840 main.go:141] libmachine: [stdout =====>] : 172.28.217.174
	
	I0501 03:15:10.761770    6840 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:15:10.767866    6840 main.go:141] libmachine: Using SSH client type: native
	I0501 03:15:10.768458    6840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.217.174 22 <nil> <nil>}
	I0501 03:15:10.768458    6840 main.go:141] libmachine: About to run SSH command:
	sudo poweroff
	I0501 03:15:10.932952    6840 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 03:15:10.933019    6840 stop.go:100] poweroff result: out=, err=<nil>
	I0501 03:15:10.933074    6840 main.go:141] libmachine: Stopping "ha-136200-m04"...
	I0501 03:15:10.933074    6840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m04 ).state
	I0501 03:15:13.805905    6840 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:15:13.806513    6840 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:15:13.806513    6840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Stop-VM ha-136200-m04
	I0501 03:15:33.760937    6840 main.go:141] libmachine: [stdout =====>] : 
	I0501 03:15:33.760937    6840 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:15:33.761190    6840 main.go:141] libmachine: Waiting for host to stop...
	I0501 03:15:33.761190    6840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m04 ).state
	I0501 03:15:35.970857    6840 main.go:141] libmachine: [stdout =====>] : Off
	
	I0501 03:15:35.970857    6840 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:15:35.971145    6840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m04 ).state
	I0501 03:15:38.098260    6840 main.go:141] libmachine: [stdout =====>] : Off
	
	I0501 03:15:38.098260    6840 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:15:38.098260    6840 main.go:141] libmachine: Machine "ha-136200-m04" was stopped.
	I0501 03:15:38.098260    6840 stop.go:75] duration metric: took 39.2021445s to stop
	I0501 03:15:38.098260    6840 stop.go:39] StopHost: ha-136200-m03
	I0501 03:15:38.102925    6840 out.go:177] * Stopping node "ha-136200-m03"  ...
	I0501 03:15:38.109171    6840 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0501 03:15:38.122087    6840 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0501 03:15:38.122087    6840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 03:15:40.302222    6840 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:15:40.302222    6840 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:15:40.302222    6840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 03:15:42.907751    6840 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 03:15:42.907751    6840 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:15:42.909181    6840 sshutil.go:53] new ssh client: &{IP:172.28.216.62 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\id_rsa Username:docker}
	I0501 03:15:43.025402    6840 ssh_runner.go:235] Completed: sudo mkdir -p /var/lib/minikube/backup: (4.9032781s)
	I0501 03:15:43.038041    6840 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0501 03:15:43.118625    6840 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0501 03:15:43.188768    6840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 03:15:45.355282    6840 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:15:45.355282    6840 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:15:45.361354    6840 out.go:177] * Powering off "ha-136200-m03" via SSH ...
	I0501 03:15:45.365500    6840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 03:15:47.607722    6840 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:15:47.608633    6840 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:15:47.608731    6840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 03:15:50.310713    6840 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 03:15:50.311526    6840 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:15:50.317208    6840 main.go:141] libmachine: Using SSH client type: native
	I0501 03:15:50.317909    6840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.216.62 22 <nil> <nil>}
	I0501 03:15:50.317909    6840 main.go:141] libmachine: About to run SSH command:
	sudo poweroff
	I0501 03:15:50.487728    6840 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 03:15:50.487728    6840 stop.go:100] poweroff result: out=, err=<nil>
	I0501 03:15:50.487728    6840 main.go:141] libmachine: Stopping "ha-136200-m03"...
	I0501 03:15:50.487728    6840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 03:15:53.418225    6840 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:15:53.418225    6840 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:15:53.418463    6840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Stop-VM ha-136200-m03
	I0501 03:16:09.231789    6840 main.go:141] libmachine: [stdout =====>] : 
	I0501 03:16:09.231789    6840 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:16:09.231789    6840 main.go:141] libmachine: Waiting for host to stop...
	I0501 03:16:09.231963    6840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 03:16:11.508529    6840 main.go:141] libmachine: [stdout =====>] : Off
	
	I0501 03:16:11.508529    6840 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:16:11.508795    6840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 03:16:13.706391    6840 main.go:141] libmachine: [stdout =====>] : Off
	
	I0501 03:16:13.707406    6840 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:16:13.707698    6840 main.go:141] libmachine: Machine "ha-136200-m03" was stopped.
	I0501 03:16:13.707698    6840 stop.go:75] duration metric: took 35.5982563s to stop
	I0501 03:16:13.707736    6840 stop.go:39] StopHost: ha-136200-m02
	I0501 03:16:13.827273    6840 out.go:177] * Stopping node "ha-136200-m02"  ...
	I0501 03:16:13.864454    6840 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0501 03:16:13.881340    6840 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0501 03:16:13.881340    6840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 03:16:16.103498    6840 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:16:16.104511    6840 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:16:16.104511    6840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 03:16:18.773500    6840 main.go:141] libmachine: [stdout =====>] : 172.28.221.64
	
	I0501 03:16:18.773500    6840 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:16:18.774490    6840 sshutil.go:53] new ssh client: &{IP:172.28.221.64 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\id_rsa Username:docker}
	I0501 03:16:18.887914    6840 ssh_runner.go:235] Completed: sudo mkdir -p /var/lib/minikube/backup: (5.006381s)
	I0501 03:16:18.903513    6840 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0501 03:16:18.984526    6840 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0501 03:16:19.100342    6840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 03:16:21.284747    6840 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:16:21.284747    6840 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:16:21.365749    6840 out.go:177] * Powering off "ha-136200-m02" via SSH ...
	I0501 03:16:21.522836    6840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 03:16:23.716503    6840 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:16:23.717470    6840 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:16:23.717470    6840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 03:16:26.364865    6840 main.go:141] libmachine: [stdout =====>] : 172.28.221.64
	
	I0501 03:16:26.364865    6840 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:16:26.372082    6840 main.go:141] libmachine: Using SSH client type: native
	I0501 03:16:26.372765    6840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.221.64 22 <nil> <nil>}
	I0501 03:16:26.372765    6840 main.go:141] libmachine: About to run SSH command:
	sudo poweroff
	I0501 03:16:26.525797    6840 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 03:16:26.525862    6840 stop.go:100] poweroff result: out=, err=<nil>
	I0501 03:16:26.525862    6840 main.go:141] libmachine: Stopping "ha-136200-m02"...
	I0501 03:16:26.525862    6840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 03:16:29.407035    6840 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:16:29.407188    6840 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:16:29.407188    6840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Stop-VM ha-136200-m02
	I0501 03:17:16.304145    6840 main.go:141] libmachine: [stdout =====>] : 
	I0501 03:17:16.304216    6840 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:17:16.304216    6840 main.go:141] libmachine: Waiting for host to stop...
	I0501 03:17:16.304216    6840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 03:17:18.555823    6840 main.go:141] libmachine: [stdout =====>] : Off
	
	I0501 03:17:18.555823    6840 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:17:18.555823    6840 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state

                                                
                                                
** /stderr **
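Note: the "Waiting for host to stop..." lines above are minikube's hyperv driver polling VM state via PowerShell after issuing Hyper-V\Stop-VM. The equivalent manual poll, using a VM name from this run, is simply:

	while ((Hyper-V\Get-VM ha-136200-m02).State -ne 'Off') { Start-Sleep -Seconds 2 }

The test timed out inside this loop for ha-136200-m02 (Stop-VM alone took ~47s), which is what drove the stop command past its deadline.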
ha_test.go:464: failed to run minikube stop. args "out/minikube-windows-amd64.exe node list -p ha-136200 -v=7 --alsologtostderr" : exit status 1
ha_test.go:467: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-136200 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p ha-136200 --wait=true -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:469: failed to run minikube start. args "out/minikube-windows-amd64.exe node list -p ha-136200 -v=7 --alsologtostderr" : context deadline exceeded
ha_test.go:472: (dbg) Run:  out/minikube-windows-amd64.exe node list -p ha-136200
ha_test.go:472: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node list -p ha-136200: context deadline exceeded (0s)
ha_test.go:474: failed to run node list. args "out/minikube-windows-amd64.exe node list -p ha-136200" : context deadline exceeded
ha_test.go:479: reported node list is not the same after restart. Before restart: ha-136200	172.28.217.218
ha-136200-m02	172.28.221.64
ha-136200-m03	172.28.216.62
ha-136200-m04	172.28.217.174

                                                
                                                
After restart: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-136200 -n ha-136200
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-136200 -n ha-136200: exit status 2 (27.9503654s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	W0501 03:17:19.308120   14224 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
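Note: the "Unable to resolve the current Docker CLI context" warning recurs in every stderr capture in this report; the Docker CLI config on the Jenkins host names a "default" context whose metadata file is missing. One common remedy (a hedged suggestion using standard docker CLI commands, not something the test attempts) is to re-select the built-in context and verify:

	docker context use default
	docker context ls

Whether this clears the warning depends on the docker CLI version; the warning itself is cosmetic here, but it is what makes several "expected stderr to be -empty-" assertions fail.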
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-136200 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-136200 logs -n 25: (19.5342361s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| kubectl | -p ha-136200 -- apply -f             | ha-136200 | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | ./testdata/ha/ha-pod-dns-test.yaml   |           |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- rollout status       | ha-136200 | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | deployment/busybox                   |           |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- get pods -o          | ha-136200 | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- get pods -o          | ha-136200 | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200 | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-2gr4g --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200 | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-6mlkh --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200 | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-pc6wt --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200 | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-2gr4g --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200 | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-6mlkh --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200 | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-pc6wt --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200 | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-2gr4g -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200 | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-6mlkh -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200 | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-pc6wt -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- get pods -o          | ha-136200 | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200 | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-2gr4g              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200 | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC |                     |
	|         | busybox-fc5497c4f-2gr4g -- sh        |           |                   |         |                     |                     |
	|         | -c ping -c 1 172.28.208.1            |           |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200 | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-6mlkh              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200 | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC |                     |
	|         | busybox-fc5497c4f-6mlkh -- sh        |           |                   |         |                     |                     |
	|         | -c ping -c 1 172.28.208.1            |           |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200 | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC | 01 May 24 02:59 UTC |
	|         | busybox-fc5497c4f-pc6wt              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| kubectl | -p ha-136200 -- exec                 | ha-136200 | minikube6\jenkins | v1.33.0 | 01 May 24 02:59 UTC |                     |
	|         | busybox-fc5497c4f-pc6wt -- sh        |           |                   |         |                     |                     |
	|         | -c ping -c 1 172.28.208.1            |           |                   |         |                     |                     |
	| node    | add -p ha-136200 -v=7                | ha-136200 | minikube6\jenkins | v1.33.0 | 01 May 24 03:00 UTC |                     |
	|         | --alsologtostderr                    |           |                   |         |                     |                     |
	| node    | ha-136200 node stop m02 -v=7         | ha-136200 | minikube6\jenkins | v1.33.0 | 01 May 24 03:06 UTC | 01 May 24 03:07 UTC |
	|         | --alsologtostderr                    |           |                   |         |                     |                     |
	| node    | ha-136200 node start m02 -v=7        | ha-136200 | minikube6\jenkins | v1.33.0 | 01 May 24 03:09 UTC |                     |
	|         | --alsologtostderr                    |           |                   |         |                     |                     |
	| node    | list -p ha-136200 -v=7               | ha-136200 | minikube6\jenkins | v1.33.0 | 01 May 24 03:14 UTC |                     |
	|         | --alsologtostderr                    |           |                   |         |                     |                     |
	| stop    | -p ha-136200 -v=7                    | ha-136200 | minikube6\jenkins | v1.33.0 | 01 May 24 03:14 UTC |                     |
	|         | --alsologtostderr                    |           |                   |         |                     |                     |
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/01 02:47:19
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0501 02:47:19.308853    4712 out.go:291] Setting OutFile to fd 968 ...
	I0501 02:47:19.308853    4712 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:47:19.308853    4712 out.go:304] Setting ErrFile to fd 940...
	I0501 02:47:19.308853    4712 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:47:19.335053    4712 out.go:298] Setting JSON to false
	I0501 02:47:19.338050    4712 start.go:129] hostinfo: {"hostname":"minikube6","uptime":104693,"bootTime":1714426945,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0501 02:47:19.338050    4712 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0501 02:47:19.343676    4712 out.go:177] * [ha-136200] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0501 02:47:19.347056    4712 notify.go:220] Checking for updates...
	I0501 02:47:19.349570    4712 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0501 02:47:19.352627    4712 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0501 02:47:19.356010    4712 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0501 02:47:19.359527    4712 out.go:177]   - MINIKUBE_LOCATION=18779
	I0501 02:47:19.364982    4712 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0501 02:47:19.368342    4712 driver.go:392] Setting default libvirt URI to qemu:///system
	I0501 02:47:24.771909    4712 out.go:177] * Using the hyperv driver based on user configuration
	I0501 02:47:24.777503    4712 start.go:297] selected driver: hyperv
	I0501 02:47:24.777503    4712 start.go:901] validating driver "hyperv" against <nil>
	I0501 02:47:24.777503    4712 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0501 02:47:24.830749    4712 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0501 02:47:24.832155    4712 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 02:47:24.832679    4712 cni.go:84] Creating CNI manager for ""
	I0501 02:47:24.832679    4712 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0501 02:47:24.832679    4712 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0501 02:47:24.832944    4712 start.go:340] cluster config:
	{Name:ha-136200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:47:24.832944    4712 iso.go:125] acquiring lock: {Name:mkc5178610d1c169635b8b232f2713c359020679 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 02:47:24.837439    4712 out.go:177] * Starting "ha-136200" primary control-plane node in "ha-136200" cluster
	I0501 02:47:24.839631    4712 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0501 02:47:24.839631    4712 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0501 02:47:24.839631    4712 cache.go:56] Caching tarball of preloaded images
	I0501 02:47:24.840411    4712 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0501 02:47:24.840411    4712 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0501 02:47:24.841147    4712 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json ...
	I0501 02:47:24.841147    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json: {Name:mk622c10e63d8ff69d285ce96c3e57bfbed6a54d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:47:24.842583    4712 start.go:360] acquireMachinesLock for ha-136200: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 02:47:24.842583    4712 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-136200"
	I0501 02:47:24.843334    4712 start.go:93] Provisioning new machine with config: &{Name:ha-136200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0501 02:47:24.843334    4712 start.go:125] createHost starting for "" (driver="hyperv")
	I0501 02:47:24.845982    4712 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0501 02:47:24.845982    4712 start.go:159] libmachine.API.Create for "ha-136200" (driver="hyperv")
	I0501 02:47:24.845982    4712 client.go:168] LocalClient.Create starting
	I0501 02:47:24.847217    4712 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0501 02:47:24.847395    4712 main.go:141] libmachine: Decoding PEM data...
	I0501 02:47:24.847395    4712 main.go:141] libmachine: Parsing certificate...
	I0501 02:47:24.847705    4712 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0501 02:47:24.848012    4712 main.go:141] libmachine: Decoding PEM data...
	I0501 02:47:24.848048    4712 main.go:141] libmachine: Parsing certificate...
	I0501 02:47:24.848190    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0501 02:47:27.058462    4712 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0501 02:47:27.058678    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:27.058786    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0501 02:47:28.892262    4712 main.go:141] libmachine: [stdout =====>] : False
	
	I0501 02:47:28.892262    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:28.892262    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0501 02:47:30.440921    4712 main.go:141] libmachine: [stdout =====>] : True
	
	I0501 02:47:30.440921    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:30.441172    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0501 02:47:34.074968    4712 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0501 02:47:34.075096    4712 main.go:141] libmachine: [stderr =====>] : 
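This Get-VMSwitch query is how the driver picks a network switch: it keeps external switches plus the well-known "Default Switch" GUID (c08cb7b8-9b3c-408e-8e30-5e16a3aeb444) and sorts by SwitchType, so an external switch, when present, sorts last. A sketch of the same query driven from Go; picking the last element as the winner is an assumption based on that sort order, not minikube's code:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // vmSwitch mirrors the three properties the logged query selects.
    type vmSwitch struct {
    	Id         string
    	Name       string
    	SwitchType int // Hyper-V enum: 0=Private, 1=Internal, 2=External
    }

    func main() {
    	ps := `[Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)`
    	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", ps).Output()
    	if err != nil {
    		panic(err)
    	}
    	var switches []vmSwitch
    	if err := json.Unmarshal(out, &switches); err != nil {
    		panic(err)
    	}
    	if len(switches) == 0 {
    		panic("no usable VM switch found")
    	}
    	fmt.Printf("using switch %q\n", switches[len(switches)-1].Name)
    }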
	I0501 02:47:34.077782    4712 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso...
	I0501 02:47:34.612276    4712 main.go:141] libmachine: Creating SSH key...
	I0501 02:47:34.775454    4712 main.go:141] libmachine: Creating VM...
	I0501 02:47:34.775454    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0501 02:47:37.663991    4712 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0501 02:47:37.664390    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:37.664515    4712 main.go:141] libmachine: Using switch "Default Switch"
	I0501 02:47:37.664599    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0501 02:47:39.498071    4712 main.go:141] libmachine: [stdout =====>] : True
	
	I0501 02:47:39.498234    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:39.498234    4712 main.go:141] libmachine: Creating VHD
	I0501 02:47:39.498234    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\fixed.vhd' -SizeBytes 10MB -Fixed
	I0501 02:47:43.230384    4712 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 2B9E163F-083E-4714-9C44-9A52BE438E53
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0501 02:47:43.231369    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:43.231468    4712 main.go:141] libmachine: Writing magic tar header
	I0501 02:47:43.231550    4712 main.go:141] libmachine: Writing SSH key tar header
	I0501 02:47:43.241482    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\disk.vhd' -VHDType Dynamic -DeleteSource
	I0501 02:47:46.427724    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:47:46.427724    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:46.427724    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\disk.vhd' -SizeBytes 20000MB
	I0501 02:47:48.971690    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:47:48.971690    4712 main.go:141] libmachine: [stderr =====>] : 
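The VHD sequence above is worth unpacking: New-VHD -Fixed creates a tiny raw-format disk, the driver writes a tar stream directly into it (the "magic tar header" and "SSH key tar header" lines), Convert-VHD turns it into a dynamic disk, and Resize-VHD grows it to the requested 20000MB. A sketch of the tar-injection step, assuming the boot2docker convention used by docker-machine-style drivers (the guest init looks for a magic marker, formats the disk, and preserves the key); the marker and file names follow that convention and are not taken from this log:

    package main

    import (
    	"archive/tar"
    	"os"
    )

    func main() {
    	// Open the fixed-format VHD; its data area starts at byte 0, so a tar
    	// stream written here is what the guest sees at the start of the disk.
    	f, err := os.OpenFile("fixed.vhd", os.O_WRONLY, 0o644)
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	pubKey, err := os.ReadFile("id_rsa.pub")
    	if err != nil {
    		panic(err)
    	}
    	magic := []byte("boot2docker, please format-me") // assumed marker string

    	tw := tar.NewWriter(f)
    	defer tw.Close()
    	for _, e := range []struct {
    		name string
    		data []byte
    	}{
    		{string(magic), magic},           // marker file: tells init to format the disk
    		{".ssh/authorized_keys", pubKey}, // the key survives the format
    	} {
    		if err := tw.WriteHeader(&tar.Header{Name: e.name, Mode: 0o600, Size: int64(len(e.data))}); err != nil {
    			panic(err)
    		}
    		if _, err := tw.Write(e.data); err != nil {
    			panic(err)
    		}
    	}
    }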
	I0501 02:47:48.971981    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-136200 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0501 02:47:52.766292    4712 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-136200 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0501 02:47:52.766504    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:52.766592    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-136200 -DynamicMemoryEnabled $false
	I0501 02:47:54.972628    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:47:54.972799    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:54.972799    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-136200 -Count 2
	I0501 02:47:57.167635    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:47:57.168510    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:57.168510    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-136200 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\boot2docker.iso'
	I0501 02:47:59.728585    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:47:59.729288    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:47:59.729288    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-136200 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\disk.vhd'
	I0501 02:48:02.387014    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:48:02.387925    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:02.387925    4712 main.go:141] libmachine: Starting VM...
	I0501 02:48:02.387925    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-136200
	I0501 02:48:05.442902    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:48:05.442902    4712 main.go:141] libmachine: [stderr =====>] : 
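Six cmdlets assemble and boot the machine: New-VM on the chosen switch with the startup memory, Set-VMMemory to pin memory (dynamic memory off), Set-VMProcessor for the CPU count, Set-VMDvdDrive to attach the boot ISO, Add-VMHardDiskDrive for the converted disk, then Start-VM. A condensed Go sketch driving the same sequence; the commands are copied from the log, and error handling is deliberately minimal:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func run(ps string) error {
    	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", ps).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("%s: %v\n%s", ps, err, out)
    	}
    	return nil
    }

    func main() {
    	vm := "ha-136200"
    	dir := `C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200`
    	for _, ps := range []string{
    		fmt.Sprintf(`Hyper-V\New-VM %s -Path '%s' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB`, vm, dir),
    		fmt.Sprintf(`Hyper-V\Set-VMMemory -VMName %s -DynamicMemoryEnabled $false`, vm),
    		fmt.Sprintf(`Hyper-V\Set-VMProcessor %s -Count 2`, vm),
    		fmt.Sprintf(`Hyper-V\Set-VMDvdDrive -VMName %s -Path '%s\boot2docker.iso'`, vm, dir),
    		fmt.Sprintf(`Hyper-V\Add-VMHardDiskDrive -VMName %s -Path '%s\disk.vhd'`, vm, dir),
    		fmt.Sprintf(`Hyper-V\Start-VM %s`, vm),
    	} {
    		if err := run(ps); err != nil {
    			panic(err)
    		}
    	}
    }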
	I0501 02:48:05.442902    4712 main.go:141] libmachine: Waiting for host to start...
	I0501 02:48:05.442902    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:07.690543    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:07.691267    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:07.691267    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:10.234874    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:48:10.234874    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:11.244005    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:13.447426    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:13.447426    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:13.447532    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:16.003794    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:48:16.003794    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:17.014251    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:19.230596    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:19.230596    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:19.231015    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:21.786798    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:48:21.786798    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:22.791035    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:24.970362    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:24.970583    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:24.970826    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:27.538082    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:48:27.539108    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:28.540044    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:30.691694    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:30.691694    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:30.692065    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:33.315166    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:48:33.315166    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:33.315400    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:35.453800    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:35.453800    4712 main.go:141] libmachine: [stderr =====>] : 
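The "Waiting for host to start..." block is a simple poll: check VM state, ask for the first NIC's first IP address, sleep about a second, repeat. Hyper-V only reports an address once the guest's integration services come up, which is why several iterations return empty stdout before 172.28.217.218 appears. A sketch of that loop; the retry budget is an assumption:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func waitForIP(vm string) (string, error) {
    	query := fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm)
    	for i := 0; i < 120; i++ { // retry budget is an assumption
    		out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", query).Output()
    		if err != nil {
    			return "", err
    		}
    		if ip := strings.TrimSpace(string(out)); ip != "" {
    			return ip, nil
    		}
    		time.Sleep(time.Second) // matches the ~1s gaps between attempts above
    	}
    	return "", fmt.Errorf("timed out waiting for %s to report an IP", vm)
    }

    func main() {
    	ip, err := waitForIP("ha-136200")
    	fmt.Println(ip, err)
    }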
	I0501 02:48:35.454723    4712 machine.go:94] provisionDockerMachine start ...
	I0501 02:48:35.454940    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:37.590850    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:37.591294    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:37.591378    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:40.152942    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:48:40.153017    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:40.158939    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:48:40.170076    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.217.218 22 <nil> <nil>}
	I0501 02:48:40.170076    4712 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 02:48:40.311850    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0501 02:48:40.311938    4712 buildroot.go:166] provisioning hostname "ha-136200"
	I0501 02:48:40.312011    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:42.387259    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:42.387259    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:42.388241    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:44.941487    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:48:44.942306    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:44.948681    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:48:44.949642    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.217.218 22 <nil> <nil>}
	I0501 02:48:44.949718    4712 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-136200 && echo "ha-136200" | sudo tee /etc/hostname
	I0501 02:48:45.123416    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-136200
	
	I0501 02:48:45.123490    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:47.247911    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:47.247911    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:47.248892    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:49.912733    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:48:49.912733    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:49.920164    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:48:49.920164    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.217.218 22 <nil> <nil>}
	I0501 02:48:49.920749    4712 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-136200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-136200/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-136200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 02:48:50.089597    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: 
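From here, provisionDockerMachine runs everything over SSH as the "docker" user with the key generated earlier: set the hostname, write /etc/hostname, and patch 127.0.1.1 in /etc/hosts idempotently (the grep -xq guards make reruns no-ops). A minimal sketch of one such round trip, using golang.org/x/crypto/ssh as a stand-in for the "native" client the log mentions:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile(`C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa`)
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	client, err := ssh.Dial("tcp", "172.28.217.218:22", &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
    	})
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()

    	session, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer session.Close()
    	out, err := session.CombinedOutput(`sudo hostname ha-136200 && echo "ha-136200" | sudo tee /etc/hostname`)
    	fmt.Println(string(out), err)
    }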
	I0501 02:48:50.089597    4712 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0501 02:48:50.089597    4712 buildroot.go:174] setting up certificates
	I0501 02:48:50.090153    4712 provision.go:84] configureAuth start
	I0501 02:48:50.090240    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:52.251893    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:52.251893    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:52.251893    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:54.810990    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:48:54.810990    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:54.811881    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:48:56.907196    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:48:56.907196    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:56.907196    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:48:59.487351    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:48:59.487402    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:48:59.487402    4712 provision.go:143] copyHostCerts
	I0501 02:48:59.487402    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0501 02:48:59.487402    4712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0501 02:48:59.487402    4712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0501 02:48:59.488365    4712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0501 02:48:59.489448    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0501 02:48:59.489632    4712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0501 02:48:59.489632    4712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0501 02:48:59.489632    4712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0501 02:48:59.490981    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0501 02:48:59.491187    4712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0501 02:48:59.491187    4712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0501 02:48:59.491187    4712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0501 02:48:59.492726    4712 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-136200 san=[127.0.0.1 172.28.217.218 ha-136200 localhost minikube]
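configureAuth issues a per-machine Docker TLS server certificate signed by the local minikube CA, with the SANs listed here (loopback, the VM's IP, the hostname, localhost, minikube). A self-contained crypto/x509 sketch; the CA is generated inline to keep the example runnable, whereas the real flow loads ca.pem/ca-key.pem from the certs directory (error handling elided for brevity):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA standing in for ca.pem/ca-key.pem.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server certificate carrying the SANs from the log line above.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-136200"}},
    		NotBefore:    time.Now(),
    		NotAfter:     caTmpl.NotAfter,
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"ha-136200", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.28.217.218")},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	_ = os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
    }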
	I0501 02:48:59.577887    4712 provision.go:177] copyRemoteCerts
	I0501 02:48:59.596375    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 02:48:59.597286    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:01.699383    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:01.699383    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:01.699540    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:04.258891    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:04.258891    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:04.259427    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:49:04.371852    4712 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7744315s)
	I0501 02:49:04.371852    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0501 02:49:04.371852    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 02:49:04.422302    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0501 02:49:04.422602    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0501 02:49:04.478176    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0501 02:49:04.478176    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0501 02:49:04.532091    4712 provision.go:87] duration metric: took 14.4416362s to configureAuth
	I0501 02:49:04.532091    4712 buildroot.go:189] setting minikube options for container-runtime
	I0501 02:49:04.532690    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:49:04.532690    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:06.623956    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:06.623956    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:06.624197    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:09.238280    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:09.238979    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:09.245381    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:49:09.246060    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.217.218 22 <nil> <nil>}
	I0501 02:49:09.246060    4712 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0501 02:49:09.397759    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0501 02:49:09.397835    4712 buildroot.go:70] root file system type: tmpfs
	I0501 02:49:09.398290    4712 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0501 02:49:09.398464    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:11.514026    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:11.514026    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:11.514582    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:14.050483    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:14.050483    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:14.057033    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:49:14.057033    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.217.218 22 <nil> <nil>}
	I0501 02:49:14.057589    4712 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0501 02:49:14.242724    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0501 02:49:14.242724    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:16.392645    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:16.392645    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:16.392749    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:18.993701    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:18.994302    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:19.000048    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:49:19.000537    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.217.218 22 <nil> <nil>}
	I0501 02:49:19.000616    4712 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0501 02:49:21.256124    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
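The diff || { mv; ... } one-liner is an install-if-changed guard: on this first boot /lib/systemd/system/docker.service does not exist yet, so diff fails, the rendered .new file is moved into place, and enable creates the symlink shown. The same idiom as a Go sketch; paths and the unit name are from the log, and the unchanged-file early return mirrors diff exiting 0:

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    	"os/exec"
    )

    // installIfChanged swaps newPath into livePath and restarts docker only
    // when the contents differ (a missing live file counts as different).
    func installIfChanged(newPath, livePath string) error {
    	want, err := os.ReadFile(newPath)
    	if err != nil {
    		return err
    	}
    	if have, err := os.ReadFile(livePath); err == nil && bytes.Equal(want, have) {
    		return nil // unit unchanged: no reload, no restart
    	}
    	if err := os.Rename(newPath, livePath); err != nil {
    		return err
    	}
    	for _, args := range [][]string{
    		{"-f", "daemon-reload"}, {"-f", "enable", "docker"}, {"-f", "restart", "docker"},
    	} {
    		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
    			return fmt.Errorf("systemctl %v: %v\n%s", args, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	if err := installIfChanged("/lib/systemd/system/docker.service.new",
    		"/lib/systemd/system/docker.service"); err != nil {
    		panic(err)
    	}
    }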
	
	I0501 02:49:21.256675    4712 machine.go:97] duration metric: took 45.8016127s to provisionDockerMachine
	I0501 02:49:21.256675    4712 client.go:171] duration metric: took 1m56.4098314s to LocalClient.Create
	I0501 02:49:21.256737    4712 start.go:167] duration metric: took 1m56.4098939s to libmachine.API.Create "ha-136200"
	I0501 02:49:21.256791    4712 start.go:293] postStartSetup for "ha-136200" (driver="hyperv")
	I0501 02:49:21.256828    4712 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 02:49:21.271031    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 02:49:21.271031    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:23.374454    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:23.374634    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:23.374716    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:25.918831    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:25.918831    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:25.919441    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:49:26.030251    4712 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.759185s)
	I0501 02:49:26.044496    4712 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 02:49:26.053026    4712 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 02:49:26.053160    4712 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0501 02:49:26.053600    4712 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0501 02:49:26.054397    4712 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> 142882.pem in /etc/ssl/certs
	I0501 02:49:26.054397    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /etc/ssl/certs/142882.pem
	I0501 02:49:26.070942    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 02:49:26.091568    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /etc/ssl/certs/142882.pem (1708 bytes)
	I0501 02:49:26.143252    4712 start.go:296] duration metric: took 4.8863885s for postStartSetup
	I0501 02:49:26.147080    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:28.257985    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:28.257985    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:28.257985    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:30.792456    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:30.792456    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:30.792456    4712 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json ...
	I0501 02:49:30.796310    4712 start.go:128] duration metric: took 2m5.952044s to createHost
	I0501 02:49:30.796483    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:32.879711    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:32.879711    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:32.880619    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:35.462032    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:35.462032    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:35.468747    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:49:35.469470    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.217.218 22 <nil> <nil>}
	I0501 02:49:35.469470    4712 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0501 02:49:35.611947    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714531775.614259884
	
	I0501 02:49:35.611947    4712 fix.go:216] guest clock: 1714531775.614259884
	I0501 02:49:35.611947    4712 fix.go:229] Guest: 2024-05-01 02:49:35.614259884 +0000 UTC Remote: 2024-05-01 02:49:30.7963907 +0000 UTC m=+131.677772001 (delta=4.817869184s)
	I0501 02:49:35.611947    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:37.726021    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:37.726021    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:37.726021    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:40.253738    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:40.254896    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:40.261655    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:49:40.262498    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.217.218 22 <nil> <nil>}
	I0501 02:49:40.262498    4712 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714531775
	I0501 02:49:40.415406    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed May  1 02:49:35 UTC 2024
	
	I0501 02:49:40.415406    4712 fix.go:236] clock set: Wed May  1 02:49:35 UTC 2024
	 (err=<nil>)
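fix.go compares the guest's date +%s.%N against the host clock; the ~4.8s delta here exceeds the tolerance, so the host pushes its epoch seconds into the guest with sudo date -s. A sketch of that check; only the two commands are from the log, and the one-second threshold is an assumption:

    package provision

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // syncClock reads the guest clock over SSH and resets it from the host
    // when the drift exceeds the threshold.
    func syncClock(runSSH func(cmd string) (string, error)) error {
    	out, err := runSSH(`date +%s.%N`)
    	if err != nil {
    		return err
    	}
    	secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
    	if err != nil {
    		return err
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	if delta := time.Since(guest); delta > time.Second || delta < -time.Second {
    		_, err = runSSH(fmt.Sprintf("sudo date -s @%d", time.Now().Unix()))
    	}
    	return err
    }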
	I0501 02:49:40.415406    4712 start.go:83] releasing machines lock for "ha-136200", held for 2m15.5712031s
	I0501 02:49:40.416105    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:42.459145    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:42.459226    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:42.459226    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:45.033478    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:45.034063    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:45.038366    4712 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 02:49:45.038515    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:45.050350    4712 ssh_runner.go:195] Run: cat /version.json
	I0501 02:49:45.050350    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:49:47.229701    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:47.229701    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:47.230427    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:47.254252    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:49:47.254469    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:47.254558    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:49:49.922691    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:49.922867    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:49.923261    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:49:49.950446    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:49:49.950446    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:49:49.951021    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:49:50.022867    4712 ssh_runner.go:235] Completed: cat /version.json: (4.9724804s)
	I0501 02:49:50.037446    4712 ssh_runner.go:195] Run: systemctl --version
	I0501 02:49:50.123463    4712 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0850592s)
	I0501 02:49:50.137756    4712 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 02:49:50.147834    4712 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 02:49:50.164262    4712 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 02:49:50.197825    4712 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0501 02:49:50.197877    4712 start.go:494] detecting cgroup driver to use...
	I0501 02:49:50.197877    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 02:49:50.246918    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0501 02:49:50.281929    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0501 02:49:50.303725    4712 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0501 02:49:50.317480    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0501 02:49:50.354607    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 02:49:50.392927    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0501 02:49:50.426684    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 02:49:50.464924    4712 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 02:49:50.501540    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0501 02:49:50.541276    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0501 02:49:50.576278    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0501 02:49:50.614209    4712 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 02:49:50.653144    4712 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 02:49:50.688395    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:49:50.921067    4712 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0501 02:49:50.960389    4712 start.go:494] detecting cgroup driver to use...
	I0501 02:49:50.974435    4712 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0501 02:49:51.020319    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 02:49:51.063731    4712 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 02:49:51.113242    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 02:49:51.154151    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 02:49:51.196182    4712 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0501 02:49:51.267621    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 02:49:51.297018    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 02:49:51.359019    4712 ssh_runner.go:195] Run: which cri-dockerd
	I0501 02:49:51.382845    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0501 02:49:51.408532    4712 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0501 02:49:51.459482    4712 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0501 02:49:51.703156    4712 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0501 02:49:51.928842    4712 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0501 02:49:51.928842    4712 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0501 02:49:51.985157    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:49:52.205484    4712 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0501 02:49:54.768628    4712 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5631253s)
	I0501 02:49:54.782717    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0501 02:49:54.821909    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0501 02:49:54.861989    4712 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0501 02:49:55.097455    4712 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0501 02:49:55.325878    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:49:55.547674    4712 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0501 02:49:55.604800    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0501 02:49:55.648909    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:49:55.873886    4712 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0501 02:49:55.987252    4712 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0501 02:49:56.000254    4712 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0501 02:49:56.009412    4712 start.go:562] Will wait 60s for crictl version
	I0501 02:49:56.021925    4712 ssh_runner.go:195] Run: which crictl
	I0501 02:49:56.041055    4712 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 02:49:56.111426    4712 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0501 02:49:56.124879    4712 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0501 02:49:56.172644    4712 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0501 02:49:56.210144    4712 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0501 02:49:56.210144    4712 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0501 02:49:56.214663    4712 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0501 02:49:56.214663    4712 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0501 02:49:56.214663    4712 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0501 02:49:56.214663    4712 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:88:d7:f1 Flags:up|broadcast|multicast|running}
	I0501 02:49:56.218539    4712 ip.go:210] interface addr: fe80::916c:67e8:6e10:a19b/64
	I0501 02:49:56.218539    4712 ip.go:210] interface addr: 172.28.208.1/20
	I0501 02:49:56.231590    4712 ssh_runner.go:195] Run: grep 172.28.208.1	host.minikube.internal$ /etc/hosts
	I0501 02:49:56.237056    4712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 02:49:56.273064    4712 kubeadm.go:877] updating cluster {Name:ha-136200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0
ClusterName:ha-136200 Namespace:default APIServerHAVIP:172.28.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.217.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0501 02:49:56.273064    4712 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0501 02:49:56.283976    4712 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0501 02:49:56.305563    4712 docker.go:685] Got preloaded images: 
	I0501 02:49:56.305585    4712 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0501 02:49:56.319781    4712 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0501 02:49:56.352980    4712 ssh_runner.go:195] Run: which lz4
	I0501 02:49:56.361434    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0501 02:49:56.376111    4712 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0501 02:49:56.383203    4712 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0501 02:49:56.383203    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0501 02:49:58.545920    4712 docker.go:649] duration metric: took 2.1838816s to copy over tarball
	I0501 02:49:58.559153    4712 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0501 02:50:07.024882    4712 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.4656661s)
	I0501 02:50:07.024882    4712 ssh_runner.go:146] rm: /preloaded.tar.lz4
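
From the byte count and durations logged above, the effective rates work out to roughly 157 MiB/s for the scp copy and 40 MiB/s for the lz4 extraction; a quick worked check:

package main

import "fmt"

func main() {
	const bytes = 359556852.0 // tarball size from the log
	copySecs := 2.1838816     // scp duration
	extractSecs := 8.4656661  // tar -I lz4 duration
	fmt.Printf("copy:    %.1f MiB/s\n", bytes/copySecs/(1<<20))    // ~157.0
	fmt.Printf("extract: %.1f MiB/s\n", bytes/extractSecs/(1<<20)) // ~40.5
}
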
	I0501 02:50:07.091273    4712 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0501 02:50:07.117701    4712 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0501 02:50:07.169927    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:50:07.413870    4712 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0501 02:50:10.777827    4712 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.363932s)
	I0501 02:50:10.787955    4712 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0501 02:50:10.813130    4712 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0501 02:50:10.813237    4712 cache_images.go:84] Images are preloaded, skipping loading
	I0501 02:50:10.813237    4712 kubeadm.go:928] updating node { 172.28.217.218 8443 v1.30.0 docker true true} ...
	I0501 02:50:10.813471    4712 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-136200 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.217.218
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP:172.28.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 02:50:10.824528    4712 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0501 02:50:10.865306    4712 cni.go:84] Creating CNI manager for ""
	I0501 02:50:10.865306    4712 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0501 02:50:10.865306    4712 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0501 02:50:10.865306    4712 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.28.217.218 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-136200 NodeName:ha-136200 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.28.217.218"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.28.217.218 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0501 02:50:10.866013    4712 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.28.217.218
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-136200"
	  kubeletExtraArgs:
	    node-ip: 172.28.217.218
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.28.217.218"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
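
The generated config above pins podSubnet to 10.244.0.0/16 and serviceSubnet to 10.96.0.0/12; kubeadm requires these ranges to be disjoint. A small sketch of that invariant check:

package main

import (
	"fmt"
	"net"
)

// Two CIDR blocks overlap iff one contains the other's base address.
func overlaps(a, b *net.IPNet) bool {
	return a.Contains(b.IP) || b.Contains(a.IP)
}

func main() {
	_, pods, _ := net.ParseCIDR("10.244.0.0/16") // podSubnet from the config above
	_, svcs, _ := net.ParseCIDR("10.96.0.0/12")  // serviceSubnet from the config above
	fmt.Println("pod/service CIDR overlap:", overlaps(pods, svcs)) // false
}
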
	
	I0501 02:50:10.866164    4712 kube-vip.go:111] generating kube-vip config ...
	I0501 02:50:10.879856    4712 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0501 02:50:10.916330    4712 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0501 02:50:10.916590    4712 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.28.223.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
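
The lease knobs above (vip_leaseduration 5 s, vip_renewdeadline 3 s, vip_retryperiod 1 s) govern leader election for the HA VIP 172.28.223.254. As a rough approximation (not a kube-vip guarantee), the VIP can go dark for about leaseDuration + retryPeriod after its holder dies:

package main

import "fmt"

func main() {
	leaseDuration := 5 // seconds, vip_leaseduration
	retryPeriod := 1   // seconds, vip_retryperiod
	// A standby can claim the lease once it expires, plus up to one retry
	// interval before it notices; this is a back-of-envelope bound only.
	fmt.Printf("~%ds worst-case VIP failover window\n", leaseDuration+retryPeriod)
}
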
	I0501 02:50:10.930144    4712 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 02:50:10.946847    4712 binaries.go:44] Found k8s binaries, skipping transfer
	I0501 02:50:10.960617    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0501 02:50:10.980126    4712 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0501 02:50:11.015010    4712 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 02:50:11.046356    4712 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0501 02:50:11.090122    4712 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0501 02:50:11.151082    4712 ssh_runner.go:195] Run: grep 172.28.223.254	control-plane.minikube.internal$ /etc/hosts
	I0501 02:50:11.158193    4712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.223.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 02:50:11.198290    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:50:11.421704    4712 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 02:50:11.457294    4712 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200 for IP: 172.28.217.218
	I0501 02:50:11.457383    4712 certs.go:194] generating shared ca certs ...
	I0501 02:50:11.457383    4712 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:50:11.458373    4712 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0501 02:50:11.458865    4712 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0501 02:50:11.459136    4712 certs.go:256] generating profile certs ...
	I0501 02:50:11.459821    4712 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\client.key
	I0501 02:50:11.459950    4712 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\client.crt with IP's: []
	I0501 02:50:11.600094    4712 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\client.crt ...
	I0501 02:50:11.600094    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\client.crt: {Name:mkd5e4d205a603f84158daca3df4537a47f4507f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:50:11.601407    4712 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\client.key ...
	I0501 02:50:11.601407    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\client.key: {Name:mk0f41aeab078751e43122e83e5a087fadc50acf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:50:11.602800    4712 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.b080b0c6
	I0501 02:50:11.602800    4712 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.b080b0c6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.217.218 172.28.223.254]
	I0501 02:50:11.735634    4712 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.b080b0c6 ...
	I0501 02:50:11.735634    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.b080b0c6: {Name:mk25daf93f931731761fc26133f1d14b1615ea6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:50:11.736662    4712 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.b080b0c6 ...
	I0501 02:50:11.736662    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.b080b0c6: {Name:mk2e8ec633a20ca6bf6f004cdd1aa2dc02923e7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:50:11.738036    4712 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.b080b0c6 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt
	I0501 02:50:11.750002    4712 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.b080b0c6 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key
	I0501 02:50:11.751999    4712 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key
	I0501 02:50:11.751999    4712 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.crt with IP's: []
	I0501 02:50:11.858892    4712 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.crt ...
	I0501 02:50:11.858892    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.crt: {Name:mk545c7bac57fe0475c68dabf35cf7726f7ba6e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:50:11.860058    4712 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key ...
	I0501 02:50:11.860058    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key: {Name:mk197c02f3ddea53477a395060c41fac8b486d54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
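
The apiserver certificate generated above carries five IP SANs: the in-cluster service IP (10.96.0.1), loopback, 10.0.0.1, the node IP (172.28.217.218), and the HA VIP (172.28.223.254). A minimal sketch of issuing a certificate with that SAN list follows; the real cert is RSA and signed by minikubeCA, so the self-signed ECDSA here is purely illustrative:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// ECDSA and self-signing are sketch simplifications.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration in the cluster config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{ // the SANs listed in the log line above
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("172.28.217.218"), net.ParseIP("172.28.223.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
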
	I0501 02:50:11.861502    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0501 02:50:11.862042    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0501 02:50:11.862321    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0501 02:50:11.862467    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0501 02:50:11.862467    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0501 02:50:11.862467    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0501 02:50:11.862467    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0501 02:50:11.872340    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0501 02:50:11.872340    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem (1338 bytes)
	W0501 02:50:11.873220    4712 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288_empty.pem, impossibly tiny 0 bytes
	I0501 02:50:11.873220    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0501 02:50:11.873220    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0501 02:50:11.873220    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0501 02:50:11.873220    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0501 02:50:11.874220    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem (1708 bytes)
	I0501 02:50:11.874220    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem -> /usr/share/ca-certificates/14288.pem
	I0501 02:50:11.874220    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /usr/share/ca-certificates/142882.pem
	I0501 02:50:11.875212    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:50:11.877013    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 02:50:11.928037    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 02:50:11.975033    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 02:50:12.024768    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0501 02:50:12.069813    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0501 02:50:12.117563    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0501 02:50:12.166940    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 02:50:12.214744    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 02:50:12.264780    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem --> /usr/share/ca-certificates/14288.pem (1338 bytes)
	I0501 02:50:12.314494    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /usr/share/ca-certificates/142882.pem (1708 bytes)
	I0501 02:50:12.357210    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 02:50:12.407402    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0501 02:50:12.460345    4712 ssh_runner.go:195] Run: openssl version
	I0501 02:50:12.486641    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14288.pem && ln -fs /usr/share/ca-certificates/14288.pem /etc/ssl/certs/14288.pem"
	I0501 02:50:12.524534    4712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14288.pem
	I0501 02:50:12.531940    4712 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:27 /usr/share/ca-certificates/14288.pem
	I0501 02:50:12.545887    4712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14288.pem
	I0501 02:50:12.569538    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14288.pem /etc/ssl/certs/51391683.0"
	I0501 02:50:12.603111    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142882.pem && ln -fs /usr/share/ca-certificates/142882.pem /etc/ssl/certs/142882.pem"
	I0501 02:50:12.640545    4712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142882.pem
	I0501 02:50:12.648489    4712 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:27 /usr/share/ca-certificates/142882.pem
	I0501 02:50:12.664745    4712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142882.pem
	I0501 02:50:12.689236    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142882.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 02:50:12.722220    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 02:50:12.763152    4712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:50:12.771274    4712 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:12 /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:50:12.785811    4712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:50:12.809601    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
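
The symlink steps above follow OpenSSL's hashed-directory convention: each trusted certificate must be reachable via a <subject-hash>.0 link (b5213941.0 for minikubeCA.pem, per the log). A sketch that reproduces the link name, assuming openssl is on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Prints the subject hash OpenSSL uses to name the symlink,
	// e.g. "b5213941" for minikubeCA.pem above.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	fmt.Printf("ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/%s.0\n", hash)
}
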
	I0501 02:50:12.843815    4712 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 02:50:12.851182    4712 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0501 02:50:12.851596    4712 kubeadm.go:391] StartCluster: {Name:ha-136200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Clu
sterName:ha-136200 Namespace:default APIServerHAVIP:172.28.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.217.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:
[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:50:12.861439    4712 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0501 02:50:12.897822    4712 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0501 02:50:12.930863    4712 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 02:50:12.967142    4712 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 02:50:12.989079    4712 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 02:50:12.989174    4712 kubeadm.go:156] found existing configuration files:
	
	I0501 02:50:13.002144    4712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 02:50:13.022983    4712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 02:50:13.037263    4712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 02:50:13.070061    4712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 02:50:13.088170    4712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 02:50:13.104788    4712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 02:50:13.142331    4712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 02:50:13.161295    4712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 02:50:13.176372    4712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 02:50:13.217242    4712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 02:50:13.236623    4712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 02:50:13.250242    4712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0501 02:50:13.273719    4712 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0501 02:50:13.796086    4712 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0501 02:50:29.771938    4712 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0501 02:50:29.771938    4712 kubeadm.go:309] [preflight] Running pre-flight checks
	I0501 02:50:29.771938    4712 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0501 02:50:29.772562    4712 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0501 02:50:29.772731    4712 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0501 02:50:29.772731    4712 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0501 02:50:29.775841    4712 out.go:204]   - Generating certificates and keys ...
	I0501 02:50:29.775841    4712 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0501 02:50:29.776550    4712 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0501 02:50:29.776704    4712 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0501 02:50:29.776918    4712 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0501 02:50:29.777081    4712 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0501 02:50:29.777278    4712 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0501 02:50:29.777278    4712 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0501 02:50:29.777278    4712 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-136200 localhost] and IPs [172.28.217.218 127.0.0.1 ::1]
	I0501 02:50:29.777278    4712 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0501 02:50:29.777841    4712 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-136200 localhost] and IPs [172.28.217.218 127.0.0.1 ::1]
	I0501 02:50:29.778067    4712 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0501 02:50:29.778150    4712 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0501 02:50:29.778250    4712 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0501 02:50:29.778341    4712 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0501 02:50:29.778421    4712 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0501 02:50:29.778724    4712 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0501 02:50:29.778804    4712 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0501 02:50:29.778987    4712 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0501 02:50:29.779083    4712 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0501 02:50:29.779174    4712 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0501 02:50:29.779418    4712 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0501 02:50:29.781433    4712 out.go:204]   - Booting up control plane ...
	I0501 02:50:29.781433    4712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0501 02:50:29.781986    4712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0501 02:50:29.782154    4712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0501 02:50:29.782509    4712 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0501 02:50:29.782778    4712 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0501 02:50:29.782833    4712 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0501 02:50:29.783188    4712 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0501 02:50:29.783366    4712 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0501 02:50:29.783611    4712 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.012148578s
	I0501 02:50:29.783792    4712 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0501 02:50:29.783792    4712 kubeadm.go:309] [api-check] The API server is healthy after 9.161500426s
	I0501 02:50:29.783792    4712 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0501 02:50:29.784343    4712 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0501 02:50:29.784449    4712 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0501 02:50:29.784907    4712 kubeadm.go:309] [mark-control-plane] Marking the node ha-136200 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0501 02:50:29.785014    4712 kubeadm.go:309] [bootstrap-token] Using token: bebbcj.jj3pub0bsd9tcu95
	I0501 02:50:29.789897    4712 out.go:204]   - Configuring RBAC rules ...
	I0501 02:50:29.789897    4712 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0501 02:50:29.790579    4712 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0501 02:50:29.790579    4712 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0501 02:50:29.791324    4712 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0501 02:50:29.791589    4712 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0501 02:50:29.791711    4712 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0501 02:50:29.791958    4712 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0501 02:50:29.791958    4712 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0501 02:50:29.791958    4712 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0501 02:50:29.791958    4712 kubeadm.go:309] 
	I0501 02:50:29.791958    4712 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0501 02:50:29.791958    4712 kubeadm.go:309] 
	I0501 02:50:29.792580    4712 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0501 02:50:29.792580    4712 kubeadm.go:309] 
	I0501 02:50:29.792580    4712 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0501 02:50:29.792580    4712 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0501 02:50:29.792580    4712 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0501 02:50:29.792580    4712 kubeadm.go:309] 
	I0501 02:50:29.792580    4712 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0501 02:50:29.793244    4712 kubeadm.go:309] 
	I0501 02:50:29.793244    4712 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0501 02:50:29.793244    4712 kubeadm.go:309] 
	I0501 02:50:29.793244    4712 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0501 02:50:29.793244    4712 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0501 02:50:29.793244    4712 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0501 02:50:29.793868    4712 kubeadm.go:309] 
	I0501 02:50:29.794174    4712 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0501 02:50:29.794395    4712 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0501 02:50:29.794428    4712 kubeadm.go:309] 
	I0501 02:50:29.794531    4712 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token bebbcj.jj3pub0bsd9tcu95 \
	I0501 02:50:29.794720    4712 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:4798dcaffc6298c78af5ad06745de18c1231853c65c9db4cd09ab1be96f5e875 \
	I0501 02:50:29.794720    4712 kubeadm.go:309] 	--control-plane 
	I0501 02:50:29.794720    4712 kubeadm.go:309] 
	I0501 02:50:29.794720    4712 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0501 02:50:29.794720    4712 kubeadm.go:309] 
	I0501 02:50:29.794720    4712 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token bebbcj.jj3pub0bsd9tcu95 \
	I0501 02:50:29.795522    4712 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:4798dcaffc6298c78af5ad06745de18c1231853c65c9db4cd09ab1be96f5e875 
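
The --discovery-token-ca-cert-hash in the join commands above is, by kubeadm convention, the SHA-256 digest of the cluster CA certificate's Subject Public Key Info. A sketch of recomputing it from ca.crt:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm publishes sha256 over the DER-encoded Subject Public Key Info.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}
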
	I0501 02:50:29.795582    4712 cni.go:84] Creating CNI manager for ""
	I0501 02:50:29.795642    4712 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0501 02:50:29.798321    4712 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0501 02:50:29.815739    4712 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0501 02:50:29.823882    4712 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0501 02:50:29.823882    4712 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0501 02:50:29.880076    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0501 02:50:30.703674    4712 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0501 02:50:30.720641    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-136200 minikube.k8s.io/updated_at=2024_05_01T02_50_30_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e minikube.k8s.io/name=ha-136200 minikube.k8s.io/primary=true
	I0501 02:50:30.720641    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:30.736553    4712 ops.go:34] apiserver oom_adj: -16
	I0501 02:50:30.914646    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:31.422356    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:31.924569    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:32.422489    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:32.916374    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:33.419951    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:33.922300    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:34.426730    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:34.915815    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:35.415601    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:35.917473    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:36.419572    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:36.923752    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:37.424859    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:37.926096    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:38.415957    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:38.915894    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:39.417286    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:39.917110    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:40.418538    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:40.919363    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:41.420336    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:41.914423    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 02:50:42.068730    4712 kubeadm.go:1107] duration metric: took 11.364941s to wait for elevateKubeSystemPrivileges
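
The burst of "kubectl get sa default" calls above is minikube polling at roughly 500 ms intervals until the default service account exists (the elevateKubeSystemPrivileges step: 11.36 s here, i.e. about 23 polls). The same poll-until-ready pattern as a sketch, with illustrative names:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA is a hypothetical helper mirroring the loop in the log.
func waitForDefaultSA(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default",
			"--kubeconfig", "/var/lib/minikube/kubeconfig")
		if cmd.Run() == nil {
			return nil // service account exists; bootstrap can proceed
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	if err := waitForDefaultSA(2 * time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("default service account ready")
}
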
	W0501 02:50:42.068870    4712 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0501 02:50:42.068934    4712 kubeadm.go:393] duration metric: took 29.2171223s to StartCluster
	I0501 02:50:42.069035    4712 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:50:42.069065    4712 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0501 02:50:42.070934    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:50:42.072021    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0501 02:50:42.072021    4712 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.28.217.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0501 02:50:42.072021    4712 start.go:240] waiting for startup goroutines ...
	I0501 02:50:42.072021    4712 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0501 02:50:42.072021    4712 addons.go:69] Setting storage-provisioner=true in profile "ha-136200"
	I0501 02:50:42.072578    4712 addons.go:234] Setting addon storage-provisioner=true in "ha-136200"
	I0501 02:50:42.072715    4712 host.go:66] Checking if "ha-136200" exists ...
	I0501 02:50:42.072765    4712 addons.go:69] Setting default-storageclass=true in profile "ha-136200"
	I0501 02:50:42.072820    4712 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-136200"
	I0501 02:50:42.073003    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:50:42.073773    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:50:42.074332    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:50:42.237653    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.28.208.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0501 02:50:42.682536    4712 start.go:946] {"host.minikube.internal": 172.28.208.1} host record injected into CoreDNS's ConfigMap
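
The sed pipeline above splices a hosts block (plus a log directive) into the CoreDNS Corefile ahead of the forward plugin, so in-cluster lookups of host.minikube.internal resolve to the Windows host at 172.28.208.1. Reconstructed from the sed expressions (the log does not echo the final file), the spliced fragment reads:

    log
    errors
    ...
    hosts {
       172.28.208.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf
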
	I0501 02:50:44.322881    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:50:44.322881    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:44.325924    4712 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 02:50:44.323327    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:50:44.325924    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:44.328653    4712 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 02:50:44.328653    4712 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0501 02:50:44.328653    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:50:44.329300    4712 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0501 02:50:44.330211    4712 kapi.go:59] client config for ha-136200: &rest.Config{Host:"https://172.28.223.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-136200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-136200\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1b95ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0501 02:50:44.331266    4712 cert_rotation.go:137] Starting client certificate rotation controller
	I0501 02:50:44.331692    4712 addons.go:234] Setting addon default-storageclass=true in "ha-136200"
	I0501 02:50:44.331692    4712 host.go:66] Checking if "ha-136200" exists ...
	I0501 02:50:44.332839    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:50:46.572964    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:50:46.572964    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:46.573962    4712 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0501 02:50:46.573962    4712 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0501 02:50:46.573962    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:50:46.693061    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:50:46.693131    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:46.693256    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:50:48.834494    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:50:48.834494    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:48.834701    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:50:49.380882    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:50:49.380882    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:49.381777    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:50:49.540602    4712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 02:50:51.474264    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:50:51.474264    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:51.475208    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:50:51.629340    4712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0501 02:50:51.811276    4712 round_trippers.go:463] GET https://172.28.223.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0501 02:50:51.811902    4712 round_trippers.go:469] Request Headers:
	I0501 02:50:51.811902    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:50:51.811902    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:50:51.826458    4712 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0501 02:50:51.827458    4712 round_trippers.go:463] PUT https://172.28.223.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0501 02:50:51.827458    4712 round_trippers.go:469] Request Headers:
	I0501 02:50:51.827458    4712 round_trippers.go:473]     Content-Type: application/json
	I0501 02:50:51.827458    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:50:51.827458    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:50:51.831221    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:50:51.834740    4712 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0501 02:50:51.838052    4712 addons.go:505] duration metric: took 9.7659586s for enable addons: enabled=[storage-provisioner default-storageclass]
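
The addon step above ends with a GET and a PUT against /apis/storage.k8s.io/v1/storageclasses. A minimal client-go sketch of that round-trip, assuming the kubeconfig path and "standard" class name seen in this log; this is not minikube's actual addon code:

// sc_default.go: list StorageClasses and mark "standard" as the default,
// mirroring the GET/PUT pair logged above. Sketch only.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	list, err := cs.StorageV1().StorageClasses().List(ctx, metav1.ListOptions{}) // the GET
	if err != nil {
		panic(err)
	}
	for i := range list.Items {
		sc := &list.Items[i]
		if sc.Name != "standard" {
			continue
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
		if _, err := cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil { // the PUT
			panic(err)
		}
		fmt.Println("marked", sc.Name, "as default")
	}
}
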
	I0501 02:50:51.838052    4712 start.go:245] waiting for cluster config update ...
	I0501 02:50:51.838052    4712 start.go:254] writing updated cluster config ...
	I0501 02:50:51.841165    4712 out.go:177] 
	I0501 02:50:51.854479    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:50:51.854479    4712 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json ...
	I0501 02:50:51.861940    4712 out.go:177] * Starting "ha-136200-m02" control-plane node in "ha-136200" cluster
	I0501 02:50:51.865640    4712 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0501 02:50:51.865640    4712 cache.go:56] Caching tarball of preloaded images
	I0501 02:50:51.865640    4712 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0501 02:50:51.866174    4712 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0501 02:50:51.866462    4712 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json ...
	I0501 02:50:51.868358    4712 start.go:360] acquireMachinesLock for ha-136200-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 02:50:51.868358    4712 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-136200-m02"
	I0501 02:50:51.869005    4712 start.go:93] Provisioning new machine with config: &{Name:ha-136200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP:172.28.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.217.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
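
For orientation, the Nodes:[...] entries inside that config blob follow a small per-node shape; a trimmed, illustrative Go rendering (minikube's real type carries many more fields):

// node_shape.go: an illustrative reduction of the per-node entries in
// the config dump above.
package main

import "fmt"

type Node struct {
	Name              string // empty for the primary node, "m02" for the one being added
	IP                string // filled in once the VM reports an address
	Port              int    // 8443 on control-plane nodes
	KubernetesVersion string // "v1.30.0" in this run
	ContainerRuntime  string // "docker" in this run
	ControlPlane      bool
	Worker            bool
}

func main() {
	m02 := Node{Name: "m02", Port: 8443, KubernetesVersion: "v1.30.0",
		ContainerRuntime: "docker", ControlPlane: true, Worker: true}
	fmt.Printf("%+v\n", m02) // IP stays empty until createHost finishes
}
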
	I0501 02:50:51.869070    4712 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0501 02:50:51.871919    4712 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0501 02:50:51.872184    4712 start.go:159] libmachine.API.Create for "ha-136200" (driver="hyperv")
	I0501 02:50:51.872184    4712 client.go:168] LocalClient.Create starting
	I0501 02:50:51.872730    4712 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0501 02:50:51.872991    4712 main.go:141] libmachine: Decoding PEM data...
	I0501 02:50:51.872991    4712 main.go:141] libmachine: Parsing certificate...
	I0501 02:50:51.872991    4712 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0501 02:50:51.872991    4712 main.go:141] libmachine: Decoding PEM data...
	I0501 02:50:51.872991    4712 main.go:141] libmachine: Parsing certificate...
	I0501 02:50:51.872991    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0501 02:50:53.846039    4712 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0501 02:50:53.846039    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:53.846893    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0501 02:50:55.665592    4712 main.go:141] libmachine: [stdout =====>] : False
	
	I0501 02:50:55.665592    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:55.665592    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0501 02:50:57.208535    4712 main.go:141] libmachine: [stdout =====>] : True
	
	I0501 02:50:57.208535    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:50:57.208630    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0501 02:51:00.945176    4712 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0501 02:51:00.945176    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:00.949038    4712 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso...
	I0501 02:51:01.496342    4712 main.go:141] libmachine: Creating SSH key...
	I0501 02:51:02.272582    4712 main.go:141] libmachine: Creating VM...
	I0501 02:51:02.272582    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0501 02:51:05.181880    4712 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0501 02:51:05.181880    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:05.182621    4712 main.go:141] libmachine: Using switch "Default Switch"
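
Every [executing ==>] line is a PowerShell one-liner shelled out from Go. A minimal sketch of the switch query and its JSON decode, reusing the command text from the log (SwitchType 1 is Hyper-V's Internal value, which is why the Default Switch is matched by its well-known Id rather than by type):

// switches.go: query Hyper-V switches the way the log does — shell out
// to powershell.exe and decode the ConvertTo-Json output. Sketch only.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type vmSwitch struct {
	Id         string
	Name       string
	SwitchType int // Hyper-V enum: 0=Private, 1=Internal, 2=External
}

func main() {
	ps := `[Console]::OutputEncoding = [Text.Encoding]::UTF8; ` +
		`ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType)`
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", ps).Output()
	if err != nil {
		panic(err)
	}
	var switches []vmSwitch
	if err := json.Unmarshal(out, &switches); err != nil {
		panic(err)
	}
	for _, s := range switches {
		fmt.Printf("%s (%s) type=%d\n", s.Name, s.Id, s.SwitchType)
	}
}
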
	I0501 02:51:05.182621    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0501 02:51:07.021151    4712 main.go:141] libmachine: [stdout =====>] : True
	
	I0501 02:51:07.022208    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:07.022208    4712 main.go:141] libmachine: Creating VHD
	I0501 02:51:07.022261    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0501 02:51:10.800515    4712 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : F5C7D5B1-6A19-4B92-8073-0E024A878A77
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0501 02:51:10.800843    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:10.800925    4712 main.go:141] libmachine: Writing magic tar header
	I0501 02:51:10.800925    4712 main.go:141] libmachine: Writing SSH key tar header
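
"Writing magic tar header" refers to the boot2docker disk convention inherited from docker/machine: a small tar stream, whose first entry name is the marker the guest's automount looks for, is written at the start of the fixed VHD together with the freshly generated SSH key, and the guest formats the disk around it on first boot. A sketch of building that stream; the entry names follow docker/machine's convention and should be treated as illustrative:

// magic_tar.go: build the tar stream written at offset 0 of fixed.vhd
// before the Convert-VHD/Resize-VHD steps that follow in the log.
package main

import (
	"archive/tar"
	"bytes"
	"os"
)

func main() {
	pub, err := os.ReadFile("id_rsa.pub") // the key from "Creating SSH key..." above
	if err != nil {
		panic(err)
	}
	var buf bytes.Buffer
	tw := tar.NewWriter(&buf)
	for _, f := range []struct {
		name string
		body []byte
	}{
		{"boot2docker, please format-me", []byte("boot2docker, please format-me")},
		{".ssh/authorized_keys", pub},
	} {
		hdr := &tar.Header{Name: f.name, Mode: 0o644, Size: int64(len(f.body))}
		if err := tw.WriteHeader(hdr); err != nil {
			panic(err)
		}
		if _, err := tw.Write(f.body); err != nil {
			panic(err)
		}
	}
	if err := tw.Close(); err != nil {
		panic(err)
	}
	if err := os.WriteFile("header.bin", buf.Bytes(), 0o600); err != nil {
		panic(err)
	}
}
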
	I0501 02:51:10.813657    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0501 02:51:14.013099    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:14.013099    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:14.013713    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\disk.vhd' -SizeBytes 20000MB
	I0501 02:51:16.613734    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:16.613973    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:16.614122    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-136200-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0501 02:51:20.349642    4712 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-136200-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0501 02:51:20.349642    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:20.349642    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-136200-m02 -DynamicMemoryEnabled $false
	I0501 02:51:22.595804    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:22.595804    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:22.596839    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-136200-m02 -Count 2
	I0501 02:51:24.783891    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:24.783891    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:24.783891    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-136200-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\boot2docker.iso'
	I0501 02:51:27.309419    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:27.309419    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:27.310044    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-136200-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\disk.vhd'
	I0501 02:51:29.998833    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:29.998833    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:29.998833    4712 main.go:141] libmachine: Starting VM...
	I0501 02:51:29.998833    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-136200-m02
	I0501 02:51:33.080959    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:33.080959    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:33.080959    4712 main.go:141] libmachine: Waiting for host to start...
	I0501 02:51:33.080959    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:51:35.347158    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:51:35.348049    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:35.348049    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:51:37.880551    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:37.881580    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:38.889792    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:51:41.091102    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:51:41.091102    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:41.091533    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:51:43.621201    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:43.621813    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:44.622350    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:51:46.859140    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:51:46.859140    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:46.859140    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:51:49.413174    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:49.413174    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:50.423751    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:51:52.633336    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:51:52.633336    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:52.634051    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:51:55.225142    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:51:55.225142    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:56.229253    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:51:58.424704    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:51:58.424704    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:51:58.425395    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:01.088984    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:01.088984    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:01.089224    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:03.247035    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:03.247253    4712 main.go:141] libmachine: [stderr =====>] : 
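
The preceding block is the host-start wait loop: poll the VM state, then the first adapter's first IP address, and retry until an address comes back (six rounds here before 172.28.213.142 appears). Condensed into a sketch, with an illustrative sleep and attempt cap:

// wait_ip.go: poll Hyper-V for a VM's first reported IP, as the loop
// above does. The one-second sleep and 120-attempt cap are illustrative.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitForIP(vm string) (string, error) {
	ps := fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm)
	for i := 0; i < 120; i++ {
		out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", ps).Output()
		if err != nil {
			return "", err
		}
		if ip := strings.TrimSpace(string(out)); ip != "" {
			return ip, nil
		}
		time.Sleep(time.Second)
	}
	return "", fmt.Errorf("timed out waiting for %s to report an IP", vm)
}

func main() {
	ip, err := waitForIP("ha-136200-m02")
	if err != nil {
		panic(err)
	}
	fmt.Println(ip) // 172.28.213.142 in this run
}
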
	I0501 02:52:03.247291    4712 machine.go:94] provisionDockerMachine start ...
	I0501 02:52:03.247449    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:05.493082    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:05.493179    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:05.493179    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:08.078374    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:08.078374    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:08.085777    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:52:08.101463    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.142 22 <nil> <nil>}
	I0501 02:52:08.101463    4712 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 02:52:08.244557    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0501 02:52:08.244557    4712 buildroot.go:166] provisioning hostname "ha-136200-m02"
	I0501 02:52:08.244557    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:10.395193    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:10.395193    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:10.396068    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:12.968300    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:12.968300    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:12.975111    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:52:12.975111    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.142 22 <nil> <nil>}
	I0501 02:52:12.975111    4712 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-136200-m02 && echo "ha-136200-m02" | sudo tee /etc/hostname
	I0501 02:52:13.142328    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-136200-m02
	
	I0501 02:52:13.142479    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:15.318537    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:15.318537    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:15.318537    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:17.993085    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:17.993267    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:18.000242    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:52:18.000687    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.142 22 <nil> <nil>}
	I0501 02:52:18.000687    4712 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-136200-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-136200-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-136200-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 02:52:18.164116    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: 
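
The "About to run SSH command" lines come from a native Go SSH client authenticated with the per-machine id_rsa as user docker. A minimal golang.org/x/crypto/ssh sketch of one such run; InsecureIgnoreHostKey is for the sketch only:

// ssh_run.go: run one command over SSH the way the provisioner does.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile(`C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\id_rsa`)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only; verify host keys in real code
	}
	client, err := ssh.Dial("tcp", "172.28.213.142:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", out) // "minikube" before the hostname is set, as logged above
}
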
	I0501 02:52:18.164116    4712 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0501 02:52:18.164235    4712 buildroot.go:174] setting up certificates
	I0501 02:52:18.164235    4712 provision.go:84] configureAuth start
	I0501 02:52:18.164235    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:20.323803    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:20.324816    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:20.324954    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:22.884982    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:22.884982    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:22.884982    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:25.037258    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:25.038231    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:25.038262    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:27.637529    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:27.638462    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:27.638462    4712 provision.go:143] copyHostCerts
	I0501 02:52:27.638661    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0501 02:52:27.638979    4712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0501 02:52:27.639093    4712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0501 02:52:27.639613    4712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0501 02:52:27.640827    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0501 02:52:27.641053    4712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0501 02:52:27.641053    4712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0501 02:52:27.641053    4712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0501 02:52:27.642372    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0501 02:52:27.642643    4712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0501 02:52:27.642762    4712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0501 02:52:27.643264    4712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0501 02:52:27.644242    4712 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-136200-m02 san=[127.0.0.1 172.28.213.142 ha-136200-m02 localhost minikube]
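
configureAuth issues a server certificate whose SANs are exactly the list logged above: 127.0.0.1, the VM IP, and the host names. A compact crypto/x509 sketch of such a certificate; it creates a throwaway CA in-process instead of loading ca.pem/ca-key.pem, and the serial numbers are arbitrary:

// server_cert.go: mint a server cert with the SAN set from the log.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	caKey := must(rsa.GenerateKey(rand.Reader, 2048))
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"jenkins.ha-136200-m02"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config blob
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caCert := must(x509.ParseCertificate(must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))))

	srvKey := must(rsa.GenerateKey(rand.Reader, 2048))
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "ha-136200-m02"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SAN list logged above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.28.213.142")},
		DNSNames:    []string{"ha-136200-m02", "localhost", "minikube"},
	}
	der := must(x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey))
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}
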
	I0501 02:52:27.843189    4712 provision.go:177] copyRemoteCerts
	I0501 02:52:27.855361    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 02:52:27.855361    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:29.952775    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:29.952775    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:29.953607    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:32.549323    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:32.549356    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:32.549913    4712 sshutil.go:53] new ssh client: &{IP:172.28.213.142 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\id_rsa Username:docker}
	I0501 02:52:32.667202    4712 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8118058s)
	I0501 02:52:32.667353    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0501 02:52:32.667882    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0501 02:52:32.721631    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0501 02:52:32.721631    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 02:52:32.771533    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0501 02:52:32.772177    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0501 02:52:32.825532    4712 provision.go:87] duration metric: took 14.6610374s to configureAuth
	I0501 02:52:32.825532    4712 buildroot.go:189] setting minikube options for container-runtime
	I0501 02:52:32.826094    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:52:32.826229    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:34.944371    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:34.945326    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:34.945326    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:37.500533    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:37.500590    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:37.506891    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:52:37.507395    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.142 22 <nil> <nil>}
	I0501 02:52:37.507476    4712 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0501 02:52:37.655757    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0501 02:52:37.655757    4712 buildroot.go:70] root file system type: tmpfs
	I0501 02:52:37.655757    4712 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0501 02:52:37.656297    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:39.802845    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:39.802845    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:39.803012    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:42.365445    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:42.366335    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:42.372086    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:52:42.372086    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.142 22 <nil> <nil>}
	I0501 02:52:42.372086    4712 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.28.217.218"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0501 02:52:42.560633    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.28.217.218
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0501 02:52:42.560698    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:44.723552    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:44.723552    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:44.724351    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:47.350624    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:47.350694    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:47.356560    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:52:47.356887    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.142 22 <nil> <nil>}
	I0501 02:52:47.356887    4712 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0501 02:52:49.658910    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0501 02:52:49.658910    4712 machine.go:97] duration metric: took 46.4112065s to provisionDockerMachine
	I0501 02:52:49.659442    4712 client.go:171] duration metric: took 1m57.7858628s to LocalClient.Create
	I0501 02:52:49.659442    4712 start.go:167] duration metric: took 1m57.786395s to libmachine.API.Create "ha-136200"
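
The unit update a few lines up follows a write-compare-swap idiom: render to docker.service.new, diff against the installed unit, and only on a difference move the new file into place and daemon-reload/enable/restart (here the diff fails because no unit exists yet, so the new file is installed unconditionally). The same idiom in miniature, run locally where the log does it over SSH:

// unit_swap.go: only touch the live unit, and restart the service, when
// the rendered text actually changed. Sketch; assumes a systemd host.
package main

import (
	"bytes"
	"os"
	"os/exec"
)

func main() {
	const unit = "/lib/systemd/system/docker.service"
	rendered := []byte("[Unit]\nDescription=Docker Application Container Engine\n") // full unit text as printed above

	// Short-circuit exactly like the `sudo diff -u ... ||` in the log.
	if current, err := os.ReadFile(unit); err == nil && bytes.Equal(current, rendered) {
		return
	}
	if err := os.WriteFile(unit+".new", rendered, 0o644); err != nil {
		panic(err)
	}
	if err := os.Rename(unit+".new", unit); err != nil {
		panic(err)
	}
	for _, args := range [][]string{{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"}} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			panic(string(out))
		}
	}
}
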
	I0501 02:52:49.659503    4712 start.go:293] postStartSetup for "ha-136200-m02" (driver="hyperv")
	I0501 02:52:49.659537    4712 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 02:52:49.675636    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 02:52:49.675636    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:51.837386    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:51.837492    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:51.837492    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:54.474409    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:54.475041    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:54.475353    4712 sshutil.go:53] new ssh client: &{IP:172.28.213.142 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\id_rsa Username:docker}
	I0501 02:52:54.588525    4712 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9128536s)
	I0501 02:52:54.605879    4712 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 02:52:54.614578    4712 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 02:52:54.614578    4712 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0501 02:52:54.615019    4712 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0501 02:52:54.615983    4712 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> 142882.pem in /etc/ssl/certs
	I0501 02:52:54.616061    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /etc/ssl/certs/142882.pem
	I0501 02:52:54.630716    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 02:52:54.652380    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /etc/ssl/certs/142882.pem (1708 bytes)
	I0501 02:52:54.707179    4712 start.go:296] duration metric: took 5.0475618s for postStartSetup
	I0501 02:52:54.709671    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:52:56.857631    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:52:56.857631    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:56.858662    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:52:59.468337    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:52:59.468783    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:52:59.468965    4712 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json ...
	I0501 02:52:59.470910    4712 start.go:128] duration metric: took 2m7.6009059s to createHost
	I0501 02:52:59.471488    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:53:01.642267    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:53:01.642267    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:01.642528    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:53:04.217977    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:53:04.217977    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:04.224906    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:53:04.225471    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.142 22 <nil> <nil>}
	I0501 02:53:04.225635    4712 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0501 02:53:04.373720    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714531984.377348123
	
	I0501 02:53:04.373720    4712 fix.go:216] guest clock: 1714531984.377348123
	I0501 02:53:04.373720    4712 fix.go:229] Guest: 2024-05-01 02:53:04.377348123 +0000 UTC Remote: 2024-05-01 02:52:59.4709109 +0000 UTC m=+340.350757801 (delta=4.906437223s)
	I0501 02:53:04.373851    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:53:06.539924    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:53:06.539924    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:06.540324    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:53:09.204905    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:53:09.204905    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:09.211685    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:53:09.212163    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.142 22 <nil> <nil>}
	I0501 02:53:09.212163    4712 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714531984
	I0501 02:53:09.386381    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed May  1 02:53:04 UTC 2024
	
	I0501 02:53:09.386381    4712 fix.go:236] clock set: Wed May  1 02:53:04 UTC 2024
	 (err=<nil>)
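
The clock fix reads the guest's date +%s.%N, computes the delta against the host (about 4.9s here, accumulated during VM creation), and resets the guest with sudo date -s @<seconds>. The arithmetic in miniature; the 2-second threshold is illustrative:

// clock_delta.go: parse the guest clock and decide whether to reset it.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	guestOut := "1714531984.377348123" // guest `date +%s.%N` output from the log
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Until(guest) // ~+4.9s in this run: guest ahead of host
	fmt.Printf("guest=%s delta=%s\n", guest.UTC().Format(time.RFC3339Nano), delta)
	if delta > 2*time.Second || delta < -2*time.Second {
		fmt.Printf("would run over SSH: sudo date -s @%d\n", guest.Unix())
	}
}
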
	I0501 02:53:09.386381    4712 start.go:83] releasing machines lock for "ha-136200-m02", held for 2m17.5170158s
	I0501 02:53:09.386381    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:53:11.545475    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:53:11.545475    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:11.545938    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:53:14.171918    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:53:14.171918    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:14.175393    4712 out.go:177] * Found network options:
	I0501 02:53:14.178428    4712 out.go:177]   - NO_PROXY=172.28.217.218
	W0501 02:53:14.181305    4712 proxy.go:119] fail to check proxy env: Error ip not in block
	I0501 02:53:14.183961    4712 out.go:177]   - NO_PROXY=172.28.217.218
	W0501 02:53:14.186016    4712 proxy.go:119] fail to check proxy env: Error ip not in block
	W0501 02:53:14.186987    4712 proxy.go:119] fail to check proxy env: Error ip not in block
	I0501 02:53:14.190185    4712 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 02:53:14.190185    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:53:14.201210    4712 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0501 02:53:14.201210    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m02 ).state
	I0501 02:53:16.402596    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:53:16.402596    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:16.402596    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:53:16.404382    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:53:16.404922    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:16.404922    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 02:53:19.202467    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:53:19.202936    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:19.203019    4712 sshutil.go:53] new ssh client: &{IP:172.28.213.142 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\id_rsa Username:docker}
	I0501 02:53:19.238045    4712 main.go:141] libmachine: [stdout =====>] : 172.28.213.142
	
	I0501 02:53:19.238494    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:19.238494    4712 sshutil.go:53] new ssh client: &{IP:172.28.213.142 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m02\id_rsa Username:docker}
	I0501 02:53:19.303673    4712 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.1023631s)
	W0501 02:53:19.303730    4712 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 02:53:19.322303    4712 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 02:53:19.425813    4712 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.234512s)
	I0501 02:53:19.425813    4712 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0501 02:53:19.425869    4712 start.go:494] detecting cgroup driver to use...
	I0501 02:53:19.426179    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 02:53:19.480110    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0501 02:53:19.516304    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0501 02:53:19.540429    4712 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0501 02:53:19.554725    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0501 02:53:19.592793    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 02:53:19.638122    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0501 02:53:19.676636    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 02:53:19.716798    4712 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 02:53:19.755079    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0501 02:53:19.792962    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0501 02:53:19.828507    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0501 02:53:19.864630    4712 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 02:53:19.900003    4712 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 02:53:19.933687    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:53:20.164043    4712 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0501 02:53:20.200981    4712 start.go:494] detecting cgroup driver to use...
	I0501 02:53:20.214486    4712 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0501 02:53:20.252522    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 02:53:20.291404    4712 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 02:53:20.342446    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 02:53:20.384719    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 02:53:20.433485    4712 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0501 02:53:20.493558    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 02:53:20.521863    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 02:53:20.572266    4712 ssh_runner.go:195] Run: which cri-dockerd
	I0501 02:53:20.592650    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0501 02:53:20.612894    4712 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0501 02:53:20.662972    4712 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0501 02:53:20.893661    4712 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0501 02:53:21.103995    4712 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0501 02:53:21.104092    4712 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0501 02:53:21.153897    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:53:21.367769    4712 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0501 02:53:23.926290    4712 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5584356s)
	I0501 02:53:23.942886    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0501 02:53:23.985733    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0501 02:53:24.029327    4712 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0501 02:53:24.262777    4712 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0501 02:53:24.474412    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:53:24.701708    4712 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0501 02:53:24.747995    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0501 02:53:24.789968    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:53:25.013627    4712 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
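
The 130-byte /etc/docker/daemon.json scp'd above carries the cgroup-driver selection from the 'configuring docker to use "cgroupfs"' step. A sketch of rendering such a file; the exact key set minikube writes may differ:

// daemon_json.go: render a daemon.json selecting the cgroupfs driver.
package main

import (
	"encoding/json"
	"os"
)

func main() {
	cfg := map[string]any{
		"exec-opts":      []string{"native.cgroupdriver=cgroupfs"}, // the driver chosen above
		"log-driver":     "json-file",
		"log-opts":       map[string]string{"max-size": "100m"},
		"storage-driver": "overlay2",
	}
	out, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/docker/daemon.json", out, 0o644); err != nil {
		panic(err)
	}
}
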
	I0501 02:53:25.132301    4712 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0501 02:53:25.147412    4712 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0501 02:53:25.161719    4712 start.go:562] Will wait 60s for crictl version
	I0501 02:53:25.177972    4712 ssh_runner.go:195] Run: which crictl
	I0501 02:53:25.198441    4712 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 02:53:25.257309    4712 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0501 02:53:25.270183    4712 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0501 02:53:25.317675    4712 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0501 02:53:25.366446    4712 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0501 02:53:25.369267    4712 out.go:177]   - env NO_PROXY=172.28.217.218
	I0501 02:53:25.371205    4712 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0501 02:53:25.375182    4712 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0501 02:53:25.375182    4712 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0501 02:53:25.375182    4712 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0501 02:53:25.375182    4712 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:88:d7:f1 Flags:up|broadcast|multicast|running}
	I0501 02:53:25.380319    4712 ip.go:210] interface addr: fe80::916c:67e8:6e10:a19b/64
	I0501 02:53:25.380407    4712 ip.go:210] interface addr: 172.28.208.1/20
	I0501 02:53:25.393209    4712 ssh_runner.go:195] Run: grep 172.28.208.1	host.minikube.internal$ /etc/hosts
	I0501 02:53:25.400057    4712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
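The grep/echo/cp one-liner above makes the host.minikube.internal mapping idempotent: any stale line is stripped before the fresh one is appended. A Go sketch of the same upsert (function name hypothetical; ip and hostname from the log):

package main

import (
	"log"
	"os"
	"strings"
)

// upsertHost rewrites an /etc/hosts-style file so exactly one line maps
// name to ip, mirroring the grep -v / echo / cp one-liner in the log.
func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop any stale mapping
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHost("/etc/hosts", "172.28.208.1", "host.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}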
	I0501 02:53:25.423648    4712 mustload.go:65] Loading cluster: ha-136200
	I0501 02:53:25.424611    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:53:25.425544    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:53:27.528822    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:53:27.528822    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:27.528822    4712 host.go:66] Checking if "ha-136200" exists ...
	I0501 02:53:27.530295    4712 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200 for IP: 172.28.213.142
	I0501 02:53:27.530371    4712 certs.go:194] generating shared ca certs ...
	I0501 02:53:27.530371    4712 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:53:27.531276    4712 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0501 02:53:27.531739    4712 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0501 02:53:27.531846    4712 certs.go:256] generating profile certs ...
	I0501 02:53:27.532594    4712 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\client.key
	I0501 02:53:27.532748    4712 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.e4130e12
	I0501 02:53:27.532985    4712 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.e4130e12 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.217.218 172.28.213.142 172.28.223.254]
	I0501 02:53:27.709722    4712 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.e4130e12 ...
	I0501 02:53:27.709722    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.e4130e12: {Name:mk2a82749362965014fb3e2d8d662f7a4e7e9cdd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:53:27.711740    4712 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.e4130e12 ...
	I0501 02:53:27.711740    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.e4130e12: {Name:mkb73c4ed44f1dd1b8f82d46a1302578ac6f367b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:53:27.712120    4712 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.e4130e12 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt
	I0501 02:53:27.726267    4712 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.e4130e12 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key
	I0501 02:53:27.727349    4712 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key
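The apiserver serving cert generated above carries the service IP, loopback, both node IPs, and the HA VIP as IP SANs, so clients can dial any of them. A compact crypto/x509 sketch with those SANs (self-signed here for brevity; minikube actually signs with its CA key):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// IP SANs from the log: service IP, loopback, node IPs, and the HA VIP.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("172.28.217.218"), net.ParseIP("172.28.213.142"), net.ParseIP("172.28.223.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}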
	I0501 02:53:27.727349    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0501 02:53:27.727349    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0501 02:53:27.728383    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0501 02:53:27.728582    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0501 02:53:27.728825    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0501 02:53:27.729015    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0501 02:53:27.729253    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0501 02:53:27.729653    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0501 02:53:27.729899    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem (1338 bytes)
	W0501 02:53:27.730538    4712 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288_empty.pem, impossibly tiny 0 bytes
	I0501 02:53:27.730538    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0501 02:53:27.730927    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0501 02:53:27.731437    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0501 02:53:27.731866    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0501 02:53:27.732310    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem (1708 bytes)
	I0501 02:53:27.732905    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:53:27.733131    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem -> /usr/share/ca-certificates/14288.pem
	I0501 02:53:27.733384    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /usr/share/ca-certificates/142882.pem
	I0501 02:53:27.733671    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:53:29.906327    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:53:29.906327    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:29.906678    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:53:32.469869    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:53:32.469869    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:32.470407    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:53:32.580880    4712 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0501 02:53:32.588963    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0501 02:53:32.624993    4712 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0501 02:53:32.635801    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0501 02:53:32.670832    4712 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0501 02:53:32.678812    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0501 02:53:32.713791    4712 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0501 02:53:32.721308    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0501 02:53:32.760244    4712 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0501 02:53:32.767565    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0501 02:53:32.804387    4712 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0501 02:53:32.811905    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0501 02:53:32.832394    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 02:53:32.885891    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 02:53:32.936137    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 02:53:32.994824    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0501 02:53:33.054042    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0501 02:53:33.105998    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0501 02:53:33.156026    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 02:53:33.205426    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 02:53:33.264385    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 02:53:33.316776    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem --> /usr/share/ca-certificates/14288.pem (1338 bytes)
	I0501 02:53:33.368293    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /usr/share/ca-certificates/142882.pem (1708 bytes)
	I0501 02:53:33.420965    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0501 02:53:33.458001    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0501 02:53:33.499072    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0501 02:53:33.534603    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0501 02:53:33.570373    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0501 02:53:33.602430    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0501 02:53:33.635495    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0501 02:53:33.684802    4712 ssh_runner.go:195] Run: openssl version
	I0501 02:53:33.709070    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 02:53:33.743711    4712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:53:33.750970    4712 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:12 /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:53:33.765746    4712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:53:33.787709    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 02:53:33.828429    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14288.pem && ln -fs /usr/share/ca-certificates/14288.pem /etc/ssl/certs/14288.pem"
	I0501 02:53:33.866546    4712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14288.pem
	I0501 02:53:33.874255    4712 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:27 /usr/share/ca-certificates/14288.pem
	I0501 02:53:33.888580    4712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14288.pem
	I0501 02:53:33.910501    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14288.pem /etc/ssl/certs/51391683.0"
	I0501 02:53:33.948720    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142882.pem && ln -fs /usr/share/ca-certificates/142882.pem /etc/ssl/certs/142882.pem"
	I0501 02:53:33.993042    4712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142882.pem
	I0501 02:53:34.001989    4712 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:27 /usr/share/ca-certificates/142882.pem
	I0501 02:53:34.015762    4712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142882.pem
	I0501 02:53:34.040058    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142882.pem /etc/ssl/certs/3ec20f2e.0"
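Each CA bundle copied into /usr/share/ca-certificates is then linked into /etc/ssl/certs under its OpenSSL subject hash (e.g. b5213941.0) so TLS verification can find it by hash lookup. A sketch of those two commands driven from Go, shelling out to openssl just as the log does (function name hypothetical):

package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash symlinks certPath into /etc/ssl/certs as <hash>.0,
// the layout OpenSSL uses to locate trusted CAs.
func linkBySubjectHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // replace a stale link, mirroring ln -fs
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
}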
	I0501 02:53:34.077501    4712 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 02:53:34.086036    4712 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0501 02:53:34.086573    4712 kubeadm.go:928] updating node {m02 172.28.213.142 8443 v1.30.0 docker true true} ...
	I0501 02:53:34.086726    4712 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-136200-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.213.142
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP:172.28.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
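kubeadm.go renders that kubelet drop-in per node, substituting the node's hostname-override and node-ip. A small text/template sketch of the substitution (the template shape is an assumption; the flag values come straight from the log above):

package main

import (
	"log"
	"os"
	"text/template"
)

const unit = `[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	err := t.Execute(os.Stdout, map[string]string{
		"Version":  "v1.30.0",
		"Hostname": "ha-136200-m02",
		"NodeIP":   "172.28.213.142",
	})
	if err != nil {
		log.Fatal(err)
	}
}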
	I0501 02:53:34.086726    4712 kube-vip.go:111] generating kube-vip config ...
	I0501 02:53:34.101653    4712 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0501 02:53:34.130866    4712 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0501 02:53:34.131029    4712 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.28.223.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
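That manifest is written as a static pod, so the kubelet launches kube-vip before any API server is reachable; the vip_leaderelection/plndr-cp-lock settings ensure only one control-plane node answers for 172.28.223.254 at a time, and lb_enable round-robins API traffic on 8443. A sketch rendering just the env block with gopkg.in/yaml.v3 (an assumed dependency for illustration; minikube itself emits the manifest as templated text):

package main

import (
	"fmt"
	"log"

	"gopkg.in/yaml.v3"
)

// envVar mirrors the name/value pairs in the kube-vip container spec.
type envVar struct {
	Name  string `yaml:"name"`
	Value string `yaml:"value"`
}

func main() {
	env := []envVar{
		{"vip_arp", "true"},
		{"port", "8443"},
		{"vip_interface", "eth0"},
		{"cp_enable", "true"},
		{"vip_leaderelection", "true"},
		{"vip_leasename", "plndr-cp-lock"},
		{"address", "172.28.223.254"},
		{"lb_enable", "true"},
		{"lb_port", "8443"},
	}
	out, err := yaml.Marshal(env)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}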
	I0501 02:53:34.145238    4712 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 02:53:34.165400    4712 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0501 02:53:34.180369    4712 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0501 02:53:34.204849    4712 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet
	I0501 02:53:34.204849    4712 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm
	I0501 02:53:34.204849    4712 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl
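download.go fetches each Kubernetes binary alongside the published .sha256 file and verifies the digest before caching. A stdlib-only sketch of that check (URL from the log; dest path shortened; the streaming-hash structure is an assumption about the implementation):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
	"strings"
)

// fetchChecked downloads url to dest and fails unless the SHA-256 of the
// payload matches the hex digest served at url + ".sha256".
func fetchChecked(url, dest string) error {
	sumResp, err := http.Get(url + ".sha256")
	if err != nil {
		return err
	}
	defer sumResp.Body.Close()
	sumBytes, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return err
	}
	fields := strings.Fields(string(sumBytes))
	if len(fields) == 0 {
		return fmt.Errorf("empty checksum file for %s", url)
	}
	want := fields[0]

	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	f, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch for %s: got %s, want %s", url, got, want)
	}
	return nil
}

func main() {
	url := "https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet"
	if err := fetchChecked(url, "kubelet"); err != nil {
		log.Fatal(err)
	}
}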
	I0501 02:53:35.468257    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0501 02:53:35.481254    4712 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0501 02:53:35.488247    4712 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0501 02:53:35.489247    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0501 02:53:35.546630    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0501 02:53:35.559624    4712 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0501 02:53:35.626048    4712 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0501 02:53:35.627145    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0501 02:53:36.028150    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:53:36.077312    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0501 02:53:36.090870    4712 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0501 02:53:36.109939    4712 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0501 02:53:36.111871    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
	I0501 02:53:36.821139    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0501 02:53:36.843821    4712 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0501 02:53:36.878070    4712 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 02:53:36.917707    4712 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0501 02:53:36.971960    4712 ssh_runner.go:195] Run: grep 172.28.223.254	control-plane.minikube.internal$ /etc/hosts
	I0501 02:53:36.979482    4712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.223.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 02:53:37.020702    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:53:37.250249    4712 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 02:53:37.282989    4712 host.go:66] Checking if "ha-136200" exists ...
	I0501 02:53:37.299000    4712 start.go:316] joinCluster: &{Name:ha-136200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP:172.28.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.217.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.213.142 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:53:37.299000    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0501 02:53:37.299000    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:53:39.432833    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:53:39.432833    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:39.433070    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:53:42.011853    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:53:42.011853    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:53:42.012855    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:53:42.240815    4712 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0": (4.9416996s)
	I0501 02:53:42.240889    4712 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.28.213.142 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0501 02:53:42.240889    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ig07su.dw1rkx9dngecbwrb --discovery-token-ca-cert-hash sha256:4798dcaffc6298c78af5ad06745de18c1231853c65c9db4cd09ab1be96f5e875 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-136200-m02 --control-plane --apiserver-advertise-address=172.28.213.142 --apiserver-bind-port=8443"
	I0501 02:54:27.807891    4712 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ig07su.dw1rkx9dngecbwrb --discovery-token-ca-cert-hash sha256:4798dcaffc6298c78af5ad06745de18c1231853c65c9db4cd09ab1be96f5e875 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-136200-m02 --control-plane --apiserver-advertise-address=172.28.213.142 --apiserver-bind-port=8443": (45.5666728s)
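The join itself is just a long-running command executed over SSH with the node's generated key, and ssh_runner reports the elapsed time on completion (45.6s here). A sketch of that pattern with golang.org/x/crypto/ssh (host, key path, and command taken from the log; the join command is shortened to the token step for brevity):

package main

import (
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile(`C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa`)
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
	}
	client, err := ssh.Dial("tcp", "172.28.217.218:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	start := time.Now()
	out, err := sess.CombinedOutput("sudo kubeadm token create --print-join-command --ttl=0")
	if err != nil {
		log.Fatalf("%v\n%s", err, out)
	}
	log.Printf("Completed in %s: %s", time.Since(start), out)
}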
	I0501 02:54:27.808014    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0501 02:54:28.660694    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-136200-m02 minikube.k8s.io/updated_at=2024_05_01T02_54_28_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e minikube.k8s.io/name=ha-136200 minikube.k8s.io/primary=false
	I0501 02:54:28.861404    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-136200-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0501 02:54:29.035785    4712 start.go:318] duration metric: took 51.7364106s to joinCluster
	I0501 02:54:29.035979    4712 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.28.213.142 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0501 02:54:29.038999    4712 out.go:177] * Verifying Kubernetes components...
	I0501 02:54:29.036818    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:54:29.055991    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:54:29.482004    4712 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 02:54:29.532870    4712 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0501 02:54:29.534181    4712 kapi.go:59] client config for ha-136200: &rest.Config{Host:"https://172.28.223.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-136200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-136200\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1b95ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0501 02:54:29.534386    4712 kubeadm.go:477] Overriding stale ClientConfig host https://172.28.223.254:8443 with https://172.28.217.218:8443
	I0501 02:54:29.535958    4712 node_ready.go:35] waiting up to 6m0s for node "ha-136200-m02" to be "Ready" ...
	I0501 02:54:29.536236    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:29.536236    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:29.536236    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:29.536353    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:29.609745    4712 round_trippers.go:574] Response Status: 200 OK in 73 milliseconds
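The round_trippers lines come from client-go's debug wrapper, which intercepts every API request and logs verb plus URL, the request headers, and the response status with latency. A stdlib-only sketch of the same idea as an http.RoundTripper (type and field names hypothetical; the URL needs a reachable cluster and client certs in practice):

package main

import (
	"log"
	"net/http"
	"time"
)

// loggingTransport wraps another RoundTripper and logs each request the
// way round_trippers.go does: verb+URL, headers, then status+latency.
type loggingTransport struct{ next http.RoundTripper }

func (t loggingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	log.Printf("%s %s", req.Method, req.URL)
	for k, v := range req.Header {
		log.Printf("    %s: %v", k, v)
	}
	start := time.Now()
	resp, err := t.next.RoundTrip(req)
	if err != nil {
		return nil, err
	}
	log.Printf("Response Status: %s in %d milliseconds", resp.Status, time.Since(start).Milliseconds())
	return resp, nil
}

func main() {
	client := &http.Client{Transport: loggingTransport{next: http.DefaultTransport}}
	resp, err := client.Get("https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02")
	if err != nil {
		log.Fatal(err)
	}
	resp.Body.Close()
}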
	I0501 02:54:30.045557    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:30.045557    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:30.045557    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:30.045557    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:30.051535    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:30.542020    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:30.542083    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:30.542148    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:30.542148    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:30.549047    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:31.050630    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:31.050694    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:31.050694    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:31.050694    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:31.063209    4712 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0501 02:54:31.542025    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:31.542098    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:31.542098    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:31.542098    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:31.548667    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:31.549663    4712 node_ready.go:53] node "ha-136200-m02" has status "Ready":"False"
	I0501 02:54:32.050097    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:32.050097    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:32.050174    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:32.050174    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:32.054568    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:32.542017    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:32.542017    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:32.542017    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:32.542017    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:32.546488    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:33.050866    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:33.050866    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:33.050866    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:33.050866    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:33.056451    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:33.538033    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:33.538033    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:33.538033    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:33.538033    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:33.713541    4712 round_trippers.go:574] Response Status: 200 OK in 175 milliseconds
	I0501 02:54:33.714615    4712 node_ready.go:53] node "ha-136200-m02" has status "Ready":"False"
	I0501 02:54:34.041226    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:34.041226    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:34.041226    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:34.041226    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:34.047903    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:34.547454    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:34.547454    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:34.547757    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:34.547757    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:34.552099    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:35.046877    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:35.046877    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.046877    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.046877    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.052278    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:35.548463    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:35.548463    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.548740    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.548740    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.558660    4712 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0501 02:54:35.560213    4712 node_ready.go:49] node "ha-136200-m02" has status "Ready":"True"
	I0501 02:54:35.560213    4712 node_ready.go:38] duration metric: took 6.0241453s for node "ha-136200-m02" to be "Ready" ...
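node_ready polls GET /api/v1/nodes/<name> until the Ready condition flips to True (6.02s above). A trimmed stand-in for that wait against the raw JSON (the struct covers only the fields needed; client setup with the profile's TLS certs is omitted, so plain http.DefaultClient here is a placeholder):

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// node mirrors just enough of the v1 Node schema to read readiness.
type node struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

func nodeReady(client *http.Client, base, name string) (bool, error) {
	resp, err := client.Get(base + "/api/v1/nodes/" + name)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	var n node
	if err := json.NewDecoder(resp.Body).Decode(&n); err != nil {
		return false, err
	}
	for _, c := range n.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	return false, nil
}

func main() {
	base := "https://172.28.217.218:8443" // needs client certs in a real run
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		ok, err := nodeReady(http.DefaultClient, base, "ha-136200-m02")
		if err == nil && ok {
			fmt.Println(`node has status "Ready":"True"`)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for node")
}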
	I0501 02:54:35.560332    4712 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 02:54:35.560422    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:54:35.560422    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.560422    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.560422    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.572050    4712 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0501 02:54:35.581777    4712 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2j8mj" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:35.581924    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2j8mj
	I0501 02:54:35.581924    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.581924    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.581924    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.585770    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:54:35.587608    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:54:35.587685    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.587685    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.587685    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.591867    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:35.591867    4712 pod_ready.go:92] pod "coredns-7db6d8ff4d-2j8mj" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:35.591867    4712 pod_ready.go:81] duration metric: took 10.0903ms for pod "coredns-7db6d8ff4d-2j8mj" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:35.591867    4712 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rm4gm" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:35.591867    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rm4gm
	I0501 02:54:35.591867    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.591867    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.591867    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.596249    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:35.597880    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:54:35.597964    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.597964    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.597964    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.602327    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:35.603007    4712 pod_ready.go:92] pod "coredns-7db6d8ff4d-rm4gm" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:35.603007    4712 pod_ready.go:81] duration metric: took 11.1397ms for pod "coredns-7db6d8ff4d-rm4gm" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:35.603007    4712 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:35.604166    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200
	I0501 02:54:35.604211    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.604211    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.604211    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.610508    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:35.611831    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:54:35.611831    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.611831    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.611831    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.627921    4712 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0501 02:54:35.629498    4712 pod_ready.go:92] pod "etcd-ha-136200" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:35.629498    4712 pod_ready.go:81] duration metric: took 26.4906ms for pod "etcd-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:35.629498    4712 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:35.629498    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:35.629498    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.629498    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.629498    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.638393    4712 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0501 02:54:35.638911    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:35.638911    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:35.638911    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:35.639550    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:35.643473    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:54:36.140037    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:36.140167    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:36.140167    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:36.140167    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:36.148123    4712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:54:36.149580    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:36.149580    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:36.149659    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:36.149659    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:36.155530    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:36.644340    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:36.644340    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:36.644340    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:36.644340    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:36.651321    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:36.652588    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:36.653128    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:36.653128    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:36.653128    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:36.660377    4712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:54:37.144534    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:37.144656    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:37.144656    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:37.144656    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:37.150598    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:37.152092    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:37.152665    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:37.152665    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:37.152665    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:37.160441    4712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:54:37.644104    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:37.644239    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:37.644239    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:37.644239    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:37.649836    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:37.650604    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:37.650671    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:37.650671    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:37.650671    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:37.654947    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:37.656164    4712 pod_ready.go:102] pod "etcd-ha-136200-m02" in "kube-system" namespace has status "Ready":"False"
	I0501 02:54:38.142505    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:38.142505    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:38.142505    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:38.142505    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:38.149100    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:38.151258    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:38.151347    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:38.151347    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:38.151347    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:38.155677    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:38.643186    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:38.643241    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:38.643241    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:38.643241    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:38.650578    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:38.651873    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:38.651902    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:38.651902    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:38.651902    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:38.655946    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:39.142681    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:39.142681    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:39.142681    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:39.142681    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:39.148315    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:39.149953    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:39.150203    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:39.150203    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:39.150203    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:39.154771    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:39.643364    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:39.643364    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:39.643364    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:39.643364    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:39.649703    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:39.650947    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:39.650947    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:39.651009    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:39.651009    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:39.654949    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:54:39.656190    4712 pod_ready.go:102] pod "etcd-ha-136200-m02" in "kube-system" namespace has status "Ready":"False"
	I0501 02:54:40.142428    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:40.142428    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:40.142676    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:40.142676    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:40.148562    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:40.149792    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:40.149792    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:40.149792    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:40.149792    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:40.154808    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:40.644095    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:40.644095    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:40.644095    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:40.644095    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:40.650441    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:40.651544    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:40.651598    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:40.651598    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:40.651598    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:40.662172    4712 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0501 02:54:41.143094    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:41.143187    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:41.143187    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:41.143187    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:41.148870    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:41.150018    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:41.150018    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:41.150018    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:41.150018    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:41.156810    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:54:41.640508    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:41.640624    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:41.640624    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:41.640624    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:41.646018    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:41.646730    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:41.647318    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:41.647318    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:41.647318    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:41.652880    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:42.139900    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:42.139985    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:42.139985    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:42.139985    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:42.145577    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:42.146383    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:42.146383    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:42.146448    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:42.146448    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:42.151141    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:54:42.151862    4712 pod_ready.go:102] pod "etcd-ha-136200-m02" in "kube-system" namespace has status "Ready":"False"
	I0501 02:54:42.639271    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:42.639271    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:42.639271    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:42.639271    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:42.642318    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:54:42.646671    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:42.646671    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:42.646671    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:42.646671    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:42.651360    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:43.137151    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:43.137496    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.137496    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.137496    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.141750    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:43.142959    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:43.142959    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.142959    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.142959    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.147560    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:43.641950    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:54:43.641985    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.641985    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.641985    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.647599    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:43.649299    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:43.649350    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.649350    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.649350    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.657034    4712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:54:43.658043    4712 pod_ready.go:92] pod "etcd-ha-136200-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:43.658043    4712 pod_ready.go:81] duration metric: took 8.0284866s for pod "etcd-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
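
The paired GET requests above (the pod, then its node) are minikube's pod_ready poll, repeated at roughly a 500ms cadence until the pod reports the Ready condition. A minimal client-go sketch of the same check, assuming a kubeconfig at the default path — an illustrative stand-in, not minikube's actual pod_ready helper:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls until the pod reports condition Ready=True, the same
    // signal the pod_ready.go:92 line above logs.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // transient API error: keep polling
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        if err := waitPodReady(context.Background(), cs, "kube-system", "etcd-ha-136200-m02", 6*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("pod is Ready")
    }
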
	I0501 02:54:43.658043    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:43.658043    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-136200
	I0501 02:54:43.658043    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.658043    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.658043    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.664394    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:43.664394    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:54:43.664394    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.664394    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.664394    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.668848    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:43.669848    4712 pod_ready.go:92] pod "kube-apiserver-ha-136200" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:43.669848    4712 pod_ready.go:81] duration metric: took 11.805ms for pod "kube-apiserver-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:43.669848    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:43.669848    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-136200-m02
	I0501 02:54:43.669848    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.669848    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.670830    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.674754    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:54:43.676699    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:43.676699    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.676699    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.676699    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.681632    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:43.683231    4712 pod_ready.go:92] pod "kube-apiserver-ha-136200-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:43.683231    4712 pod_ready.go:81] duration metric: took 13.3825ms for pod "kube-apiserver-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:43.683231    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:43.683412    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-136200
	I0501 02:54:43.683412    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.683412    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.683412    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.688589    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:43.690255    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:54:43.690255    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.690325    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.690325    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.695853    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:43.696818    4712 pod_ready.go:92] pod "kube-controller-manager-ha-136200" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:43.696860    4712 pod_ready.go:81] duration metric: took 13.6296ms for pod "kube-controller-manager-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:43.696912    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:43.696993    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-136200-m02
	I0501 02:54:43.697029    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.697029    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.697029    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.701912    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:43.703032    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:43.703736    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.703736    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.703736    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.706383    4712 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:54:43.707734    4712 pod_ready.go:92] pod "kube-controller-manager-ha-136200-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:43.707824    4712 pod_ready.go:81] duration metric: took 10.9115ms for pod "kube-controller-manager-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:43.707824    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8f67k" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:43.845210    4712 request.go:629] Waited for 137.1807ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8f67k
	I0501 02:54:43.845681    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8f67k
	I0501 02:54:43.845681    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:43.845681    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:43.845681    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:43.851000    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:44.047046    4712 request.go:629] Waited for 194.7517ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:54:44.047200    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:54:44.047200    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:44.047200    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:44.047200    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:44.052548    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:44.053735    4712 pod_ready.go:92] pod "kube-proxy-8f67k" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:44.053735    4712 pod_ready.go:81] duration metric: took 345.9086ms for pod "kube-proxy-8f67k" in "kube-system" namespace to be "Ready" ...
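
The request.go:629 lines record client-go's client-side rate limiter, not API-server priority and fairness: rest.Config defaults to QPS=5 with Burst=10, and each readiness probe above costs two requests, so once the burst is spent every call queues locally. A sketch of raising the budget (the values are arbitrary, for illustration only):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        // client-go defaults to QPS=5, Burst=10; with a higher budget a burst
        // of queries proceeds without the "Waited for ..." delays in the log.
        cfg.QPS = 50
        cfg.Burst = 100
        cs := kubernetes.NewForConfigOrDie(cfg)

        for i := 0; i < 20; i++ {
            if _, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{Limit: 1}); err != nil {
                panic(err)
            }
        }
        fmt.Println("20 list calls issued without client-side throttling")
    }
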
	I0501 02:54:44.053735    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zj5jv" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:44.250128    4712 request.go:629] Waited for 196.1147ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zj5jv
	I0501 02:54:44.250128    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zj5jv
	I0501 02:54:44.250128    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:44.250128    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:44.250128    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:44.254761    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:44.456435    4712 request.go:629] Waited for 200.6839ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:44.456435    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:44.456435    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:44.456435    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:44.456435    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:44.461480    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:44.462518    4712 pod_ready.go:92] pod "kube-proxy-zj5jv" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:44.462578    4712 pod_ready.go:81] duration metric: took 408.7057ms for pod "kube-proxy-zj5jv" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:44.462578    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:44.648779    4712 request.go:629] Waited for 185.8104ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200
	I0501 02:54:44.648953    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200
	I0501 02:54:44.648953    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:44.648953    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:44.649128    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:44.654457    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:44.855621    4712 request.go:629] Waited for 199.4812ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:54:44.855706    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:54:44.855706    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:44.855706    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:44.855706    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:44.861147    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:44.861147    4712 pod_ready.go:92] pod "kube-scheduler-ha-136200" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:44.861699    4712 pod_ready.go:81] duration metric: took 399.1179ms for pod "kube-scheduler-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:44.861778    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:45.042766    4712 request.go:629] Waited for 180.9309ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200-m02
	I0501 02:54:45.042766    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200-m02
	I0501 02:54:45.042766    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:45.042766    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:45.042766    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:45.047379    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:54:45.244553    4712 request.go:629] Waited for 197.0101ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:45.244553    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:54:45.244553    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:45.244553    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:45.244553    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:45.250870    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:54:45.252485    4712 pod_ready.go:92] pod "kube-scheduler-ha-136200-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:54:45.252485    4712 pod_ready.go:81] duration metric: took 390.7033ms for pod "kube-scheduler-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:54:45.252547    4712 pod_ready.go:38] duration metric: took 9.6921442s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 02:54:45.252619    4712 api_server.go:52] waiting for apiserver process to appear ...
	I0501 02:54:45.266607    4712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:54:45.298538    4712 api_server.go:72] duration metric: took 16.2624407s to wait for apiserver process to appear ...
	I0501 02:54:45.298538    4712 api_server.go:88] waiting for apiserver healthz status ...
	I0501 02:54:45.298642    4712 api_server.go:253] Checking apiserver healthz at https://172.28.217.218:8443/healthz ...
	I0501 02:54:45.308804    4712 api_server.go:279] https://172.28.217.218:8443/healthz returned 200:
	ok
	I0501 02:54:45.308804    4712 round_trippers.go:463] GET https://172.28.217.218:8443/version
	I0501 02:54:45.308804    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:45.308804    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:45.308804    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:45.310764    4712 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0501 02:54:45.311165    4712 api_server.go:141] control plane version: v1.30.0
	I0501 02:54:45.311238    4712 api_server.go:131] duration metric: took 12.7003ms to wait for apiserver health ...
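
The healthz and version checks map onto two raw GETs: /healthz returns the literal body "ok", and /version is where "control plane version: v1.30.0" comes from. A sketch using the discovery REST client, again assuming a kubeconfig at the default path:

    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // GET /healthz: the body is the literal string "ok" when healthy.
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
        if err != nil {
            panic(err)
        }
        fmt.Printf("healthz: %s\n", body)

        // GET /version: the same call minikube's api_server.go:141 reports.
        v, err := cs.Discovery().ServerVersion()
        if err != nil {
            panic(err)
        }
        fmt.Println("control plane version:", v.GitVersion)
    }
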
	I0501 02:54:45.311238    4712 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 02:54:45.446869    4712 request.go:629] Waited for 135.3903ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:54:45.446869    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:54:45.446869    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:45.446869    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:45.446869    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:45.455463    4712 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0501 02:54:45.466055    4712 system_pods.go:59] 17 kube-system pods found
	I0501 02:54:45.466055    4712 system_pods.go:61] "coredns-7db6d8ff4d-2j8mj" [f945c979-ae51-4c8e-acf9-105adc3c83bc] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "coredns-7db6d8ff4d-rm4gm" [87b284b3-e8e1-452a-8c72-41a8bec62505] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "etcd-ha-136200" [509a726d-e9a1-4922-8e7e-f3d91ddef75f] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "etcd-ha-136200-m02" [8122eb28-1fdf-4ddf-ab30-c29e8bcb83c0] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kindnet-kb2x4" [6e660648-3dce-469f-a2c2-c99f445ceb20] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kindnet-sj2rc" [c0e605a0-1182-4977-a8ba-fabe9617bd3c] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-apiserver-ha-136200" [53ea7d41-7132-4c89-9dbd-bedb2267b55f] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-apiserver-ha-136200-m02" [fc4036e1-5cc9-4f27-8299-97ee4a29e8b4] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-controller-manager-ha-136200" [4c988ab2-e056-4a0e-88c9-b62839c84d9f] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-controller-manager-ha-136200-m02" [7a617a7e-7413-4f42-bfe2-763b7ace71ca] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-proxy-8f67k" [9dedea03-3066-4852-98e2-10190699b2c5] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-proxy-zj5jv" [1802b341-6ac6-46b0-99a3-db02ae5d8e46] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-scheduler-ha-136200" [6be37365-544a-4367-9852-6eaa5b60e6ad] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-scheduler-ha-136200-m02" [b2ae6bb2-989b-4598-99e3-f8494b006f3e] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-vip-ha-136200" [f6f631ac-0ba9-413a-8810-8a80e4be81b8] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "kube-vip-ha-136200-m02" [598e76fa-0703-40eb-a62c-f3947f06d0e0] Running
	I0501 02:54:45.466055    4712 system_pods.go:61] "storage-provisioner" [ca2bdb41-e7f0-4aa4-b343-0e3a85a4c04e] Running
	I0501 02:54:45.466055    4712 system_pods.go:74] duration metric: took 154.8157ms to wait for pod list to return data ...
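
The 17-pod inventory above is a single List against the kube-system namespace. A sketch that reproduces the per-pod name/UID/phase lines — illustrative, not minikube's system_pods helper:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        running := 0
        for _, p := range pods.Items {
            // system_pods.go:61 prints exactly this triple per pod.
            fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
            if p.Status.Phase == corev1.PodRunning {
                running++
            }
        }
        fmt.Printf("%d kube-system pods found, %d Running\n", len(pods.Items), running)
    }
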
	I0501 02:54:45.466055    4712 default_sa.go:34] waiting for default service account to be created ...
	I0501 02:54:45.650374    4712 request.go:629] Waited for 183.8749ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/default/serviceaccounts
	I0501 02:54:45.650461    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/default/serviceaccounts
	I0501 02:54:45.650461    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:45.650566    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:45.650566    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:45.661544    4712 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0501 02:54:45.662734    4712 default_sa.go:45] found service account: "default"
	I0501 02:54:45.662869    4712 default_sa.go:55] duration metric: took 196.812ms for default service account to be created ...
	I0501 02:54:45.662869    4712 system_pods.go:116] waiting for k8s-apps to be running ...
	I0501 02:54:45.853192    4712 request.go:629] Waited for 189.9269ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:54:45.853192    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:54:45.853192    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:45.853419    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:45.853419    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:45.865601    4712 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0501 02:54:45.872777    4712 system_pods.go:86] 17 kube-system pods found
	I0501 02:54:45.872777    4712 system_pods.go:89] "coredns-7db6d8ff4d-2j8mj" [f945c979-ae51-4c8e-acf9-105adc3c83bc] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "coredns-7db6d8ff4d-rm4gm" [87b284b3-e8e1-452a-8c72-41a8bec62505] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "etcd-ha-136200" [509a726d-e9a1-4922-8e7e-f3d91ddef75f] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "etcd-ha-136200-m02" [8122eb28-1fdf-4ddf-ab30-c29e8bcb83c0] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kindnet-kb2x4" [6e660648-3dce-469f-a2c2-c99f445ceb20] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kindnet-sj2rc" [c0e605a0-1182-4977-a8ba-fabe9617bd3c] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kube-apiserver-ha-136200" [53ea7d41-7132-4c89-9dbd-bedb2267b55f] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kube-apiserver-ha-136200-m02" [fc4036e1-5cc9-4f27-8299-97ee4a29e8b4] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kube-controller-manager-ha-136200" [4c988ab2-e056-4a0e-88c9-b62839c84d9f] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kube-controller-manager-ha-136200-m02" [7a617a7e-7413-4f42-bfe2-763b7ace71ca] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kube-proxy-8f67k" [9dedea03-3066-4852-98e2-10190699b2c5] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kube-proxy-zj5jv" [1802b341-6ac6-46b0-99a3-db02ae5d8e46] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kube-scheduler-ha-136200" [6be37365-544a-4367-9852-6eaa5b60e6ad] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kube-scheduler-ha-136200-m02" [b2ae6bb2-989b-4598-99e3-f8494b006f3e] Running
	I0501 02:54:45.872777    4712 system_pods.go:89] "kube-vip-ha-136200" [f6f631ac-0ba9-413a-8810-8a80e4be81b8] Running
	I0501 02:54:45.873359    4712 system_pods.go:89] "kube-vip-ha-136200-m02" [598e76fa-0703-40eb-a62c-f3947f06d0e0] Running
	I0501 02:54:45.873359    4712 system_pods.go:89] "storage-provisioner" [ca2bdb41-e7f0-4aa4-b343-0e3a85a4c04e] Running
	I0501 02:54:45.873383    4712 system_pods.go:126] duration metric: took 210.5126ms to wait for k8s-apps to be running ...
	I0501 02:54:45.873383    4712 system_svc.go:44] waiting for kubelet service to be running ....
	I0501 02:54:45.886040    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:54:45.914966    4712 system_svc.go:56] duration metric: took 41.5829ms WaitForService to wait for kubelet
	I0501 02:54:45.915054    4712 kubeadm.go:576] duration metric: took 16.8789526s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
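
WaitForService is one SSH command whose exit status is the whole answer: systemctl is-active exits 0 only when the unit is active, and --quiet suppresses output. A stand-in for minikube's ssh_runner using golang.org/x/crypto/ssh — host, username and key path are taken from later in this log, and InsecureIgnoreHostKey is acceptable only because this is a throwaway test VM:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile(`C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\id_rsa`)
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        client, err := ssh.Dial("tcp", "172.28.216.62:22", &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM only
        })
        if err != nil {
            panic(err)
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()

        // Exit status 0 means the kubelet unit is active; Run returns an
        // *ssh.ExitError for any non-zero status.
        if err := sess.Run("sudo systemctl is-active --quiet service kubelet"); err != nil {
            fmt.Println("kubelet is not running:", err)
            return
        }
        fmt.Println("kubelet is running")
    }
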
	I0501 02:54:45.915054    4712 node_conditions.go:102] verifying NodePressure condition ...
	I0501 02:54:46.043164    4712 request.go:629] Waited for 127.8974ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes
	I0501 02:54:46.043164    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes
	I0501 02:54:46.043164    4712 round_trippers.go:469] Request Headers:
	I0501 02:54:46.043164    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:54:46.043310    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:54:46.050320    4712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:54:46.051501    4712 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 02:54:46.051501    4712 node_conditions.go:123] node cpu capacity is 2
	I0501 02:54:46.051501    4712 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 02:54:46.051501    4712 node_conditions.go:123] node cpu capacity is 2
	I0501 02:54:46.051501    4712 node_conditions.go:105] duration metric: took 136.4457ms to run NodePressure ...
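
The NodePressure figures (17734596Ki of ephemeral storage and 2 CPUs, printed once per node) come straight from node.Status.Capacity. A sketch:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            // The two node_conditions.go figures per node are these capacities.
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, eph.String(), cpu.String())
        }
    }
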
	I0501 02:54:46.051501    4712 start.go:240] waiting for startup goroutines ...
	I0501 02:54:46.051501    4712 start.go:254] writing updated cluster config ...
	I0501 02:54:46.055981    4712 out.go:177] 
	I0501 02:54:46.073210    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:54:46.073681    4712 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json ...
	I0501 02:54:46.079155    4712 out.go:177] * Starting "ha-136200-m03" control-plane node in "ha-136200" cluster
	I0501 02:54:46.082550    4712 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0501 02:54:46.082550    4712 cache.go:56] Caching tarball of preloaded images
	I0501 02:54:46.083028    4712 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0501 02:54:46.083223    4712 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0501 02:54:46.083384    4712 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json ...
	I0501 02:54:46.091748    4712 start.go:360] acquireMachinesLock for ha-136200-m03: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 02:54:46.091748    4712 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-136200-m03"
	I0501 02:54:46.091748    4712 start.go:93] Provisioning new machine with config: &{Name:ha-136200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP:172.28.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.217.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.213.142 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0501 02:54:46.091748    4712 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0501 02:54:46.099863    4712 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0501 02:54:46.100178    4712 start.go:159] libmachine.API.Create for "ha-136200" (driver="hyperv")
	I0501 02:54:46.100178    4712 client.go:168] LocalClient.Create starting
	I0501 02:54:46.100178    4712 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0501 02:54:46.100824    4712 main.go:141] libmachine: Decoding PEM data...
	I0501 02:54:46.100824    4712 main.go:141] libmachine: Parsing certificate...
	I0501 02:54:46.101128    4712 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0501 02:54:46.101380    4712 main.go:141] libmachine: Decoding PEM data...
	I0501 02:54:46.101380    4712 main.go:141] libmachine: Parsing certificate...
	I0501 02:54:46.101380    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0501 02:54:48.122930    4712 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0501 02:54:48.122930    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:54:48.122930    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0501 02:54:49.970242    4712 main.go:141] libmachine: [stdout =====>] : False
	
	I0501 02:54:49.971128    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:54:49.971128    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0501 02:54:51.553112    4712 main.go:141] libmachine: [stdout =====>] : True
	
	I0501 02:54:51.553112    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:54:51.553966    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0501 02:54:55.355693    4712 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0501 02:54:55.355693    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:54:55.358013    4712 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso...
	I0501 02:54:55.879042    4712 main.go:141] libmachine: Creating SSH key...
	I0501 02:54:55.991258    4712 main.go:141] libmachine: Creating VM...
	I0501 02:54:55.991258    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0501 02:54:58.933270    4712 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0501 02:54:58.933270    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:54:58.933270    4712 main.go:141] libmachine: Using switch "Default Switch"
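
The switch query above keeps External switches plus the fixed GUID of Hyper-V's built-in "Default Switch" (which reports SwitchType 1, i.e. Internal, so it has to be matched by Id rather than by type). A sketch of shelling out the same way minikube does, via exec of powershell.exe — the struct and its type comment are illustrative:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type vmSwitch struct {
        Id         string
        Name       string
        SwitchType int // serializes as 0=Private, 1=Internal, 2=External
    }

    func main() {
        // Same query as the log: external switches, plus the well-known GUID
        // of the built-in NAT'd "Default Switch".
        ps := `ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType | ` +
            `Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')} | ` +
            `Sort-Object -Property SwitchType)`
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", ps).Output()
        if err != nil {
            panic(err)
        }
        var switches []vmSwitch
        if err := json.Unmarshal(out, &switches); err != nil {
            panic(err)
        }
        for _, s := range switches {
            fmt.Printf("candidate switch %q (type %d)\n", s.Name, s.SwitchType)
        }
    }
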
	I0501 02:54:58.933728    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0501 02:55:00.789675    4712 main.go:141] libmachine: [stdout =====>] : True
	
	I0501 02:55:00.789938    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:00.789938    4712 main.go:141] libmachine: Creating VHD
	I0501 02:55:00.789938    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0501 02:55:04.583967    4712 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : AAB86B48-3D75-4842-8FF8-3BDEC4AB86C2
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0501 02:55:04.584134    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:04.584192    4712 main.go:141] libmachine: Writing magic tar header
	I0501 02:55:04.584192    4712 main.go:141] libmachine: Writing SSH key tar header
	I0501 02:55:04.594277    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0501 02:55:07.812902    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:07.812902    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:07.812902    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\disk.vhd' -SizeBytes 20000MB
	I0501 02:55:10.391210    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:10.391245    4712 main.go:141] libmachine: [stderr =====>] : 
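
The fixed-10MB-VHD / "magic tar header" / convert-to-dynamic / resize sequence above is the driver's way of seeding the guest: a fixed VHD is raw disk data followed by a footer, so a tar stream written at offset 0 (carrying the SSH key) is readable by the guest's init on first boot, and converting to a dynamic VHD afterwards keeps the file small until the resized space is actually used. A sketch of the tar-writing step only, with illustrative paths — not minikube's exact code:

    package main

    import (
        "archive/tar"
        "os"
    )

    func main() {
        pubKey, err := os.ReadFile("id_rsa.pub") // hypothetical key path
        if err != nil {
            panic(err)
        }
        f, err := os.OpenFile("fixed.vhd", os.O_WRONLY, 0o644)
        if err != nil {
            panic(err)
        }
        defer f.Close()

        // Write from offset 0; the 10MB of raw data leaves the trailing VHD
        // footer untouched, so the file remains a valid fixed VHD.
        tw := tar.NewWriter(f)
        if err := tw.WriteHeader(&tar.Header{
            Name: ".ssh/authorized_keys",
            Mode: 0o644,
            Size: int64(len(pubKey)),
        }); err != nil {
            panic(err)
        }
        if _, err := tw.Write(pubKey); err != nil {
            panic(err)
        }
        if err := tw.Close(); err != nil {
            panic(err)
        }
    }
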
	I0501 02:55:10.391352    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-136200-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0501 02:55:14.151278    4712 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-136200-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0501 02:55:14.151278    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:14.151882    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-136200-m03 -DynamicMemoryEnabled $false
	I0501 02:55:16.476957    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:16.476957    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:16.478022    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-136200-m03 -Count 2
	I0501 02:55:18.717259    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:18.717259    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:18.717850    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-136200-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\boot2docker.iso'
	I0501 02:55:21.310252    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:21.310252    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:21.310252    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-136200-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\disk.vhd'
	I0501 02:55:24.025209    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:24.025209    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:24.025533    4712 main.go:141] libmachine: Starting VM...
	I0501 02:55:24.025533    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-136200-m03
	I0501 02:55:27.131510    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:27.131510    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:27.131722    4712 main.go:141] libmachine: Waiting for host to start...
	I0501 02:55:27.131722    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:55:29.452098    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:55:29.453035    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:29.453089    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:55:32.025441    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:32.026234    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:33.036612    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:55:35.273538    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:55:35.273538    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:35.273538    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:55:37.849230    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:37.849355    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:38.854379    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:55:41.083466    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:55:41.083466    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:41.083466    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:55:43.607622    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:43.607622    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:44.621333    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:55:46.858272    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:55:46.858272    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:46.858272    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:55:49.475402    4712 main.go:141] libmachine: [stdout =====>] : 
	I0501 02:55:49.476316    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:50.480573    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:55:52.723494    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:55:52.723494    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:52.724713    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:55:55.378897    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:55:55.378897    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:55.379189    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:55:57.536029    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:55:57.536029    4712 main.go:141] libmachine: [stderr =====>] : 
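
"Waiting for host to start..." is a poll of the VM state plus the first NIC's first IP address; the address stays empty until the guest's Hyper-V integration services report it, which is why several probes above return blank stdout before 172.28.216.62 appears. A sketch of the same loop (VM name taken from this log; each PowerShell round trip itself costs a couple of seconds, as the timestamps show):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // vmIP asks Hyper-V for the first reported address of the VM's first NIC,
    // the same expression the log executes.
    func vmIP(name string) (string, error) {
        ps := fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, name)
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", ps).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        for {
            ip, err := vmIP("ha-136200-m03")
            if err != nil {
                panic(err)
            }
            if ip != "" {
                fmt.Println("host is up at", ip)
                return
            }
            time.Sleep(time.Second) // matches the ~1s pause between probes
        }
    }
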
	I0501 02:55:57.536246    4712 machine.go:94] provisionDockerMachine start ...
	I0501 02:55:57.536246    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:55:59.681292    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:55:59.681842    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:55:59.682021    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:02.296390    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:02.296390    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:02.302435    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:56:02.303031    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.216.62 22 <nil> <nil>}
	I0501 02:56:02.303031    4712 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 02:56:02.440858    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0501 02:56:02.440919    4712 buildroot.go:166] provisioning hostname "ha-136200-m03"
	I0501 02:56:02.440919    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:04.540210    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:04.540210    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:04.541126    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:07.111624    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:07.111624    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:07.118513    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:56:07.119097    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.216.62 22 <nil> <nil>}
	I0501 02:56:07.119097    4712 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-136200-m03 && echo "ha-136200-m03" | sudo tee /etc/hostname
	I0501 02:56:07.274395    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-136200-m03
	
	I0501 02:56:07.274395    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:09.427222    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:09.427413    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:09.427413    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:12.066151    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:12.066558    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:12.072701    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:56:12.073263    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.216.62 22 <nil> <nil>}
	I0501 02:56:12.073263    4712 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-136200-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-136200-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-136200-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 02:56:12.226572    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: 
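
The SSH script above is an idempotent /etc/hosts patch: if no line already ends with the new hostname, it rewrites an existing 127.0.1.1 entry in place, otherwise it appends one, so repeated provisioning runs leave the file unchanged. The same logic in Go for clarity — illustrative only; it would run on the guest and needs root:

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // ensureHostsEntry maps 127.0.1.1 to the node's hostname, editing an
    // existing 127.0.1.1 line if present and appending otherwise.
    func ensureHostsEntry(path, hostname string) error {
        b, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        // Equivalent of: grep -xq '.*\s<hostname>' /etc/hosts
        if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).Match(b) {
            return nil // already mapped
        }
        loop := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        entry := "127.0.1.1 " + hostname
        if loop.Match(b) {
            b = loop.ReplaceAll(b, []byte(entry))
        } else {
            b = append(b, []byte(entry+"\n")...)
        }
        return os.WriteFile(path, b, 0o644)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "ha-136200-m03"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
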
	I0501 02:56:12.226572    4712 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0501 02:56:12.226572    4712 buildroot.go:174] setting up certificates
	I0501 02:56:12.226572    4712 provision.go:84] configureAuth start
	I0501 02:56:12.226572    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:14.383697    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:14.383832    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:14.383916    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:17.017056    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:17.017236    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:17.017236    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:19.246383    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:19.247269    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:19.247269    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:21.887343    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:21.887343    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:21.887343    4712 provision.go:143] copyHostCerts
	I0501 02:56:21.887688    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0501 02:56:21.887688    4712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0501 02:56:21.887688    4712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0501 02:56:21.888470    4712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0501 02:56:21.889606    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0501 02:56:21.890069    4712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0501 02:56:21.890132    4712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0501 02:56:21.890553    4712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0501 02:56:21.891611    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0501 02:56:21.891800    4712 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0501 02:56:21.891800    4712 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0501 02:56:21.892337    4712 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0501 02:56:21.893162    4712 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-136200-m03 san=[127.0.0.1 172.28.216.62 ha-136200-m03 localhost minikube]
	I0501 02:56:21.973101    4712 provision.go:177] copyRemoteCerts
	I0501 02:56:21.993116    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 02:56:21.993116    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:24.169668    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:24.169668    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:24.170031    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:26.830749    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:26.831099    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:26.831162    4712 sshutil.go:53] new ssh client: &{IP:172.28.216.62 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\id_rsa Username:docker}
	I0501 02:56:26.935784    4712 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9426327s)
	I0501 02:56:26.935784    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0501 02:56:26.936266    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 02:56:26.985792    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0501 02:56:26.986191    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0501 02:56:27.035460    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0501 02:56:27.036450    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0501 02:56:27.092775    4712 provision.go:87] duration metric: took 14.8660953s to configureAuth
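
The configureAuth phase that just completed generates a Docker server certificate signed by the local CA, with the SANs logged above (127.0.0.1, 172.28.216.62, ha-136200-m03, localhost, minikube). A minimal Go sketch of that kind of SAN-bearing certificate issuance, assuming an existing PKCS#1 CA pair on disk; the file names and helper are illustrative, not minikube's actual provision.go code:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

// signServerCert issues a server certificate signed by the given CA,
// with IP and DNS SANs like the ones in the log above.
func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, []byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-136200-m03"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.28.216.62")},
		DNSNames:     []string{"ha-136200-m03", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	return certPEM, keyPEM, nil
}

func main() {
	caPEM, _ := os.ReadFile("ca.pem")         // illustrative paths, error handling elided
	caKeyPEM, _ := os.ReadFile("ca-key.pem")
	cb, _ := pem.Decode(caPEM)
	kb, _ := pem.Decode(caKeyPEM)
	if cb == nil || kb == nil {
		log.Fatal("bad CA PEM input")
	}
	caCert, _ := x509.ParseCertificate(cb.Bytes)
	caKey, _ := x509.ParsePKCS1PrivateKey(kb.Bytes)
	certPEM, keyPEM, err := signServerCert(caCert, caKey)
	if err != nil {
		log.Fatal(err)
	}
	os.WriteFile("server.pem", certPEM, 0644)
	os.WriteFile("server-key.pem", keyPEM, 0600)
}
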
	I0501 02:56:27.092775    4712 buildroot.go:189] setting minikube options for container-runtime
	I0501 02:56:27.093873    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:56:27.094011    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:29.214442    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:29.214910    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:29.214910    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:31.848020    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:31.848124    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:31.859047    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:56:31.859047    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.216.62 22 <nil> <nil>}
	I0501 02:56:31.859047    4712 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0501 02:56:31.983811    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0501 02:56:31.983936    4712 buildroot.go:70] root file system type: tmpfs
	I0501 02:56:31.984160    4712 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0501 02:56:31.984160    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:34.146679    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:34.146679    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:34.146837    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:36.793925    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:36.794747    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:36.801153    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:56:36.801782    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.216.62 22 <nil> <nil>}
	I0501 02:56:36.801782    4712 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.28.217.218"
	Environment="NO_PROXY=172.28.217.218,172.28.213.142"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0501 02:56:36.960579    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.28.217.218
	Environment=NO_PROXY=172.28.217.218,172.28.213.142
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0501 02:56:36.960579    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:39.141157    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:39.141157    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:39.141800    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:41.765565    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:41.766216    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:41.774239    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:56:41.774411    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.216.62 22 <nil> <nil>}
	I0501 02:56:41.774411    4712 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0501 02:56:43.994230    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
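
The diff-or-replace one-liner above is what makes the unit update idempotent: when /lib/systemd/system/docker.service already matches the rendered docker.service.new, diff exits 0 and nothing is touched; otherwise the new file is moved into place and systemd is reloaded, enabled, and restarted, as happened here on first boot. A sketch of the same compare-and-swap pattern over a generic command runner; run() shells out locally as a stand-in for the SSH-backed runner in the log, and is not minikube's ssh_runner API:

package main

import (
	"fmt"
	"os/exec"
)

// run executes a shell command, standing in for an SSH-backed runner.
func run(cmd string) error {
	out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s: %v: %s", cmd, err, out)
	}
	return nil
}

// updateUnit installs newPath over unitPath only when the contents
// differ, then reloads systemd and restarts the service, mirroring
// the one-liner in the log above.
func updateUnit(unitPath, newPath, svc string) error {
	// `diff -u` exits 0 when the files match; the || branch then never runs.
	cmd := fmt.Sprintf(
		"sudo diff -u %[1]s %[2]s || { sudo mv %[2]s %[1]s; sudo systemctl -f daemon-reload && sudo systemctl -f enable %[3]s && sudo systemctl -f restart %[3]s; }",
		unitPath, newPath, svc)
	return run(cmd)
}

func main() {
	if err := updateUnit("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new", "docker"); err != nil {
		fmt.Println(err)
	}
}
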
	I0501 02:56:43.994313    4712 machine.go:97] duration metric: took 46.4577313s to provisionDockerMachine
	I0501 02:56:43.994313    4712 client.go:171] duration metric: took 1m57.8932783s to LocalClient.Create
	I0501 02:56:43.994313    4712 start.go:167] duration metric: took 1m57.8932783s to libmachine.API.Create "ha-136200"
	I0501 02:56:43.994428    4712 start.go:293] postStartSetup for "ha-136200-m03" (driver="hyperv")
	I0501 02:56:43.994473    4712 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 02:56:44.010383    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 02:56:44.010383    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:46.225048    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:46.225772    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:46.225844    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:48.918999    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:48.918999    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:48.919679    4712 sshutil.go:53] new ssh client: &{IP:172.28.216.62 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\id_rsa Username:docker}
	I0501 02:56:49.032380    4712 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0219067s)
	I0501 02:56:49.045700    4712 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 02:56:49.054180    4712 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 02:56:49.054180    4712 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0501 02:56:49.054700    4712 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0501 02:56:49.055002    4712 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> 142882.pem in /etc/ssl/certs
	I0501 02:56:49.055574    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /etc/ssl/certs/142882.pem
	I0501 02:56:49.071048    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 02:56:49.092423    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /etc/ssl/certs/142882.pem (1708 bytes)
	I0501 02:56:49.143151    4712 start.go:296] duration metric: took 5.1486851s for postStartSetup
	I0501 02:56:49.146034    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:51.349851    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:51.350067    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:51.350153    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:54.016657    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:54.017149    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:54.017380    4712 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\config.json ...
	I0501 02:56:54.019460    4712 start.go:128] duration metric: took 2m7.9267809s to createHost
	I0501 02:56:54.019460    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:56:56.211168    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:56:56.211168    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:56.211168    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:56:58.811673    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:56:58.811673    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:56:58.818618    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:56:58.819348    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.216.62 22 <nil> <nil>}
	I0501 02:56:58.819348    4712 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0501 02:56:58.949732    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714532218.937413126
	
	I0501 02:56:58.949732    4712 fix.go:216] guest clock: 1714532218.937413126
	I0501 02:56:58.949732    4712 fix.go:229] Guest: 2024-05-01 02:56:58.937413126 +0000 UTC Remote: 2024-05-01 02:56:54.0194605 +0000 UTC m=+574.897601601 (delta=4.917952626s)
	I0501 02:56:58.949941    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:57:01.095786    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:57:01.095786    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:01.096436    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:57:03.649884    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:57:03.649884    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:03.657161    4712 main.go:141] libmachine: Using SSH client type: native
	I0501 02:57:03.657803    4712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.216.62 22 <nil> <nil>}
	I0501 02:57:03.657803    4712 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714532218
	I0501 02:57:03.807080    4712 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed May  1 02:56:58 UTC 2024
	
	I0501 02:57:03.807174    4712 fix.go:236] clock set: Wed May  1 02:56:58 UTC 2024
	 (err=<nil>)
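
fix.go compares the guest clock (read over SSH with `date +%s.%N`) against the host clock; the delta of 4.9s above exceeded tolerance, so `sudo date -s @1714532218` was issued to reset it. A small Go sketch of that check, assuming a one-second threshold; the threshold value and local shell-out are illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

// guestTime reads the guest's clock; here we shell out locally in
// place of the SSH session shown in the log.
func guestTime() (time.Time, error) {
	out, err := exec.Command("date", "+%s.%N").Output()
	if err != nil {
		return time.Time{}, err
	}
	parts := strings.SplitN(strings.TrimSpace(string(out)), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	nsec := int64(0)
	if len(parts) == 2 {
		nsec, _ = strconv.ParseInt(parts[1], 10, 64)
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := guestTime()
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	fmt.Printf("guest clock: %v (delta=%v)\n", guest, delta)
	// Reset the guest clock once drift exceeds a second, as the
	// `sudo date -s @1714532218` command above did.
	if delta > time.Second || delta < -time.Second {
		fmt.Printf("would run: sudo date -s @%d\n", time.Now().Unix())
	}
}
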
	I0501 02:57:03.807174    4712 start.go:83] releasing machines lock for "ha-136200-m03", held for 2m17.7144231s
	I0501 02:57:03.807438    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:57:05.979339    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:57:05.979339    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:05.979339    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:57:08.602379    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:57:08.602379    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:08.605250    4712 out.go:177] * Found network options:
	I0501 02:57:08.607292    4712 out.go:177]   - NO_PROXY=172.28.217.218,172.28.213.142
	W0501 02:57:08.610080    4712 proxy.go:119] fail to check proxy env: Error ip not in block
	W0501 02:57:08.610080    4712 proxy.go:119] fail to check proxy env: Error ip not in block
	I0501 02:57:08.612307    4712 out.go:177]   - NO_PROXY=172.28.217.218,172.28.213.142
	W0501 02:57:08.614962    4712 proxy.go:119] fail to check proxy env: Error ip not in block
	W0501 02:57:08.614962    4712 proxy.go:119] fail to check proxy env: Error ip not in block
	W0501 02:57:08.616207    4712 proxy.go:119] fail to check proxy env: Error ip not in block
	W0501 02:57:08.616207    4712 proxy.go:119] fail to check proxy env: Error ip not in block
	I0501 02:57:08.619160    4712 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 02:57:08.619160    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:57:08.631565    4712 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0501 02:57:08.631565    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200-m03 ).state
	I0501 02:57:10.838360    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:57:10.838735    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:10.838874    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:57:10.838874    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:57:10.838934    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:10.838934    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200-m03 ).networkadapters[0]).ipaddresses[0]
	I0501 02:57:13.624235    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:57:13.624235    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:13.624235    4712 sshutil.go:53] new ssh client: &{IP:172.28.216.62 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\id_rsa Username:docker}
	I0501 02:57:13.648439    4712 main.go:141] libmachine: [stdout =====>] : 172.28.216.62
	
	I0501 02:57:13.648490    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:13.648768    4712 sshutil.go:53] new ssh client: &{IP:172.28.216.62 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200-m03\id_rsa Username:docker}
	I0501 02:57:13.732596    4712 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.1009937s)
	W0501 02:57:13.732596    4712 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 02:57:13.748662    4712 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 02:57:13.811529    4712 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0501 02:57:13.811529    4712 start.go:494] detecting cgroup driver to use...
	I0501 02:57:13.811529    4712 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1923313s)
	I0501 02:57:13.812665    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 02:57:13.867675    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0501 02:57:13.906069    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0501 02:57:13.929632    4712 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0501 02:57:13.947027    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0501 02:57:13.986248    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 02:57:14.024920    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0501 02:57:14.061978    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 02:57:14.099821    4712 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 02:57:14.138543    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0501 02:57:14.181270    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0501 02:57:14.217808    4712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0501 02:57:14.261794    4712 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 02:57:14.297051    4712 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 02:57:14.332222    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:57:14.558529    4712 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0501 02:57:14.595594    4712 start.go:494] detecting cgroup driver to use...
	I0501 02:57:14.610122    4712 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0501 02:57:14.650440    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 02:57:14.689246    4712 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 02:57:14.740013    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 02:57:14.780524    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 02:57:14.822987    4712 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0501 02:57:14.889904    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 02:57:14.919061    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 02:57:14.983590    4712 ssh_runner.go:195] Run: which cri-dockerd
	I0501 02:57:15.008856    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0501 02:57:15.032703    4712 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0501 02:57:15.086991    4712 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0501 02:57:15.324922    4712 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0501 02:57:15.542551    4712 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0501 02:57:15.542551    4712 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0501 02:57:15.594658    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:57:15.826063    4712 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0501 02:57:18.399291    4712 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5732092s)
	I0501 02:57:18.412657    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0501 02:57:18.452282    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0501 02:57:18.491033    4712 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0501 02:57:18.702768    4712 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0501 02:57:18.928695    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:57:19.145438    4712 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0501 02:57:19.199070    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0501 02:57:19.242280    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:57:19.475811    4712 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0501 02:57:19.598548    4712 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0501 02:57:19.612590    4712 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0501 02:57:19.624279    4712 start.go:562] Will wait 60s for crictl version
	I0501 02:57:19.637235    4712 ssh_runner.go:195] Run: which crictl
	I0501 02:57:19.657683    4712 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 02:57:19.721351    4712 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0501 02:57:19.734095    4712 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0501 02:57:19.784976    4712 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0501 02:57:19.822576    4712 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0501 02:57:19.826041    4712 out.go:177]   - env NO_PROXY=172.28.217.218
	I0501 02:57:19.827741    4712 out.go:177]   - env NO_PROXY=172.28.217.218,172.28.213.142
	I0501 02:57:19.831635    4712 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0501 02:57:19.835639    4712 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0501 02:57:19.835639    4712 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0501 02:57:19.835639    4712 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0501 02:57:19.835639    4712 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:88:d7:f1 Flags:up|broadcast|multicast|running}
	I0501 02:57:19.838638    4712 ip.go:210] interface addr: fe80::916c:67e8:6e10:a19b/64
	I0501 02:57:19.838638    4712 ip.go:210] interface addr: 172.28.208.1/20
	I0501 02:57:19.851676    4712 ssh_runner.go:195] Run: grep 172.28.208.1	host.minikube.internal$ /etc/hosts
	I0501 02:57:19.858242    4712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
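
The bash one-liner above is an idempotent hosts update: strip any existing line tagged host.minikube.internal, append the fresh mapping, write to a temp file, then copy it back over /etc/hosts. The same pattern expressed in Go, operating on a local file without sudo, purely for illustration:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites hostsPath so that exactly one line maps
// hostname to ip, no matter how many stale entries existed before.
func ensureHostsEntry(hostsPath, ip, hostname string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any previous mapping for this hostname (tab-separated,
		// matching the grep -v $'\thost.minikube.internal$' in the log).
		if strings.HasSuffix(line, "\t"+hostname) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname))
	// Write to a temp file first, then rename, like the /tmp/h.$$ step.
	tmp := hostsPath + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, hostsPath)
}

func main() {
	if err := ensureHostsEntry("hosts", "172.28.208.1", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
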
	I0501 02:57:19.883254    4712 mustload.go:65] Loading cluster: ha-136200
	I0501 02:57:19.883656    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:57:19.884140    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:57:22.018331    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:57:22.018592    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:22.018658    4712 host.go:66] Checking if "ha-136200" exists ...
	I0501 02:57:22.019393    4712 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200 for IP: 172.28.216.62
	I0501 02:57:22.019393    4712 certs.go:194] generating shared ca certs ...
	I0501 02:57:22.019393    4712 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:57:22.020318    4712 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0501 02:57:22.020786    4712 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0501 02:57:22.021028    4712 certs.go:256] generating profile certs ...
	I0501 02:57:22.021028    4712 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\client.key
	I0501 02:57:22.021606    4712 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.cbcfb2e9
	I0501 02:57:22.021767    4712 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.cbcfb2e9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.217.218 172.28.213.142 172.28.216.62 172.28.223.254]
	I0501 02:57:22.149544    4712 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.cbcfb2e9 ...
	I0501 02:57:22.149544    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.cbcfb2e9: {Name:mk4837fbdb29e34df2c44991c600cda784a93d5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:57:22.150373    4712 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.cbcfb2e9 ...
	I0501 02:57:22.150373    4712 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.cbcfb2e9: {Name:mkcff5432d26e17c25cf2a9709eb4553a31e99c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:57:22.152472    4712 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt.cbcfb2e9 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt
	I0501 02:57:22.165924    4712 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key.cbcfb2e9 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key
	I0501 02:57:22.166444    4712 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key
	I0501 02:57:22.166444    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0501 02:57:22.167623    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0501 02:57:22.167772    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0501 02:57:22.167772    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0501 02:57:22.168122    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0501 02:57:22.168280    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0501 02:57:22.168464    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0501 02:57:22.168464    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0501 02:57:22.169490    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem (1338 bytes)
	W0501 02:57:22.169490    4712 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288_empty.pem, impossibly tiny 0 bytes
	I0501 02:57:22.170595    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0501 02:57:22.170869    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0501 02:57:22.171164    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0501 02:57:22.171434    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0501 02:57:22.171670    4712 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem (1708 bytes)
	I0501 02:57:22.172286    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /usr/share/ca-certificates/142882.pem
	I0501 02:57:22.172286    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:57:22.172286    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem -> /usr/share/ca-certificates/14288.pem
	I0501 02:57:22.172911    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:57:24.374168    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:57:24.374168    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:24.374904    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:57:26.980450    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:57:26.980450    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:26.980450    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:57:27.093857    4712 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0501 02:57:27.102183    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0501 02:57:27.141690    4712 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0501 02:57:27.150194    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0501 02:57:27.193806    4712 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0501 02:57:27.202957    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0501 02:57:27.254044    4712 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0501 02:57:27.262605    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0501 02:57:27.303214    4712 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0501 02:57:27.310453    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0501 02:57:27.348966    4712 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0501 02:57:27.356382    4712 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0501 02:57:27.383468    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 02:57:27.437872    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 02:57:27.494095    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 02:57:27.544977    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0501 02:57:27.599083    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0501 02:57:27.652123    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0501 02:57:27.710792    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 02:57:27.766379    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-136200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 02:57:27.817284    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /usr/share/ca-certificates/142882.pem (1708 bytes)
	I0501 02:57:27.867949    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 02:57:27.930560    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem --> /usr/share/ca-certificates/14288.pem (1338 bytes)
	I0501 02:57:27.987875    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0501 02:57:28.025174    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0501 02:57:28.061492    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0501 02:57:28.099323    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0501 02:57:28.133169    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0501 02:57:28.168585    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0501 02:57:28.223450    4712 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0501 02:57:28.292690    4712 ssh_runner.go:195] Run: openssl version
	I0501 02:57:28.315882    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142882.pem && ln -fs /usr/share/ca-certificates/142882.pem /etc/ssl/certs/142882.pem"
	I0501 02:57:28.353000    4712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142882.pem
	I0501 02:57:28.365096    4712 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:27 /usr/share/ca-certificates/142882.pem
	I0501 02:57:28.379858    4712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142882.pem
	I0501 02:57:28.406814    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142882.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 02:57:28.445706    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 02:57:28.482484    4712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:57:28.491120    4712 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:12 /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:57:28.507367    4712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:57:28.535421    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 02:57:28.574647    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14288.pem && ln -fs /usr/share/ca-certificates/14288.pem /etc/ssl/certs/14288.pem"
	I0501 02:57:28.616757    4712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14288.pem
	I0501 02:57:28.624484    4712 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:27 /usr/share/ca-certificates/14288.pem
	I0501 02:57:28.642485    4712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14288.pem
	I0501 02:57:28.665148    4712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14288.pem /etc/ssl/certs/51391683.0"
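
The hash-and-link commands above reproduce what OpenSSL's c_rehash does: certificates in /etc/ssl/certs are looked up via a symlink named <subject-hash>.0, so each CA file is hashed with `openssl x509 -hash -noout` and linked under that name (3ec20f2e.0, b5213941.0, 51391683.0 here). A sketch of the same rehash step, assuming the openssl binary is on PATH; the paths are illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash creates the <hash>.0 symlink OpenSSL expects when
// scanning a certificate directory, mirroring the ln -fs commands above.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. 3ec20f2e for 142882.pem
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // behave like -f: replace a stale link if present
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/142882.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}
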
	I0501 02:57:28.706630    4712 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 02:57:28.714508    4712 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0501 02:57:28.714998    4712 kubeadm.go:928] updating node {m03 172.28.216.62 8443 v1.30.0 docker true true} ...
	I0501 02:57:28.715189    4712 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-136200-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.216.62
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP:172.28.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 02:57:28.715218    4712 kube-vip.go:111] generating kube-vip config ...
	I0501 02:57:28.727524    4712 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0501 02:57:28.767475    4712 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0501 02:57:28.767631    4712 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.28.223.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
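
kube-vip.go renders that static-pod manifest from a template parameterised by the VIP (172.28.223.254) and port; with cp_enable, lb_enable, and lb_port set, kube-vip both holds the VIP via leader election and load-balances the API server across control planes, which is why every node can reach control-plane.minikube.internal:8443. A toy rendering of just the variable env entries with text/template; this trimmed template literal is illustrative, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// A trimmed stand-in for the kube-vip manifest template: only the env
// entries that vary per cluster are shown.
const kubeVipEnv = `    - name: port
      value: "{{ .Port }}"
    - name: address
      value: {{ .VIP }}
    - name: lb_enable
      value: "true"
    - name: lb_port
      value: "{{ .Port }}"
`

func main() {
	tmpl := template.Must(template.New("kube-vip").Parse(kubeVipEnv))
	// Values matching the generated config in the log above.
	_ = tmpl.Execute(os.Stdout, struct {
		VIP  string
		Port int
	}{VIP: "172.28.223.254", Port: 8443})
}
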
	I0501 02:57:28.783398    4712 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 02:57:28.801741    4712 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0501 02:57:28.815792    4712 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0501 02:57:28.837983    4712 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0501 02:57:28.838101    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0501 02:57:28.837983    4712 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256
	I0501 02:57:28.838226    4712 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256
	I0501 02:57:28.838396    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0501 02:57:28.855124    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:57:28.856182    4712 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0501 02:57:28.858128    4712 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0501 02:57:28.881905    4712 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0501 02:57:28.881905    4712 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0501 02:57:28.882027    4712 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0501 02:57:28.882165    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0501 02:57:28.882277    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0501 02:57:28.898781    4712 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0501 02:57:28.959439    4712 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0501 02:57:28.959688    4712 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
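
Each kube binary above is fetched with checksum=file:<url>.sha256, i.e. the expected digest is downloaded alongside the artifact and verified before the binary is copied into /var/lib/minikube/binaries. A compact sketch of that verify-while-downloading step, assuming the .sha256 file holds a bare hex digest (true for dl.k8s.io); URL handling and retries are simplified:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetchVerified downloads url to dst and fails unless the payload's
// SHA-256 matches the digest published at url+".sha256".
func fetchVerified(url, dst string) error {
	sumResp, err := http.Get(url + ".sha256")
	if err != nil {
		return err
	}
	defer sumResp.Body.Close()
	want, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return err
	}
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	f, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer f.Close()
	// Hash the stream as it is written to disk.
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got != strings.TrimSpace(string(want)) {
		return fmt.Errorf("checksum mismatch for %s: got %s want %s", url, got, want)
	}
	return nil
}

func main() {
	err := fetchVerified("https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl", "kubectl")
	fmt.Println(err)
}
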
	I0501 02:57:30.251192    4712 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0501 02:57:30.272192    4712 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0501 02:57:30.311119    4712 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 02:57:30.353248    4712 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0501 02:57:30.407414    4712 ssh_runner.go:195] Run: grep 172.28.223.254	control-plane.minikube.internal$ /etc/hosts
	I0501 02:57:30.415360    4712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.223.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 02:57:30.454450    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:57:30.696464    4712 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 02:57:30.737201    4712 host.go:66] Checking if "ha-136200" exists ...
	I0501 02:57:30.801844    4712 start.go:316] joinCluster: &{Name:ha-136200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-136200 Namespace:default APIServerHAVIP:172.28.223.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.217.218 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.213.142 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.28.216.62 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:57:30.802126    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0501 02:57:30.802234    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-136200 ).state
	I0501 02:57:32.961923    4712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 02:57:32.961923    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:32.962279    4712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-136200 ).networkadapters[0]).ipaddresses[0]
	I0501 02:57:35.600191    4712 main.go:141] libmachine: [stdout =====>] : 172.28.217.218
	
	I0501 02:57:35.600191    4712 main.go:141] libmachine: [stderr =====>] : 
	I0501 02:57:35.601356    4712 sshutil.go:53] new ssh client: &{IP:172.28.217.218 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-136200\id_rsa Username:docker}
	I0501 02:57:35.838006    4712 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0": (5.0358438s)
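The join command is minted on the existing control plane with "kubeadm token create --print-join-command --ttl=0"; --ttl=0 makes the bootstrap token non-expiring, and the printed line is the exact "kubeadm join ..." invocation replayed on m03 below. A minimal sketch of that step (run locally for illustration; minikube executes it inside the VM over SSH via its ssh_runner):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        // Produce a join command with a non-expiring bootstrap token, as in
        // the ssh_runner line above.
        out, err := exec.Command("kubeadm", "token", "create",
            "--print-join-command", "--ttl=0").Output()
        if err != nil {
            log.Fatalf("kubeadm token create: %v", err)
        }
        // Prints e.g. "kubeadm join control-plane.minikube.internal:8443 --token ..."
        fmt.Printf("%s", out)
    }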
	I0501 02:57:35.838006    4712 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.28.216.62 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0501 02:57:35.838006    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 3455nt.3c342oggoxvi06jc --discovery-token-ca-cert-hash sha256:4798dcaffc6298c78af5ad06745de18c1231853c65c9db4cd09ab1be96f5e875 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-136200-m03 --control-plane --apiserver-advertise-address=172.28.216.62 --apiserver-bind-port=8443"
	I0501 02:58:21.819619    4712 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 3455nt.3c342oggoxvi06jc --discovery-token-ca-cert-hash sha256:4798dcaffc6298c78af5ad06745de18c1231853c65c9db4cd09ab1be96f5e875 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-136200-m03 --control-plane --apiserver-advertise-address=172.28.216.62 --apiserver-bind-port=8443": (45.981274s)
	I0501 02:58:21.819619    4712 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0501 02:58:22.593318    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-136200-m03 minikube.k8s.io/updated_at=2024_05_01T02_58_22_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e minikube.k8s.io/name=ha-136200 minikube.k8s.io/primary=false
	I0501 02:58:22.788566    4712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-136200-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0501 02:58:22.987611    4712 start.go:318] duration metric: took 52.1853822s to joinCluster
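Three details of the join sequence above are worth noting: --control-plane together with --apiserver-advertise-address=172.28.216.62 enrolls m03 as a full control-plane member, --cri-socket selects the cri-dockerd socket, and the trailing "-" in the kubectl taint command removes node-role.kubernetes.io/control-plane:NoSchedule so the new member also schedules workloads (Worker:true). A hedged client-go sketch of that taint removal (removeControlPlaneTaint is an illustrative name; cs is an assumed, already-configured clientset):

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // removeControlPlaneTaint is the API-level equivalent of the
    // `kubectl taint ... node-role.kubernetes.io/control-plane:NoSchedule-`
    // call logged above.
    func removeControlPlaneTaint(ctx context.Context, cs *kubernetes.Clientset, nodeName string) error {
        node, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
        if err != nil {
            return err
        }
        kept := node.Spec.Taints[:0]
        for _, t := range node.Spec.Taints {
            // Drop only the control-plane NoSchedule taint; keep everything else.
            if t.Key == "node-role.kubernetes.io/control-plane" && t.Effect == corev1.TaintEffectNoSchedule {
                continue
            }
            kept = append(kept, t)
        }
        node.Spec.Taints = kept
        _, err = cs.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{})
        return err
    }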
	I0501 02:58:22.987895    4712 start.go:234] Will wait 6m0s for node &{Name:m03 IP:172.28.216.62 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0501 02:58:22.988142    4712 config.go:182] Loaded profile config "ha-136200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:58:23.012496    4712 out.go:177] * Verifying Kubernetes components...
	I0501 02:58:23.031751    4712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:58:23.569395    4712 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 02:58:23.619961    4712 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0501 02:58:23.620228    4712 kapi.go:59] client config for ha-136200: &rest.Config{Host:"https://172.28.223.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-136200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-136200\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1b95ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0501 02:58:23.620770    4712 kubeadm.go:477] Overriding stale ClientConfig host https://172.28.223.254:8443 with https://172.28.217.218:8443
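The client was first built against the HA VIP from the kubeconfig, then pinned to the primary's direct endpoint, which is what the override line above records. A minimal client-go sketch of the same move, using the kubeconfig path logged above:

    package main

    import (
        "fmt"
        "log"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build the rest.Config from the kubeconfig (which points at the HA
        // VIP), then pin it to one control-plane endpoint, as the override
        // above does.
        cfg, err := clientcmd.BuildConfigFromFlags("",
            `C:\Users\jenkins.minikube6\minikube-integration\kubeconfig`)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("kubeconfig host:", cfg.Host) // https://172.28.223.254:8443
        cfg.Host = "https://172.28.217.218:8443"  // talk to the primary directly
        if _, err := kubernetes.NewForConfig(cfg); err != nil {
            log.Fatal(err)
        }
    }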
	I0501 02:58:23.621670    4712 node_ready.go:35] waiting up to 6m0s for node "ha-136200-m03" to be "Ready" ...
	I0501 02:58:23.621910    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:23.621910    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:23.621982    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:23.621982    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:23.637444    4712 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0501 02:58:24.133658    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:24.133658    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:24.133658    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:24.133658    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:24.139465    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:24.622867    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:24.622867    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:24.622867    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:24.622867    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:24.629524    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:58:25.129429    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:25.129429    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:25.129510    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:25.129510    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:25.135754    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:58:25.633954    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:25.633954    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:25.633954    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:25.633954    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:25.638650    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:58:25.639656    4712 node_ready.go:53] node "ha-136200-m03" has status "Ready":"False"
	I0501 02:58:26.123894    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:26.123894    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:26.123894    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:26.123894    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:26.129103    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:26.629161    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:26.629161    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:26.629161    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:26.629161    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:26.648167    4712 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0501 02:58:27.136028    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:27.136028    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:27.136028    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:27.136028    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:27.326021    4712 round_trippers.go:574] Response Status: 200 OK in 189 milliseconds
	I0501 02:58:27.623480    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:27.623600    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:27.623600    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:27.623600    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:27.629035    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:28.136433    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:28.136433    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:28.136626    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:28.136626    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:28.203923    4712 round_trippers.go:574] Response Status: 200 OK in 67 milliseconds
	I0501 02:58:28.205553    4712 node_ready.go:53] node "ha-136200-m03" has status "Ready":"False"
	I0501 02:58:28.636021    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:28.636185    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:28.636185    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:28.636185    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:28.646735    4712 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0501 02:58:29.122451    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:29.122515    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:29.122515    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:29.122515    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:29.140826    4712 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0501 02:58:29.629756    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:29.629756    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:29.629756    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:29.629756    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:29.637588    4712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:58:30.132174    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:30.132269    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:30.132269    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:30.132269    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:30.136921    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:58:30.632939    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:30.633022    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:30.633022    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:30.633022    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:30.638815    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:30.640044    4712 node_ready.go:53] node "ha-136200-m03" has status "Ready":"False"
	I0501 02:58:31.133378    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:31.133378    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:31.133378    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:31.133378    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:31.138754    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:31.633444    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:31.633511    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:31.633511    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:31.633511    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:31.639686    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:58:32.131317    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:32.131317    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:32.131317    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:32.131317    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:32.136414    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:32.629649    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:32.629649    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:32.629649    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:32.629649    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:32.634980    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:33.129878    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:33.129878    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:33.129878    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:33.129878    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:33.155125    4712 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I0501 02:58:33.156557    4712 node_ready.go:53] node "ha-136200-m03" has status "Ready":"False"
	I0501 02:58:33.629865    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:33.630060    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:33.630060    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:33.630060    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:33.636368    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:58:34.128412    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:34.128412    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:34.128412    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:34.128412    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:34.133022    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:58:34.629333    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:34.629333    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:34.629333    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:34.629333    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:34.635358    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:58:35.129272    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:35.129376    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.129376    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.129376    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.136662    4712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:58:35.137446    4712 node_ready.go:49] node "ha-136200-m03" has status "Ready":"True"
	I0501 02:58:35.137492    4712 node_ready.go:38] duration metric: took 11.5157372s for node "ha-136200-m03" to be "Ready" ...
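The ~500ms GET loop above is a readiness poll: fetch the node, inspect its Ready condition, repeat until it reports True or the 6m budget expires. A sketch of the same loop using client-go and apimachinery's wait helpers (waitNodeReady is an illustrative name; cs is an assumed, already-configured clientset):

    package main

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls the node roughly every 500ms, like the GET loop
    // above, until its Ready condition is True or the timeout expires.
    func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat errors as transient; keep polling
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }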
	I0501 02:58:35.137492    4712 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 02:58:35.137635    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:58:35.137635    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.137635    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.137635    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.149133    4712 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0501 02:58:35.158917    4712 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2j8mj" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.159445    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-2j8mj
	I0501 02:58:35.159565    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.159565    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.159651    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.170650    4712 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0501 02:58:35.172026    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:35.172026    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.172026    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.172026    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.180770    4712 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0501 02:58:35.180770    4712 pod_ready.go:92] pod "coredns-7db6d8ff4d-2j8mj" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:35.180770    4712 pod_ready.go:81] duration metric: took 21.3241ms for pod "coredns-7db6d8ff4d-2j8mj" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.180770    4712 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rm4gm" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.180770    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rm4gm
	I0501 02:58:35.180770    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.180770    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.180770    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.185805    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:35.187056    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:35.187056    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.187056    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.187056    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.191361    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:58:35.192405    4712 pod_ready.go:92] pod "coredns-7db6d8ff4d-rm4gm" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:35.192405    4712 pod_ready.go:81] duration metric: took 11.6358ms for pod "coredns-7db6d8ff4d-rm4gm" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.192405    4712 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.192405    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200
	I0501 02:58:35.192405    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.192405    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.192405    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.196117    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:58:35.197312    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:35.197312    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.197389    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.197389    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.201195    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:58:35.201924    4712 pod_ready.go:92] pod "etcd-ha-136200" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:35.201924    4712 pod_ready.go:81] duration metric: took 9.5188ms for pod "etcd-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.201924    4712 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.202054    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m02
	I0501 02:58:35.202195    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.202195    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.202195    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.208450    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:58:35.209323    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:58:35.209323    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.209323    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.209323    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.212935    4712 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 02:58:35.214190    4712 pod_ready.go:92] pod "etcd-ha-136200-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:35.214190    4712 pod_ready.go:81] duration metric: took 12.2652ms for pod "etcd-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.214190    4712 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-136200-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.330301    4712 request.go:629] Waited for 115.8713ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m03
	I0501 02:58:35.330574    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/etcd-ha-136200-m03
	I0501 02:58:35.330574    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.330574    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.330574    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.338021    4712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:58:35.534070    4712 request.go:629] Waited for 194.5208ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:35.534353    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:35.534353    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.534353    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.534353    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.540932    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:58:35.541927    4712 pod_ready.go:92] pod "etcd-ha-136200-m03" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:35.541927    4712 pod_ready.go:81] duration metric: took 327.673ms for pod "etcd-ha-136200-m03" in "kube-system" namespace to be "Ready" ...
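The "Waited for ... due to client-side throttling" lines are client-go's own rate limiter at work: the config dump above shows QPS:0, Burst:0, and when left at zero the client defaults to 5 requests/s with a burst of 10, so these short pauses are self-imposed pacing rather than API-server pushback. A sketch of raising those limits (the values here are arbitrary examples):

    package main

    import (
        "log"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        // Defaults of 5 QPS / burst 10 apply when these are zero; raising
        // them avoids the client-side throttling pauses seen above.
        cfg.QPS = 50
        cfg.Burst = 100
        if _, err := kubernetes.NewForConfig(cfg); err != nil {
            log.Fatal(err)
        }
    }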
	I0501 02:58:35.541927    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.737879    4712 request.go:629] Waited for 195.951ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-136200
	I0501 02:58:35.738683    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-136200
	I0501 02:58:35.738683    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.738683    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.738683    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.743861    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:58:35.940254    4712 request.go:629] Waited for 195.0246ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:35.940254    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:35.940254    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:35.940254    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:35.940254    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:35.943091    4712 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:58:35.949355    4712 pod_ready.go:92] pod "kube-apiserver-ha-136200" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:35.949355    4712 pod_ready.go:81] duration metric: took 407.425ms for pod "kube-apiserver-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:35.949355    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:36.143537    4712 request.go:629] Waited for 193.9374ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-136200-m02
	I0501 02:58:36.143801    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-136200-m02
	I0501 02:58:36.143835    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:36.143835    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:36.143835    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:36.149992    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:36.331653    4712 request.go:629] Waited for 180.2785ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:58:36.331653    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:58:36.331653    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:36.331653    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:36.331653    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:36.337290    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:36.338458    4712 pod_ready.go:92] pod "kube-apiserver-ha-136200-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:36.338521    4712 pod_ready.go:81] duration metric: took 389.1629ms for pod "kube-apiserver-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:36.338521    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-136200-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:36.533514    4712 request.go:629] Waited for 194.8709ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-136200-m03
	I0501 02:58:36.533967    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-136200-m03
	I0501 02:58:36.534181    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:36.534181    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:36.534181    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:36.548236    4712 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0501 02:58:36.737561    4712 request.go:629] Waited for 188.1304ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:36.737864    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:36.737942    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:36.737942    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:36.738002    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:36.742410    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:58:36.743400    4712 pod_ready.go:92] pod "kube-apiserver-ha-136200-m03" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:36.743400    4712 pod_ready.go:81] duration metric: took 404.8131ms for pod "kube-apiserver-ha-136200-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:36.743400    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:36.942223    4712 request.go:629] Waited for 198.605ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-136200
	I0501 02:58:36.942445    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-136200
	I0501 02:58:36.942445    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:36.942445    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:36.942445    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:36.947749    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:37.131974    4712 request.go:629] Waited for 183.3149ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:37.132232    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:37.132323    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:37.132323    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:37.132323    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:37.137476    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:37.138446    4712 pod_ready.go:92] pod "kube-controller-manager-ha-136200" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:37.138446    4712 pod_ready.go:81] duration metric: took 395.044ms for pod "kube-controller-manager-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:37.138446    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:37.333778    4712 request.go:629] Waited for 195.2258ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-136200-m02
	I0501 02:58:37.334044    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-136200-m02
	I0501 02:58:37.334044    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:37.334044    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:37.334044    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:37.338704    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:58:37.538179    4712 request.go:629] Waited for 197.0874ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:58:37.538437    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:58:37.538500    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:37.538500    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:37.538500    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:37.544773    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:58:37.544773    4712 pod_ready.go:92] pod "kube-controller-manager-ha-136200-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:37.544773    4712 pod_ready.go:81] duration metric: took 406.3235ms for pod "kube-controller-manager-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:37.544773    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-136200-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:37.743876    4712 request.go:629] Waited for 199.1018ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-136200-m03
	I0501 02:58:37.744106    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-136200-m03
	I0501 02:58:37.744106    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:37.744106    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:37.744106    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:37.749628    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:37.931954    4712 request.go:629] Waited for 180.0772ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:37.932054    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:37.932132    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:37.932132    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:37.932132    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:37.937302    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:37.937875    4712 pod_ready.go:92] pod "kube-controller-manager-ha-136200-m03" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:37.937875    4712 pod_ready.go:81] duration metric: took 393.0991ms for pod "kube-controller-manager-ha-136200-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:37.937875    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8f67k" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:38.134928    4712 request.go:629] Waited for 196.7268ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8f67k
	I0501 02:58:38.134928    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8f67k
	I0501 02:58:38.135164    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:38.135164    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:38.135164    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:38.151320    4712 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0501 02:58:38.340422    4712 request.go:629] Waited for 186.7144ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:38.340523    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:38.340523    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:38.340523    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:38.340523    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:38.344835    4712 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 02:58:38.346933    4712 pod_ready.go:92] pod "kube-proxy-8f67k" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:38.347124    4712 pod_ready.go:81] duration metric: took 409.2461ms for pod "kube-proxy-8f67k" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:38.347124    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9ml9x" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:38.529397    4712 request.go:629] Waited for 182.0139ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9ml9x
	I0501 02:58:38.529683    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9ml9x
	I0501 02:58:38.529776    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:38.529776    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:38.529776    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:38.535530    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:38.733704    4712 request.go:629] Waited for 197.3369ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:38.733854    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:38.733854    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:38.733854    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:38.733854    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:38.739456    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:38.741035    4712 pod_ready.go:92] pod "kube-proxy-9ml9x" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:38.741035    4712 pod_ready.go:81] duration metric: took 393.9082ms for pod "kube-proxy-9ml9x" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:38.741141    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zj5jv" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:38.936294    4712 request.go:629] Waited for 194.9804ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zj5jv
	I0501 02:58:38.936492    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zj5jv
	I0501 02:58:38.936492    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:38.936492    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:38.936492    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:38.941904    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:39.139076    4712 request.go:629] Waited for 195.5675ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:58:39.139516    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:58:39.139516    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:39.139516    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:39.139590    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:39.146156    4712 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 02:58:39.146839    4712 pod_ready.go:92] pod "kube-proxy-zj5jv" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:39.147389    4712 pod_ready.go:81] duration metric: took 406.2452ms for pod "kube-proxy-zj5jv" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:39.147389    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:39.331771    4712 request.go:629] Waited for 183.3466ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200
	I0501 02:58:39.331839    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200
	I0501 02:58:39.331839    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:39.331839    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:39.331839    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:39.338962    4712 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 02:58:39.529638    4712 request.go:629] Waited for 189.8551ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:39.529880    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200
	I0501 02:58:39.529880    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:39.529880    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:39.529880    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:39.535423    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:39.536281    4712 pod_ready.go:92] pod "kube-scheduler-ha-136200" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:39.536496    4712 pod_ready.go:81] duration metric: took 389.1041ms for pod "kube-scheduler-ha-136200" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:39.536496    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:39.733532    4712 request.go:629] Waited for 196.8225ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200-m02
	I0501 02:58:39.733532    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200-m02
	I0501 02:58:39.733755    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:39.733755    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:39.733755    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:39.738768    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:39.936556    4712 request.go:629] Waited for 196.8464ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:58:39.936957    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m02
	I0501 02:58:39.936957    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:39.936957    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:39.937066    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:39.942275    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:39.942447    4712 pod_ready.go:92] pod "kube-scheduler-ha-136200-m02" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:39.943009    4712 pod_ready.go:81] duration metric: took 406.5101ms for pod "kube-scheduler-ha-136200-m02" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:39.943009    4712 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-136200-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:40.137743    4712 request.go:629] Waited for 194.2926ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200-m03
	I0501 02:58:40.137974    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-136200-m03
	I0501 02:58:40.137974    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:40.138045    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:40.138045    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:40.143795    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:40.340161    4712 request.go:629] Waited for 194.6485ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:40.340307    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes/ha-136200-m03
	I0501 02:58:40.340307    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:40.340368    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:40.340368    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:40.346127    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:40.347243    4712 pod_ready.go:92] pod "kube-scheduler-ha-136200-m03" in "kube-system" namespace has status "Ready":"True"
	I0501 02:58:40.347243    4712 pod_ready.go:81] duration metric: took 404.2307ms for pod "kube-scheduler-ha-136200-m03" in "kube-system" namespace to be "Ready" ...
	I0501 02:58:40.347243    4712 pod_ready.go:38] duration metric: took 5.2097122s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
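Each pod_ready wait above follows the same pattern: list kube-system pods matching one of the system-critical selectors, then check each pod's Ready condition against the node it runs on. A sketch of that check (function names are illustrative; cs is an assumed, already-configured clientset):

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // isPodReady reports whether the pod's Ready condition is True, the
    // per-pod test behind each pod_ready wait above.
    func isPodReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    // allSystemCriticalReady walks the same selectors listed in the log.
    func allSystemCriticalReady(ctx context.Context, cs *kubernetes.Clientset) (bool, error) {
        selectors := []string{
            "k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
            "component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
        }
        for _, sel := range selectors {
            pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
            if err != nil {
                return false, err
            }
            for i := range pods.Items {
                if !isPodReady(&pods.Items[i]) {
                    return false, nil
                }
            }
        }
        return true, nil
    }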
	I0501 02:58:40.347243    4712 api_server.go:52] waiting for apiserver process to appear ...
	I0501 02:58:40.361809    4712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:58:40.399669    4712 api_server.go:72] duration metric: took 17.4115847s to wait for apiserver process to appear ...
	I0501 02:58:40.399766    4712 api_server.go:88] waiting for apiserver healthz status ...
	I0501 02:58:40.399822    4712 api_server.go:253] Checking apiserver healthz at https://172.28.217.218:8443/healthz ...
	I0501 02:58:40.410080    4712 api_server.go:279] https://172.28.217.218:8443/healthz returned 200:
	ok
	I0501 02:58:40.410375    4712 round_trippers.go:463] GET https://172.28.217.218:8443/version
	I0501 02:58:40.410503    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:40.410503    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:40.410503    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:40.412638    4712 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 02:58:40.413725    4712 api_server.go:141] control plane version: v1.30.0
	I0501 02:58:40.413725    4712 api_server.go:131] duration metric: took 13.9591ms to wait for apiserver health ...
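The health and version probes above map to two API calls: a raw GET of /healthz, which must return the literal body "ok", and a GET of /version for the control-plane version. A client-go sketch of both (checkAPIServer is an illustrative name; cs is an assumed, already-configured clientset):

    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
    )

    // checkAPIServer mirrors the two calls above: GET /healthz, then GET
    // /version via the discovery client.
    func checkAPIServer(ctx context.Context, cs *kubernetes.Clientset) error {
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
        if err != nil {
            return err
        }
        if string(body) != "ok" {
            return fmt.Errorf("healthz returned %q", body)
        }
        v, err := cs.Discovery().ServerVersion()
        if err != nil {
            return err
        }
        fmt.Println("control plane version:", v.GitVersion) // v1.30.0 in this run
        return nil
    }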
	I0501 02:58:40.413725    4712 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 02:58:40.543767    4712 request.go:629] Waited for 129.9651ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:58:40.543975    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:58:40.543975    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:40.543975    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:40.543975    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:40.554206    4712 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0501 02:58:40.565423    4712 system_pods.go:59] 24 kube-system pods found
	I0501 02:58:40.565423    4712 system_pods.go:61] "coredns-7db6d8ff4d-2j8mj" [f945c979-ae51-4c8e-acf9-105adc3c83bc] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "coredns-7db6d8ff4d-rm4gm" [87b284b3-e8e1-452a-8c72-41a8bec62505] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "etcd-ha-136200" [509a726d-e9a1-4922-8e7e-f3d91ddef75f] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "etcd-ha-136200-m02" [8122eb28-1fdf-4ddf-ab30-c29e8bcb83c0] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "etcd-ha-136200-m03" [5f77fdbc-d14d-4d42-9880-fc7e5b2c58b8] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kindnet-kb2x4" [6e660648-3dce-469f-a2c2-c99f445ceb20] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kindnet-rlfkk" [ae08f4b9-98a8-4faf-ab4a-f04e900375bf] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kindnet-sj2rc" [c0e605a0-1182-4977-a8ba-fabe9617bd3c] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kube-apiserver-ha-136200" [53ea7d41-7132-4c89-9dbd-bedb2267b55f] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kube-apiserver-ha-136200-m02" [fc4036e1-5cc9-4f27-8299-97ee4a29e8b4] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kube-apiserver-ha-136200-m03" [cf2822d7-29da-4727-b4c1-19b593abbce8] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kube-controller-manager-ha-136200" [4c988ab2-e056-4a0e-88c9-b62839c84d9f] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kube-controller-manager-ha-136200-m02" [7a617a7e-7413-4f42-bfe2-763b7ace71ca] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kube-controller-manager-ha-136200-m03" [f72989a2-322b-4b6d-884f-8888b7fb6e36] Running
	I0501 02:58:40.565423    4712 system_pods.go:61] "kube-proxy-8f67k" [9dedea03-3066-4852-98e2-10190699b2c5] Running
	I0501 02:58:40.566039    4712 system_pods.go:61] "kube-proxy-9ml9x" [c36f2b4f-ad90-4070-adf1-1ac165f86fdd] Running
	I0501 02:58:40.566039    4712 system_pods.go:61] "kube-proxy-zj5jv" [1802b341-6ac6-46b0-99a3-db02ae5d8e46] Running
	I0501 02:58:40.566039    4712 system_pods.go:61] "kube-scheduler-ha-136200" [6be37365-544a-4367-9852-6eaa5b60e6ad] Running
	I0501 02:58:40.566039    4712 system_pods.go:61] "kube-scheduler-ha-136200-m02" [b2ae6bb2-989b-4598-99e3-f8494b006f3e] Running
	I0501 02:58:40.566039    4712 system_pods.go:61] "kube-scheduler-ha-136200-m03" [79e48699-dd30-47da-8e29-685b9fb437b8] Running
	I0501 02:58:40.566039    4712 system_pods.go:61] "kube-vip-ha-136200" [f6f631ac-0ba9-413a-8810-8a80e4be81b8] Running
	I0501 02:58:40.566039    4712 system_pods.go:61] "kube-vip-ha-136200-m02" [598e76fa-0703-40eb-a62c-f3947f06d0e0] Running
	I0501 02:58:40.566039    4712 system_pods.go:61] "kube-vip-ha-136200-m03" [a1bd8449-1900-4366-86a5-49e758a44ebd] Running
	I0501 02:58:40.566039    4712 system_pods.go:61] "storage-provisioner" [ca2bdb41-e7f0-4aa4-b343-0e3a85a4c04e] Running
	I0501 02:58:40.566039    4712 system_pods.go:74] duration metric: took 152.3128ms to wait for pod list to return data ...
	I0501 02:58:40.566039    4712 default_sa.go:34] waiting for default service account to be created ...
	I0501 02:58:40.731110    4712 request.go:629] Waited for 164.8435ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/default/serviceaccounts
	I0501 02:58:40.731110    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/default/serviceaccounts
	I0501 02:58:40.731110    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:40.731110    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:40.731110    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:40.736937    4712 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 02:58:40.737529    4712 default_sa.go:45] found service account: "default"
	I0501 02:58:40.737568    4712 default_sa.go:55] duration metric: took 171.5277ms for default service account to be created ...
	I0501 02:58:40.737568    4712 system_pods.go:116] waiting for k8s-apps to be running ...
	I0501 02:58:40.936328    4712 request.go:629] Waited for 198.4062ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:58:40.936390    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/namespaces/kube-system/pods
	I0501 02:58:40.936390    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:40.936390    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:40.936390    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:40.946796    4712 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0501 02:58:40.961809    4712 system_pods.go:86] 24 kube-system pods found
	I0501 02:58:40.961809    4712 system_pods.go:89] "coredns-7db6d8ff4d-2j8mj" [f945c979-ae51-4c8e-acf9-105adc3c83bc] Running
	I0501 02:58:40.961809    4712 system_pods.go:89] "coredns-7db6d8ff4d-rm4gm" [87b284b3-e8e1-452a-8c72-41a8bec62505] Running
	I0501 02:58:40.961809    4712 system_pods.go:89] "etcd-ha-136200" [509a726d-e9a1-4922-8e7e-f3d91ddef75f] Running
	I0501 02:58:40.961809    4712 system_pods.go:89] "etcd-ha-136200-m02" [8122eb28-1fdf-4ddf-ab30-c29e8bcb83c0] Running
	I0501 02:58:40.961809    4712 system_pods.go:89] "etcd-ha-136200-m03" [5f77fdbc-d14d-4d42-9880-fc7e5b2c58b8] Running
	I0501 02:58:40.961809    4712 system_pods.go:89] "kindnet-kb2x4" [6e660648-3dce-469f-a2c2-c99f445ceb20] Running
	I0501 02:58:40.961809    4712 system_pods.go:89] "kindnet-rlfkk" [ae08f4b9-98a8-4faf-ab4a-f04e900375bf] Running
	I0501 02:58:40.961809    4712 system_pods.go:89] "kindnet-sj2rc" [c0e605a0-1182-4977-a8ba-fabe9617bd3c] Running
	I0501 02:58:40.961809    4712 system_pods.go:89] "kube-apiserver-ha-136200" [53ea7d41-7132-4c89-9dbd-bedb2267b55f] Running
	I0501 02:58:40.961809    4712 system_pods.go:89] "kube-apiserver-ha-136200-m02" [fc4036e1-5cc9-4f27-8299-97ee4a29e8b4] Running
	I0501 02:58:40.962364    4712 system_pods.go:89] "kube-apiserver-ha-136200-m03" [cf2822d7-29da-4727-b4c1-19b593abbce8] Running
	I0501 02:58:40.962364    4712 system_pods.go:89] "kube-controller-manager-ha-136200" [4c988ab2-e056-4a0e-88c9-b62839c84d9f] Running
	I0501 02:58:40.962364    4712 system_pods.go:89] "kube-controller-manager-ha-136200-m02" [7a617a7e-7413-4f42-bfe2-763b7ace71ca] Running
	I0501 02:58:40.962364    4712 system_pods.go:89] "kube-controller-manager-ha-136200-m03" [f72989a2-322b-4b6d-884f-8888b7fb6e36] Running
	I0501 02:58:40.962364    4712 system_pods.go:89] "kube-proxy-8f67k" [9dedea03-3066-4852-98e2-10190699b2c5] Running
	I0501 02:58:40.962364    4712 system_pods.go:89] "kube-proxy-9ml9x" [c36f2b4f-ad90-4070-adf1-1ac165f86fdd] Running
	I0501 02:58:40.962434    4712 system_pods.go:89] "kube-proxy-zj5jv" [1802b341-6ac6-46b0-99a3-db02ae5d8e46] Running
	I0501 02:58:40.962434    4712 system_pods.go:89] "kube-scheduler-ha-136200" [6be37365-544a-4367-9852-6eaa5b60e6ad] Running
	I0501 02:58:40.962434    4712 system_pods.go:89] "kube-scheduler-ha-136200-m02" [b2ae6bb2-989b-4598-99e3-f8494b006f3e] Running
	I0501 02:58:40.962434    4712 system_pods.go:89] "kube-scheduler-ha-136200-m03" [79e48699-dd30-47da-8e29-685b9fb437b8] Running
	I0501 02:58:40.962434    4712 system_pods.go:89] "kube-vip-ha-136200" [f6f631ac-0ba9-413a-8810-8a80e4be81b8] Running
	I0501 02:58:40.962434    4712 system_pods.go:89] "kube-vip-ha-136200-m02" [598e76fa-0703-40eb-a62c-f3947f06d0e0] Running
	I0501 02:58:40.962434    4712 system_pods.go:89] "kube-vip-ha-136200-m03" [a1bd8449-1900-4366-86a5-49e758a44ebd] Running
	I0501 02:58:40.962497    4712 system_pods.go:89] "storage-provisioner" [ca2bdb41-e7f0-4aa4-b343-0e3a85a4c04e] Running
	I0501 02:58:40.962521    4712 system_pods.go:126] duration metric: took 224.9515ms to wait for k8s-apps to be running ...
	I0501 02:58:40.962521    4712 system_svc.go:44] waiting for kubelet service to be running ....
	I0501 02:58:40.975583    4712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:58:41.007354    4712 system_svc.go:56] duration metric: took 44.7618ms WaitForService to wait for kubelet
	I0501 02:58:41.007354    4712 kubeadm.go:576] duration metric: took 18.0193266s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 02:58:41.007354    4712 node_conditions.go:102] verifying NodePressure condition ...
	I0501 02:58:41.140806    4712 request.go:629] Waited for 133.382ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.217.218:8443/api/v1/nodes
	I0501 02:58:41.140922    4712 round_trippers.go:463] GET https://172.28.217.218:8443/api/v1/nodes
	I0501 02:58:41.140964    4712 round_trippers.go:469] Request Headers:
	I0501 02:58:41.140964    4712 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 02:58:41.141046    4712 round_trippers.go:473]     Accept: application/json, */*
	I0501 02:58:41.151428    4712 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0501 02:58:41.153995    4712 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 02:58:41.154053    4712 node_conditions.go:123] node cpu capacity is 2
	I0501 02:58:41.154053    4712 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 02:58:41.154113    4712 node_conditions.go:123] node cpu capacity is 2
	I0501 02:58:41.154113    4712 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 02:58:41.154113    4712 node_conditions.go:123] node cpu capacity is 2
	I0501 02:58:41.154113    4712 node_conditions.go:105] duration metric: took 146.7575ms to run NodePressure ...
	I0501 02:58:41.154113    4712 start.go:240] waiting for startup goroutines ...
	I0501 02:58:41.154113    4712 start.go:254] writing updated cluster config ...
	I0501 02:58:41.168562    4712 ssh_runner.go:195] Run: rm -f paused
	I0501 02:58:41.321592    4712 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0501 02:58:41.326673    4712 out.go:177] * Done! kubectl is now configured to use "ha-136200" cluster and "default" namespace by default
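
For readers tracing the wait loop above: each "waiting for ..." step is an ordinary GET against the apiserver (the round_trippers lines), followed by a check on the returned objects. A minimal client-go sketch of the same shape, assuming a standalone program and a placeholder kubeconfig path (this is not minikube's code):

    // poll_ready.go - sketch of the readiness checks walked through above:
    // list kube-system pods, confirm each is Running, then read each node's
    // CPU and ephemeral-storage capacity. Hypothetical helper.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed path; minikube manages its own kubeconfig per profile.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            fmt.Printf("%q %s\n", p.Name, p.Status.Phase) // expect Running, as logged above
        }

        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
        }
    }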
	
	
	==> Docker <==
	May 01 03:16:04 ha-136200 dockerd[1335]: time="2024-05-01T03:16:04.623522832Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 03:16:04 ha-136200 dockerd[1335]: time="2024-05-01T03:16:04.623615432Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 03:16:04 ha-136200 dockerd[1335]: time="2024-05-01T03:16:04.623633032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 03:16:04 ha-136200 dockerd[1335]: time="2024-05-01T03:16:04.623774233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 03:17:07 ha-136200 dockerd[1329]: time="2024-05-01T03:17:07.029193572Z" level=info msg="ignoring event" container=c09511b7df64318687d1349ed95e8ea256583377e93dd201d5bd9b578d54ae6d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 03:17:07 ha-136200 dockerd[1335]: time="2024-05-01T03:17:07.030842582Z" level=info msg="shim disconnected" id=c09511b7df64318687d1349ed95e8ea256583377e93dd201d5bd9b578d54ae6d namespace=moby
	May 01 03:17:07 ha-136200 dockerd[1335]: time="2024-05-01T03:17:07.031563386Z" level=warning msg="cleaning up after shim disconnected" id=c09511b7df64318687d1349ed95e8ea256583377e93dd201d5bd9b578d54ae6d namespace=moby
	May 01 03:17:07 ha-136200 dockerd[1335]: time="2024-05-01T03:17:07.031673387Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 03:17:07 ha-136200 dockerd[1335]: time="2024-05-01T03:17:07.392348938Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 03:17:07 ha-136200 dockerd[1335]: time="2024-05-01T03:17:07.394211549Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 03:17:07 ha-136200 dockerd[1335]: time="2024-05-01T03:17:07.394296050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 03:17:07 ha-136200 dockerd[1335]: time="2024-05-01T03:17:07.394412751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 03:17:39 ha-136200 dockerd[1329]: time="2024-05-01T03:17:39.581441308Z" level=info msg="Container failed to exit within 30s of signal 15 - using the force" container=8ff4bf0570939e4b798c511cfaed6e008ea485a0b0a00a841387f9e48c77eaf0 spanID=1db2f40b9b89612d traceID=f9513dd2d07be1e9499708e22fea75e8
	May 01 03:17:39 ha-136200 dockerd[1335]: time="2024-05-01T03:17:39.652820635Z" level=info msg="shim disconnected" id=8ff4bf0570939e4b798c511cfaed6e008ea485a0b0a00a841387f9e48c77eaf0 namespace=moby
	May 01 03:17:39 ha-136200 dockerd[1335]: time="2024-05-01T03:17:39.652930636Z" level=warning msg="cleaning up after shim disconnected" id=8ff4bf0570939e4b798c511cfaed6e008ea485a0b0a00a841387f9e48c77eaf0 namespace=moby
	May 01 03:17:39 ha-136200 dockerd[1335]: time="2024-05-01T03:17:39.652944236Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 03:17:39 ha-136200 dockerd[1329]: time="2024-05-01T03:17:39.654304344Z" level=info msg="ignoring event" container=8ff4bf0570939e4b798c511cfaed6e008ea485a0b0a00a841387f9e48c77eaf0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 03:17:39 ha-136200 dockerd[1335]: time="2024-05-01T03:17:39.923412652Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 03:17:39 ha-136200 dockerd[1335]: time="2024-05-01T03:17:39.923513053Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 03:17:39 ha-136200 dockerd[1335]: time="2024-05-01T03:17:39.923529253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 03:17:39 ha-136200 dockerd[1335]: time="2024-05-01T03:17:39.924265957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 03:17:45 ha-136200 dockerd[1329]: time="2024-05-01T03:17:45.202858606Z" level=info msg="ignoring event" container=9fd5c4a0cbda89b63d2a390027691706bec218bc5cd7b666d875ba2c542566b9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 03:17:45 ha-136200 dockerd[1335]: time="2024-05-01T03:17:45.206706229Z" level=info msg="shim disconnected" id=9fd5c4a0cbda89b63d2a390027691706bec218bc5cd7b666d875ba2c542566b9 namespace=moby
	May 01 03:17:45 ha-136200 dockerd[1335]: time="2024-05-01T03:17:45.206906330Z" level=warning msg="cleaning up after shim disconnected" id=9fd5c4a0cbda89b63d2a390027691706bec218bc5cd7b666d875ba2c542566b9 namespace=moby
	May 01 03:17:45 ha-136200 dockerd[1335]: time="2024-05-01T03:17:45.206925230Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	d23b410ade75a       c42f13656d0b2                                                                                         16 seconds ago       Running             kube-apiserver            1                   2455e947d4906       kube-apiserver-ha-136200
	9fd5c4a0cbda8       4950bb10b3f87                                                                                         48 seconds ago       Exited              kindnet-cni               1                   bdd01e6cca1ed       kindnet-sj2rc
	c89147849f8e6       22aaebb38f4a9                                                                                         About a minute ago   Running             kube-vip                  1                   7f28f99b3c8a8       kube-vip-ha-136200
	a62d0486d35de       6e38f40d628db                                                                                         About a minute ago   Running             storage-provisioner       1                   aaa3d1f50041e       storage-provisioner
	bb23816e7b6b8       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   18 minutes ago       Running             busybox                   0                   c61d49828a30c       busybox-fc5497c4f-6mlkh
	229343dc7dba5       cbb01a7bd410d                                                                                         26 minutes ago       Running             coredns                   0                   54bbf0662d422       coredns-7db6d8ff4d-rm4gm
	247f815bf0531       6e38f40d628db                                                                                         26 minutes ago       Exited              storage-provisioner       0                   aaa3d1f50041e       storage-provisioner
	822aaf8c270e3       cbb01a7bd410d                                                                                         26 minutes ago       Running             coredns                   0                   cadf8314e6ab7       coredns-7db6d8ff4d-2j8mj
	562cd55ab9702       a0bf559e280cf                                                                                         27 minutes ago       Running             kube-proxy                0                   579e0dba427c2       kube-proxy-8f67k
	1c063bfe224cd       ghcr.io/kube-vip/kube-vip@sha256:82698885b3b5f926cd940b7000549f3d43850cb6565a708162900c1475a83016     27 minutes ago       Exited              kube-vip                  0                   7f28f99b3c8a8       kube-vip-ha-136200
	b6454ceb34cad       259c8277fcbbc                                                                                         27 minutes ago       Running             kube-scheduler            0                   e6cf1f3e651b3       kube-scheduler-ha-136200
	8ff4bf0570939       c42f13656d0b2                                                                                         27 minutes ago       Exited              kube-apiserver            0                   2455e947d4906       kube-apiserver-ha-136200
	8fa3aa565b366       c7aad43836fa5                                                                                         27 minutes ago       Running             kube-controller-manager   0                   c7e42fd34e247       kube-controller-manager-ha-136200
	8b0d01885db55       3861cfcd7c04c                                                                                         27 minutes ago       Running             etcd                      0                   da46759fd8e15       etcd-ha-136200
	
	
	==> coredns [229343dc7dba] <==
	[INFO] 10.244.1.2:43398 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000089301s
	[INFO] 10.244.1.2:52211 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001122s
	[INFO] 10.244.1.2:35470 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.013228661s
	[INFO] 10.244.1.2:40781 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000174701s
	[INFO] 10.244.1.2:45257 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000274201s
	[INFO] 10.244.1.2:36114 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000165601s
	[INFO] 10.244.2.2:56600 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000371102s
	[INFO] 10.244.2.2:39742 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000250502s
	[INFO] 10.244.0.4:45933 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116901s
	[INFO] 10.244.0.4:53681 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000082001s
	[INFO] 10.244.2.2:38830 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000232701s
	[INFO] 10.244.0.4:51196 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.001489507s
	[INFO] 10.244.0.4:58773 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000264301s
	[INFO] 10.244.0.4:43314 - 5 "PTR IN 1.208.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.013461063s
	[INFO] 10.244.1.2:41778 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000092301s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=4057&timeout=5m34s&timeoutSeconds=334&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=35, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=35, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: Trace[2057425121]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (01-May-2024 03:17:42.043) (total time: 10040ms):
	Trace[2057425121]: ---"Objects listed" error:Unauthorized 10039ms (03:17:52.082)
	Trace[2057425121]: [10.040052023s] [10.040052023s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=35, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=4079": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=4079": dial tcp 10.96.0.1:443: connect: no route to host
	
	
	==> coredns [822aaf8c270e] <==
	[INFO] 10.244.0.4:55974 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.012564658s
	[INFO] 10.244.0.4:45253 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000139901s
	[INFO] 10.244.0.4:60045 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001515s
	[INFO] 10.244.0.4:39879 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000175501s
	[INFO] 10.244.0.4:42089 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000310501s
	[INFO] 10.244.1.2:53821 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111101s
	[INFO] 10.244.1.2:42651 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000116201s
	[INFO] 10.244.2.2:34505 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078s
	[INFO] 10.244.2.2:54873 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001606s
	[INFO] 10.244.0.4:60573 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001105s
	[INFO] 10.244.0.4:37086 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000727s
	[INFO] 10.244.1.2:52370 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123901s
	[INFO] 10.244.1.2:35190 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000277501s
	[INFO] 10.244.1.2:42611 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000158301s
	[INFO] 10.244.1.2:36993 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000280201s
	[INFO] 10.244.2.2:52181 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000206701s
	[INFO] 10.244.2.2:37229 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000092101s
	[INFO] 10.244.2.2:56027 - 5 "PTR IN 1.208.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001251s
	[INFO] 10.244.0.4:55246 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000211601s
	[INFO] 10.244.1.2:57784 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000270801s
	[INFO] 10.244.1.2:39482 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001183s
	[INFO] 10.244.1.2:53277 - 5 "PTR IN 1.208.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000078801s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=35, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=35, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=35, ErrCode=NO_ERROR, debug=""
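
The INFO lines in both coredns logs show the usual search-path walk: partial names like "kubernetes.default" return NXDOMAIN until the full "kubernetes.default.svc.cluster.local" resolves NOERROR. A tiny probe of the same behavior, sketched for illustration (hypothetical program; it assumes it runs inside a cluster pod whose resolv.conf carries the cluster search domains):

    // dns_probe.go - sketch of the lookup pattern behind the query log above.
    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        for _, host := range []string{
            "kubernetes.default",                   // expanded via the pod's search domains
            "kubernetes.default.svc.cluster.local", // FQDN; NOERROR in the log
        } {
            addrs, err := net.LookupHost(host)
            fmt.Println(host, addrs, err)
        }
    }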
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server: etcdserver: request timed out
	
	
	==> dmesg <==
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[May 1 02:49] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.218573] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[ +31.318095] systemd-fstab-generator[950]: Ignoring "noauto" option for root device
	[  +0.121878] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.646066] systemd-fstab-generator[989]: Ignoring "noauto" option for root device
	[  +0.241331] systemd-fstab-generator[1001]: Ignoring "noauto" option for root device
	[  +0.276456] systemd-fstab-generator[1015]: Ignoring "noauto" option for root device
	[  +2.872310] systemd-fstab-generator[1184]: Ignoring "noauto" option for root device
	[  +0.245693] systemd-fstab-generator[1196]: Ignoring "noauto" option for root device
	[  +0.234055] systemd-fstab-generator[1209]: Ignoring "noauto" option for root device
	[  +0.318386] systemd-fstab-generator[1224]: Ignoring "noauto" option for root device
	[May 1 02:50] systemd-fstab-generator[1321]: Ignoring "noauto" option for root device
	[  +0.117675] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.894847] systemd-fstab-generator[1526]: Ignoring "noauto" option for root device
	[  +6.744854] systemd-fstab-generator[1728]: Ignoring "noauto" option for root device
	[  +0.118239] kauditd_printk_skb: 73 callbacks suppressed
	[  +6.246999] kauditd_printk_skb: 67 callbacks suppressed
	[  +4.464074] systemd-fstab-generator[2223]: Ignoring "noauto" option for root device
	[ +14.473066] kauditd_printk_skb: 17 callbacks suppressed
	[  +7.151247] kauditd_printk_skb: 29 callbacks suppressed
	[May 1 02:54] kauditd_printk_skb: 26 callbacks suppressed
	[May 1 03:02] hrtimer: interrupt took 2691714 ns
	[May 1 03:17] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [8b0d01885db5] <==
	{"level":"warn","ts":"2024-05-01T03:18:06.001759Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-01T03:17:52.23818Z","time spent":"13.76357014s","remote":"127.0.0.1:47750","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":0,"response size":0,"request content":"key:\"/registry/leases/kube-system/apiserver-hfj24ss6mwgafbm5nh5e7bi6dm\" "}
	{"level":"warn","ts":"2024-05-01T03:18:06.001998Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-01T03:17:53.052102Z","time spent":"12.949884973s","remote":"127.0.0.1:47806","response type":"/etcdserverpb.KV/Range","request count":0,"request size":69,"response count":0,"response size":0,"request content":"key:\"/registry/poddisruptionbudgets/\" range_end:\"/registry/poddisruptionbudgets0\" limit:10000 "}
	{"level":"warn","ts":"2024-05-01T03:18:06.002246Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-01T03:17:53.668372Z","time spent":"12.333862889s","remote":"127.0.0.1:47672","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":0,"response size":0,"request content":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" limit:500 "}
	{"level":"warn","ts":"2024-05-01T03:18:06.0024Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-01T03:17:53.052264Z","time spent":"12.950125675s","remote":"127.0.0.1:47706","response type":"/etcdserverpb.KV/Range","request count":0,"request size":77,"response count":0,"response size":0,"request content":"key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" limit:10000 "}
	{"level":"warn","ts":"2024-05-01T03:18:06.003012Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-01T03:17:53.052226Z","time spent":"12.950773578s","remote":"127.0.0.1:47652","response type":"/etcdserverpb.KV/Range","request count":0,"request size":73,"response count":0,"response size":0,"request content":"key:\"/registry/persistentvolumeclaims/\" range_end:\"/registry/persistentvolumeclaims0\" limit:10000 "}
	{"level":"warn","ts":"2024-05-01T03:18:06.003286Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-01T03:17:53.052211Z","time spent":"12.95106448s","remote":"127.0.0.1:47948","response type":"/etcdserverpb.KV/Range","request count":0,"request size":83,"response count":0,"response size":0,"request content":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" limit:10000 "}
	{"level":"warn","ts":"2024-05-01T03:18:06.003476Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-01T03:17:53.051234Z","time spent":"12.952231487s","remote":"127.0.0.1:47698","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":0,"response size":0,"request content":"key:\"/registry/controllers/\" range_end:\"/registry/controllers0\" limit:10000 "}
	{"level":"warn","ts":"2024-05-01T03:18:06.003686Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-01T03:17:53.630793Z","time spent":"12.372879522s","remote":"127.0.0.1:47688","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":0,"response size":0,"request content":"key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" limit:500 "}
	{"level":"warn","ts":"2024-05-01T03:18:06.003838Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-01T03:17:53.630793Z","time spent":"12.373035123s","remote":"127.0.0.1:47602","response type":"/etcdserverpb.KV/Range","request count":0,"request size":49,"response count":0,"response size":0,"request content":"key:\"/registry/namespaces/\" range_end:\"/registry/namespaces0\" limit:500 "}
	{"level":"warn","ts":"2024-05-01T03:18:06.003991Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-01T03:17:53.594142Z","time spent":"12.409800943s","remote":"127.0.0.1:47934","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":0,"response size":0,"request content":"key:\"/registry/flowschemas/\" range_end:\"/registry/flowschemas0\" limit:500 "}
	{"level":"info","ts":"2024-05-01T03:18:05.993222Z","caller":"traceutil/trace.go:171","msg":"trace[759143893] range","detail":"{range_begin:/registry/flowschemas/; range_end:/registry/flowschemas0; }","duration":"12.947071757s","start":"2024-05-01T03:17:53.046144Z","end":"2024-05-01T03:18:05.993216Z","steps":["trace[759143893] 'agreement among raft nodes before linearized reading'  (duration: 12.937526499s)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T03:18:06.004334Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-01T03:17:53.04614Z","time spent":"12.958181423s","remote":"127.0.0.1:47934","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":0,"response size":0,"request content":"key:\"/registry/flowschemas/\" range_end:\"/registry/flowschemas0\" limit:10000 "}
	{"level":"info","ts":"2024-05-01T03:18:05.993421Z","caller":"traceutil/trace.go:171","msg":"trace[514557830] range","detail":"{range_begin:/registry/pods/; range_end:/registry/pods0; }","duration":"12.468094392s","start":"2024-05-01T03:17:53.525303Z","end":"2024-05-01T03:18:05.993398Z","steps":["trace[514557830] 'agreement among raft nodes before linearized reading'  (duration: 12.458458734s)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T03:18:06.004719Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-01T03:17:53.525288Z","time spent":"12.479420659s","remote":"127.0.0.1:47680","response type":"/etcdserverpb.KV/Range","request count":0,"request size":37,"response count":0,"response size":0,"request content":"key:\"/registry/pods/\" range_end:\"/registry/pods0\" limit:500 "}
	{"level":"warn","ts":"2024-05-01T03:18:05.993447Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-01T03:17:53.046032Z","time spent":"12.947408858s","remote":"127.0.0.1:47922","response type":"/etcdserverpb.KV/Range","request count":0,"request size":83,"response count":0,"response size":0,"request content":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" limit:10000 "}
	{"level":"warn","ts":"2024-05-01T03:18:05.993475Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-01T03:17:53.486805Z","time spent":"12.506665722s","remote":"127.0.0.1:47598","response type":"/etcdserverpb.KV/Range","request count":0,"request size":73,"response count":0,"response size":0,"request content":"key:\"/registry/configmaps/kube-system/\" range_end:\"/registry/configmaps/kube-system0\" limit:500 "}
	{"level":"info","ts":"2024-05-01T03:18:05.993519Z","caller":"traceutil/trace.go:171","msg":"trace[1114486648] range","detail":"{range_begin:/registry/csidrivers/; range_end:/registry/csidrivers0; }","duration":"12.947489658s","start":"2024-05-01T03:17:53.046025Z","end":"2024-05-01T03:18:05.993515Z","steps":["trace[1114486648] 'agreement among raft nodes before linearized reading'  (duration: 12.9377599s)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T03:18:06.005645Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-01T03:17:53.046013Z","time spent":"12.959619432s","remote":"127.0.0.1:47898","response type":"/etcdserverpb.KV/Range","request count":0,"request size":49,"response count":0,"response size":0,"request content":"key:\"/registry/csidrivers/\" range_end:\"/registry/csidrivers0\" limit:10000 "}
	{"level":"warn","ts":"2024-05-01T03:18:05.994481Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-01T03:17:53.028647Z","time spent":"12.965829668s","remote":"127.0.0.1:47658","response type":"/etcdserverpb.KV/Range","request count":0,"request size":65,"response count":0,"response size":0,"request content":"key:\"/registry/services/endpoints/\" range_end:\"/registry/services/endpoints0\" limit:10000 "}
	{"level":"warn","ts":"2024-05-01T03:18:06.01543Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-01T03:17:53.578786Z","time spent":"12.436624303s","remote":"127.0.0.1:47796","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":0,"response size":0,"request content":"key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" limit:500 "}
	{"level":"warn","ts":"2024-05-01T03:18:06.015774Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-01T03:17:53.531038Z","time spent":"12.484724792s","remote":"127.0.0.1:47784","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":0,"response size":0,"request content":"key:\"/registry/ingressclasses/\" range_end:\"/registry/ingressclasses0\" limit:500 "}
	{"level":"warn","ts":"2024-05-01T03:18:06.016042Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-01T03:17:53.531012Z","time spent":"12.485019593s","remote":"127.0.0.1:47834","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":0,"response size":0,"request content":"key:\"/registry/clusterrolebindings/\" range_end:\"/registry/clusterrolebindings0\" limit:500 "}
	{"level":"warn","ts":"2024-05-01T03:18:06.016923Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-01T03:17:53.053807Z","time spent":"12.963101952s","remote":"127.0.0.1:48044","response type":"/etcdserverpb.KV/Range","request count":0,"request size":83,"response count":0,"response size":0,"request content":"key:\"/registry/validatingadmissionpolicies/\" range_end:\"/registry/validatingadmissionpolicies0\" limit:500 "}
	{"level":"warn","ts":"2024-05-01T03:18:06.035174Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"477eb305d8136a0f","rtt":"2.490307ms","error":"dial tcp 172.28.216.62:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-05-01T03:18:06.035729Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"477eb305d8136a0f","rtt":"35.38366ms","error":"dial tcp 172.28.216.62:2380: connect: no route to host"}
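
These warn lines fit together: the prober cannot reach peer 477eb305d8136a0f at 172.28.216.62:2380, and with "agreement among raft nodes" stalled, linearized Range reads sit for 12-13s before failing. One way to confirm this from a client is to ask an endpoint for its status, sketched here with etcd's clientv3 (the endpoint address is an assumption, and a real minikube control plane also needs client certificates):

    // etcd_health.go - sketch: query an etcd endpoint's status, which stalls
    // or errors while quorum is lost, matching the reads above. Hypothetical.
    package main

    import (
        "context"
        "fmt"
        "time"

        clientv3 "go.etcd.io/etcd/client/v3"
    )

    func main() {
        cli, err := clientv3.New(clientv3.Config{
            Endpoints:   []string{"172.28.217.218:2379"}, // assumed endpoint
            DialTimeout: 5 * time.Second,
        })
        if err != nil {
            panic(err)
        }
        defer cli.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()
        st, err := cli.Status(ctx, "172.28.217.218:2379")
        if err != nil {
            fmt.Println("status failed:", err) // expected while quorum is lost
            return
        }
        fmt.Printf("leader=%x raftTerm=%d dbSize=%d\n", st.Leader, st.RaftTerm, st.DbSize)
    }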
	
	
	==> kernel <==
	 03:18:06 up 29 min,  0 users,  load average: 1.11, 0.84, 0.57
	Linux ha-136200 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [9fd5c4a0cbda] <==
	I0501 03:17:07.875811       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0501 03:17:07.876008       1 main.go:107] hostIP = 172.28.217.218
	podIP = 172.28.217.218
	I0501 03:17:07.876536       1 main.go:116] setting mtu 1500 for CNI 
	I0501 03:17:07.876577       1 main.go:146] kindnetd IP family: "ipv4"
	I0501 03:17:07.876602       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0501 03:17:16.981738       1 main.go:191] Failed to get nodes, retrying after error: etcdserver: request timed out
	I0501 03:17:31.017180       1 main.go:191] Failed to get nodes, retrying after error: etcdserver: request timed out
	I0501 03:17:35.031944       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0501 03:17:38.103845       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0501 03:17:41.176130       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
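
The panic above is the terminal branch of a bounded retry: kindnetd re-fetches the node list on each error (the main.go:191 lines) and gives up once a retry cap is hit. A generic sketch of that control flow, with the attempt count and delay as assumptions (this is not kindnetd's source):

    // retry.go - minimal sketch of the retry-then-panic shape seen above.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    func withRetries[T any](attempts int, delay time.Duration, f func() (T, error)) (T, error) {
        var zero T
        var err error
        for i := 0; i < attempts; i++ {
            var v T
            if v, err = f(); err == nil {
                return v, nil
            }
            fmt.Println("Failed, retrying after error:", err) // cf. main.go:191 above
            time.Sleep(delay)
        }
        return zero, fmt.Errorf("reached maximum retries: %w", err)
    }

    func main() {
        _, err := withRetries(5, time.Second, func() ([]string, error) {
            return nil, errors.New("dial tcp 10.96.0.1:443: connect: no route to host")
        })
        if err != nil {
            panic(err) // cf. the goroutine 1 panic above
        }
    }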
	
	
	==> kube-apiserver [8ff4bf057093] <==
	Trace[440993113]: ---"Objects listed" error:etcdserver: request timed out 12992ms (03:17:31.014)
	Trace[440993113]: [12.992990174s] [12.992990174s] END
	E0501 03:17:31.014130       1 cacher.go:475] cacher (validatingadmissionpolicybindings.admissionregistration.k8s.io): unexpected ListAndWatch error: failed to list *admissionregistration.ValidatingAdmissionPolicyBinding: etcdserver: request timed out; reinitializing...
	E0501 03:17:31.010074       1 controller.go:131] Unable to remove endpoints from kubernetes service: etcdserver: request timed out
	E0501 03:17:35.993688       1 status.go:71] apiserver received an error that is not an metav1.Status: &status.Error{s:(*status.Status)(0xc00c51e730)}: rpc error: code = DeadlineExceeded desc = context deadline exceeded
	I0501 03:17:35.993909       1 trace.go:236] Trace[1903462767]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:1d8060e8-b3d3-4bb6-b4db-a2f0b36755c5,client:172.28.217.218,api-group:,api-version:v1,name:ha-136200-m03,subresource:status,namespace:,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/ha-136200-m03/status,user-agent:kube-controller-manager/v1.30.0 (linux/amd64) kubernetes/7c48c2b/system:serviceaccount:kube-system:node-controller,verb:PUT (01-May-2024 03:17:28.987) (total time: 7006ms):
	Trace[1903462767]: ["GuaranteedUpdate etcd3" audit-id:1d8060e8-b3d3-4bb6-b4db-a2f0b36755c5,key:/minions/ha-136200-m03,type:*core.Node,resource:nodes 7006ms (03:17:28.987)
	Trace[1903462767]:  ---"Txn call failed" err:rpc error: code = DeadlineExceeded desc = context deadline exceeded 7003ms (03:17:35.993)]
	Trace[1903462767]: [7.006392551s] [7.006392551s] END
	E0501 03:17:35.994421       1 status.go:71] apiserver received an error that is not an metav1.Status: &status.Error{s:(*status.Status)(0xc00c3e7a40)}: rpc error: code = DeadlineExceeded desc = context deadline exceeded
	I0501 03:17:35.996258       1 trace.go:236] Trace[1258548346]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:b894c590-554d-4591-808f-2e50b94e06a7,client:172.28.217.218,api-group:,api-version:v1,name:ha-136200,subresource:status,namespace:,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/ha-136200/status,user-agent:kube-controller-manager/v1.30.0 (linux/amd64) kubernetes/7c48c2b/system:serviceaccount:kube-system:node-controller,verb:PUT (01-May-2024 03:17:28.987) (total time: 7008ms):
	Trace[1258548346]: ["GuaranteedUpdate etcd3" audit-id:b894c590-554d-4591-808f-2e50b94e06a7,key:/minions/ha-136200,type:*core.Node,resource:nodes 7008ms (03:17:28.987)
	Trace[1258548346]:  ---"Txn call failed" err:rpc error: code = DeadlineExceeded desc = context deadline exceeded 7003ms (03:17:35.994)]
	Trace[1258548346]: [7.008835166s] [7.008835166s] END
	E0501 03:17:37.981443       1 status.go:71] apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:"etcdserver: request timed out"}: etcdserver: request timed out
	E0501 03:17:37.981864       1 repair.go:85] unable to refresh the port allocations: etcdserver: request timed out
	I0501 03:17:37.982631       1 trace.go:236] Trace[1682319260]: "Get" accept:application/json, */*,audit-id:56111ac9-62f0-49d3-9c5a-f746982557e9,client:172.28.217.218,api-group:,api-version:v1,name:k8s.io-minikube-hostpath,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (01-May-2024 03:17:27.634) (total time: 10348ms):
	Trace[1682319260]: [10.348176613s] [10.348176613s] END
	E0501 03:17:37.983130       1 repair.go:127] unable to refresh the service IP block: etcdserver: request timed out
	I0501 03:17:37.983423       1 controller.go:157] Shutting down quota evaluator
	I0501 03:17:37.983469       1 controller.go:176] quota evaluator worker shutdown
	I0501 03:17:37.983485       1 controller.go:176] quota evaluator worker shutdown
	I0501 03:17:37.983576       1 controller.go:176] quota evaluator worker shutdown
	I0501 03:17:37.983660       1 controller.go:176] quota evaluator worker shutdown
	I0501 03:17:37.984078       1 controller.go:176] quota evaluator worker shutdown
	
	
	==> kube-apiserver [d23b410ade75] <==
	Trace[1409823301]: [13.055089403s] [13.055089403s] END
	E0501 03:18:06.077104       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Role: failed to list *v1.Role: etcdserver: request timed out
	W0501 03:18:06.073513       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: etcdserver: request timed out
	I0501 03:18:06.077490       1 trace.go:236] Trace[935516958]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (01-May-2024 03:17:53.665) (total time: 12412ms):
	Trace[935516958]: ---"Objects listed" error:etcdserver: request timed out 12408ms (03:18:06.073)
	Trace[935516958]: [12.412047157s] [12.412047157s] END
	E0501 03:18:06.077527       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: etcdserver: request timed out
	I0501 03:18:06.074100       1 trace.go:236] Trace[169790656]: "Reflector ListAndWatch" name:storage/cacher.go:/deployments (01-May-2024 03:17:53.067) (total time: 13006ms):
	Trace[169790656]: ---"Objects listed" error:etcdserver: request timed out 12970ms (03:18:06.038)
	Trace[169790656]: [13.006606713s] [13.006606713s] END
	E0501 03:18:06.077770       1 cacher.go:475] cacher (deployments.apps): unexpected ListAndWatch error: failed to list *apps.Deployment: etcdserver: request timed out; reinitializing...
	I0501 03:18:06.074083       1 trace.go:236] Trace[1466566722]: "Reflector ListAndWatch" name:storage/cacher.go:/roles (01-May-2024 03:17:53.016) (total time: 13057ms):
	Trace[1466566722]: ---"Objects listed" error:etcdserver: request timed out 13021ms (03:18:06.038)
	Trace[1466566722]: [13.057321016s] [13.057321016s] END
	E0501 03:18:06.077790       1 cacher.go:475] cacher (roles.rbac.authorization.k8s.io): unexpected ListAndWatch error: failed to list *rbac.Role: etcdserver: request timed out; reinitializing...
	W0501 03:18:06.078170       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: etcdserver: request timed out
	W0501 03:18:06.078204       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ValidatingAdmissionPolicy: etcdserver: request timed out
	I0501 03:18:06.078676       1 trace.go:236] Trace[881175502]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (01-May-2024 03:17:53.522) (total time: 12556ms):
	Trace[881175502]: ---"Objects listed" error:etcdserver: request timed out 12556ms (03:18:06.078)
	Trace[881175502]: [12.556306019s] [12.556306019s] END
	E0501 03:18:06.078726       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: etcdserver: request timed out
	I0501 03:18:06.078759       1 trace.go:236] Trace[638043519]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (01-May-2024 03:17:53.050) (total time: 13027ms):
	Trace[638043519]: ---"Objects listed" error:etcdserver: request timed out 13027ms (03:18:06.078)
	Trace[638043519]: [13.027736139s] [13.027736139s] END
	E0501 03:18:06.078828       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ValidatingAdmissionPolicy: failed to list *v1.ValidatingAdmissionPolicy: etcdserver: request timed out
	
	
	==> kube-controller-manager [8fa3aa565b36] <==
	W0501 03:18:03.165850       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: apiservices.apiregistration.k8s.io is forbidden: User "system:kube-controller-manager" cannot list resource "apiservices" in API group "apiregistration.k8s.io" at the cluster scope
	E0501 03:18:03.166130       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: apiservices.apiregistration.k8s.io is forbidden: User "system:kube-controller-manager" cannot list resource "apiservices" in API group "apiregistration.k8s.io" at the cluster scope
	W0501 03:18:03.312813       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ValidatingAdmissionPolicy: validatingadmissionpolicies.admissionregistration.k8s.io is forbidden: User "system:kube-controller-manager" cannot list resource "validatingadmissionpolicies" in API group "admissionregistration.k8s.io" at the cluster scope
	E0501 03:18:03.312885       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ValidatingAdmissionPolicy: failed to list *v1.ValidatingAdmissionPolicy: validatingadmissionpolicies.admissionregistration.k8s.io is forbidden: User "system:kube-controller-manager" cannot list resource "validatingadmissionpolicies" in API group "admissionregistration.k8s.io" at the cluster scope
	W0501 03:18:03.355171       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: customresourcedefinitions.apiextensions.k8s.io is forbidden: User "system:kube-controller-manager" cannot list resource "customresourcedefinitions" in API group "apiextensions.k8s.io" at the cluster scope
	E0501 03:18:03.355310       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: customresourcedefinitions.apiextensions.k8s.io is forbidden: User "system:kube-controller-manager" cannot list resource "customresourcedefinitions" in API group "apiextensions.k8s.io" at the cluster scope
	W0501 03:18:03.505349       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-controller-manager" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0501 03:18:03.505434       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-controller-manager" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0501 03:18:03.537405       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-controller-manager" cannot list resource "nodes" in API group "" at the cluster scope
	E0501 03:18:03.537519       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-controller-manager" cannot list resource "nodes" in API group "" at the cluster scope
	W0501 03:18:03.923187       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.DaemonSet: daemonsets.apps is forbidden: User "system:kube-controller-manager" cannot list resource "daemonsets" in API group "apps" at the cluster scope
	E0501 03:18:03.923254       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.DaemonSet: failed to list *v1.DaemonSet: daemonsets.apps is forbidden: User "system:kube-controller-manager" cannot list resource "daemonsets" in API group "apps" at the cluster scope
	W0501 03:18:04.003702       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-controller-manager" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0501 03:18:04.003860       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-controller-manager" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0501 03:18:04.089208       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Deployment: deployments.apps is forbidden: User "system:kube-controller-manager" cannot list resource "deployments" in API group "apps" at the cluster scope
	E0501 03:18:04.089267       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Deployment: failed to list *v1.Deployment: deployments.apps is forbidden: User "system:kube-controller-manager" cannot list resource "deployments" in API group "apps" at the cluster scope
	W0501 03:18:04.111380       1 client_builder_dynamic.go:197] get or create service account failed: serviceaccounts "node-controller" is forbidden: User "system:kube-controller-manager" cannot get resource "serviceaccounts" in API group "" in the namespace "kube-system"
	W0501 03:18:04.584140       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ValidatingWebhookConfiguration: validatingwebhookconfigurations.admissionregistration.k8s.io is forbidden: User "system:kube-controller-manager" cannot list resource "validatingwebhookconfigurations" in API group "admissionregistration.k8s.io" at the cluster scope
	E0501 03:18:04.584217       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ValidatingWebhookConfiguration: failed to list *v1.ValidatingWebhookConfiguration: validatingwebhookconfigurations.admissionregistration.k8s.io is forbidden: User "system:kube-controller-manager" cannot list resource "validatingwebhookconfigurations" in API group "admissionregistration.k8s.io" at the cluster scope
	W0501 03:18:04.612665       1 client_builder_dynamic.go:197] get or create service account failed: serviceaccounts "node-controller" is forbidden: User "system:kube-controller-manager" cannot get resource "serviceaccounts" in API group "" in the namespace "kube-system"
	W0501 03:18:04.795840       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-controller-manager" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0501 03:18:04.795885       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-controller-manager" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0501 03:18:05.616103       1 client_builder_dynamic.go:197] get or create service account failed: serviceaccounts "node-controller" is forbidden: User "system:kube-controller-manager" cannot get resource "serviceaccounts" in API group "" in the namespace "kube-system"
	W0501 03:18:05.621082       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:kube-controller-manager" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0501 03:18:05.621209       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:kube-controller-manager" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
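
These denials share one likely cause: a freshly restarted apiserver typically rejects requests until its RBAC informer caches sync, so system:kube-controller-manager is temporarily "forbidden" to list resources it normally owns, which matches the kube-apiserver restart visible above. The programmatic counterpart of `kubectl auth can-i`, useful for checking whether such a denial persists, is a SelfSubjectAccessReview; a sketch follows (hypothetical standalone check, kubeconfig path assumed):

    // can_i.go - sketch: ask the apiserver whether the current identity may
    // list daemonsets, mirroring one of the denied verbs above. Hypothetical.
    package main

    import (
        "context"
        "fmt"

        authv1 "k8s.io/api/authorization/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        sar := &authv1.SelfSubjectAccessReview{
            Spec: authv1.SelfSubjectAccessReviewSpec{
                ResourceAttributes: &authv1.ResourceAttributes{
                    Verb: "list", Group: "apps", Resource: "daemonsets",
                },
            },
        }
        resp, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(context.TODO(), sar, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("allowed:", resp.Status.Allowed, resp.Status.Reason)
    }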
	
	
	==> kube-proxy [562cd55ab970] <==
	E0501 03:16:49.407677       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-136200&resourceVersion=4079": dial tcp 172.28.223.254:8443: connect: no route to host
	W0501 03:16:52.473786       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-136200&resourceVersion=4079": dial tcp 172.28.223.254:8443: connect: no route to host
	E0501 03:16:52.473901       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-136200&resourceVersion=4079": dial tcp 172.28.223.254:8443: connect: no route to host
	W0501 03:16:52.474217       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=4057": dial tcp 172.28.223.254:8443: connect: no route to host
	E0501 03:16:52.474315       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=4057": dial tcp 172.28.223.254:8443: connect: no route to host
	W0501 03:16:52.474601       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=4025": dial tcp 172.28.223.254:8443: connect: no route to host
	E0501 03:16:52.474803       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=4025": dial tcp 172.28.223.254:8443: connect: no route to host
	W0501 03:16:58.617400       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-136200&resourceVersion=4079": dial tcp 172.28.223.254:8443: connect: no route to host
	E0501 03:16:58.617711       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-136200&resourceVersion=4079": dial tcp 172.28.223.254:8443: connect: no route to host
	W0501 03:16:58.618330       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=4025": dial tcp 172.28.223.254:8443: connect: no route to host
	E0501 03:16:58.618422       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=4025": dial tcp 172.28.223.254:8443: connect: no route to host
	W0501 03:16:58.618821       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=4057": dial tcp 172.28.223.254:8443: connect: no route to host
	E0501 03:16:58.619261       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=4057": dial tcp 172.28.223.254:8443: connect: no route to host
	W0501 03:17:07.832121       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=4057": dial tcp 172.28.223.254:8443: connect: no route to host
	E0501 03:17:07.833314       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=4057": dial tcp 172.28.223.254:8443: connect: no route to host
	W0501 03:17:10.904361       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=4025": dial tcp 172.28.223.254:8443: connect: no route to host
	E0501 03:17:10.904625       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=4025": dial tcp 172.28.223.254:8443: connect: no route to host
	W0501 03:17:13.976924       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-136200&resourceVersion=4079": dial tcp 172.28.223.254:8443: connect: no route to host
	E0501 03:17:13.978269       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-136200&resourceVersion=4079": dial tcp 172.28.223.254:8443: connect: no route to host
	W0501 03:17:23.192126       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=4057": dial tcp 172.28.223.254:8443: connect: no route to host
	E0501 03:17:23.192219       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=4057": dial tcp 172.28.223.254:8443: connect: no route to host
	W0501 03:17:26.264695       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=4025": dial tcp 172.28.223.254:8443: connect: no route to host
	E0501 03:17:26.268038       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=4025": dial tcp 172.28.223.254:8443: connect: no route to host
	W0501 03:17:32.408317       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-136200&resourceVersion=4079": dial tcp 172.28.223.254:8443: connect: no route to host
	E0501 03:17:32.408537       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-136200&resourceVersion=4079": dial tcp 172.28.223.254:8443: connect: no route to host
	
	
	==> kube-scheduler [b6454ceb34ca] <==
	E0501 03:18:00.120696       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0501 03:18:00.147649       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0501 03:18:00.147871       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0501 03:18:00.304596       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0501 03:18:00.304762       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0501 03:18:00.533360       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0501 03:18:00.533542       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0501 03:18:00.828932       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0501 03:18:00.829121       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0501 03:18:00.949338       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0501 03:18:00.949518       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0501 03:18:01.214218       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0501 03:18:01.214333       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0501 03:18:02.813100       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0501 03:18:02.813296       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0501 03:18:03.362627       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0501 03:18:03.362795       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0501 03:18:03.436388       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0501 03:18:03.436431       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0501 03:18:03.954197       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0501 03:18:03.954241       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0501 03:18:04.002881       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0501 03:18:04.003275       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0501 03:18:04.925299       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0501 03:18:04.925495       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	
	
	==> kubelet <==
	May 01 03:17:50 ha-136200 kubelet[2230]: E0501 03:17:50.839891    2230 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-136200\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-136200?resourceVersion=0&timeout=10s\": dial tcp 172.28.223.254:8443: connect: no route to host"
	May 01 03:17:50 ha-136200 kubelet[2230]: I0501 03:17:50.839897    2230 status_manager.go:853] "Failed to get status for pod" podUID="ca2bdb41-e7f0-4aa4-b343-0e3a85a4c04e" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 172.28.223.254:8443: connect: no route to host"
	May 01 03:17:53 ha-136200 kubelet[2230]: E0501 03:17:53.911685    2230 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-136200?timeout=10s\": dial tcp 172.28.223.254:8443: connect: no route to host" interval="7s"
	May 01 03:17:53 ha-136200 kubelet[2230]: E0501 03:17:53.911684    2230 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events\": dial tcp 172.28.223.254:8443: connect: no route to host" event="&Event{ObjectMeta:{kube-apiserver-ha-136200.17cb3f0121822433  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-136200,UID:7c76d1401e4a0fd23061e265f50de86b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ha-136200,},FirstTimestamp:2024-05-01 03:15:57.234299955 +0000 UTC m=+1528.186018437,LastTimestamp:2024-05-01 03:15:57.234299955 +0000 UTC m=+1528.186018437,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-136200,}"
	May 01 03:17:53 ha-136200 kubelet[2230]: E0501 03:17:53.911805    2230 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-136200\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-136200?timeout=10s\": dial tcp 172.28.223.254:8443: connect: no route to host"
	May 01 03:17:53 ha-136200 kubelet[2230]: I0501 03:17:53.911886    2230 status_manager.go:853] "Failed to get status for pod" podUID="7c76d1401e4a0fd23061e265f50de86b" pod="kube-system/kube-apiserver-ha-136200" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-136200\": dial tcp 172.28.223.254:8443: connect: no route to host"
	May 01 03:17:56 ha-136200 kubelet[2230]: I0501 03:17:56.983937    2230 status_manager.go:853] "Failed to get status for pod" podUID="55203dc027be5684c0a3d10abb880afb" pod="kube-system/kube-vip-ha-136200" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-vip-ha-136200\": dial tcp 172.28.223.254:8443: connect: no route to host"
	May 01 03:17:56 ha-136200 kubelet[2230]: E0501 03:17:56.984085    2230 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-136200\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-136200?timeout=10s\": dial tcp 172.28.223.254:8443: connect: no route to host"
	May 01 03:17:58 ha-136200 kubelet[2230]: I0501 03:17:58.238134    2230 scope.go:117] "RemoveContainer" containerID="9fd5c4a0cbda89b63d2a390027691706bec218bc5cd7b666d875ba2c542566b9"
	May 01 03:18:00 ha-136200 kubelet[2230]: I0501 03:18:00.055748    2230 status_manager.go:853] "Failed to get status for pod" podUID="c0e605a0-1182-4977-a8ba-fabe9617bd3c" pod="kube-system/kindnet-sj2rc" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kindnet-sj2rc\": dial tcp 172.28.223.254:8443: connect: no route to host"
	May 01 03:18:00 ha-136200 kubelet[2230]: E0501 03:18:00.055929    2230 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-136200\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-136200?timeout=10s\": dial tcp 172.28.223.254:8443: connect: no route to host"
	May 01 03:18:00 ha-136200 kubelet[2230]: W0501 03:18:00.055749    2230 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)kube-root-ca.crt&resourceVersion=4019": dial tcp 172.28.223.254:8443: connect: no route to host
	May 01 03:18:00 ha-136200 kubelet[2230]: E0501 03:18:00.056493    2230 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)kube-root-ca.crt&resourceVersion=4019": dial tcp 172.28.223.254:8443: connect: no route to host
	May 01 03:18:03 ha-136200 kubelet[2230]: E0501 03:18:03.127772    2230 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-136200?timeout=10s\": dial tcp 172.28.223.254:8443: connect: no route to host" interval="7s"
	May 01 03:18:03 ha-136200 kubelet[2230]: W0501 03:18:03.127775    2230 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?resourceVersion=4078": dial tcp 172.28.223.254:8443: connect: no route to host
	May 01 03:18:03 ha-136200 kubelet[2230]: E0501 03:18:03.127929    2230 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?resourceVersion=4078": dial tcp 172.28.223.254:8443: connect: no route to host
	May 01 03:18:03 ha-136200 kubelet[2230]: E0501 03:18:03.128150    2230 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-136200\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-136200?timeout=10s\": dial tcp 172.28.223.254:8443: connect: no route to host"
	May 01 03:18:03 ha-136200 kubelet[2230]: E0501 03:18:03.128189    2230 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	May 01 03:18:03 ha-136200 kubelet[2230]: I0501 03:18:03.128303    2230 status_manager.go:853] "Failed to get status for pod" podUID="ca2bdb41-e7f0-4aa4-b343-0e3a85a4c04e" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 172.28.223.254:8443: connect: no route to host"
	May 01 03:18:03 ha-136200 kubelet[2230]: W0501 03:18:03.128690    2230 reflector.go:547] pkg/kubelet/config/apiserver.go:66: failed to list *v1.Pod: Get "https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%!D(MISSING)ha-136200&resourceVersion=3999": dial tcp 172.28.223.254:8443: connect: no route to host
	May 01 03:18:03 ha-136200 kubelet[2230]: E0501 03:18:03.128854    2230 reflector.go:150] pkg/kubelet/config/apiserver.go:66: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%!D(MISSING)ha-136200&resourceVersion=3999": dial tcp 172.28.223.254:8443: connect: no route to host
	May 01 03:18:06 ha-136200 kubelet[2230]: I0501 03:18:06.200399    2230 status_manager.go:853] "Failed to get status for pod" podUID="7c76d1401e4a0fd23061e265f50de86b" pod="kube-system/kube-apiserver-ha-136200" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-136200\": dial tcp 172.28.223.254:8443: connect: no route to host"
	May 01 03:18:06 ha-136200 kubelet[2230]: W0501 03:18:06.200524    2230 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/configmaps?fieldSelector=metadata.name%!D(MISSING)kube-root-ca.crt&resourceVersion=4054": dial tcp 172.28.223.254:8443: connect: no route to host
	May 01 03:18:06 ha-136200 kubelet[2230]: E0501 03:18:06.200691    2230 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/configmaps?fieldSelector=metadata.name%!D(MISSING)kube-root-ca.crt&resourceVersion=4054": dial tcp 172.28.223.254:8443: connect: no route to host
	May 01 03:18:06 ha-136200 kubelet[2230]: E0501 03:18:06.201076    2230 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events\": dial tcp 172.28.223.254:8443: connect: no route to host" event="&Event{ObjectMeta:{kube-apiserver-ha-136200.17cb3f0121822433  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-136200,UID:7c76d1401e4a0fd23061e265f50de86b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ha-136200,},FirstTimestamp:2024-05-01 03:15:57.234299955 +0000 UTC m=+1528.186018437,LastTimestamp:2024-05-01 03:15:57.234299955 +0000 UTC m=+1528.186018437,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-136200,}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0501 03:17:47.231286    6232 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-136200 -n ha-136200
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-136200 -n ha-136200: exit status 2 (28.1335895s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0501 03:18:08.635398   12468 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-136200" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (223.22s)
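For reference, the recurring "dial tcp 172.28.223.254:8443: connect: no route to host" lines above indicate the control-plane VIP was not routable at all after the restart, which is a different failure mode from a reachable host with nothing listening on 8443 ("connection refused"). The following is a minimal Go sketch (not part of the test suite; the address is taken from the logs above) that reproduces that distinction with a plain TCP dial:

// probe_apiserver.go: a minimal sketch, not minikube code.
// The address below is the control-plane VIP seen in the logs above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "172.28.223.254:8443"
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		// An error wrapping EHOSTUNREACH prints as
		// "connect: no route to host", matching the logs above;
		// ECONNREFUSED would indicate the host is up but no
		// apiserver is listening.
		fmt.Printf("dial %s failed: %v\n", addr, err)
		return
	}
	defer conn.Close()
	fmt.Printf("dial %s succeeded\n", addr)
}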

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (57.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-289800 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-289800 -- exec busybox-fc5497c4f-cc6mk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-289800 -- exec busybox-fc5497c4f-cc6mk -- sh -c "ping -c 1 172.28.208.1"
E0501 03:56:34.995900   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt: The system cannot find the path specified.
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-289800 -- exec busybox-fc5497c4f-cc6mk -- sh -c "ping -c 1 172.28.208.1": exit status 1 (10.5376781s)

                                                
                                                
-- stdout --
	PING 172.28.208.1 (172.28.208.1): 56 data bytes
	
	--- 172.28.208.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0501 03:56:26.764040    2496 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:584: Failed to ping host (172.28.208.1) from pod (busybox-fc5497c4f-cc6mk): exit status 1
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-289800 -- exec busybox-fc5497c4f-tbxxx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-289800 -- exec busybox-fc5497c4f-tbxxx -- sh -c "ping -c 1 172.28.208.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-289800 -- exec busybox-fc5497c4f-tbxxx -- sh -c "ping -c 1 172.28.208.1": exit status 1 (10.536834s)

                                                
                                                
-- stdout --
	PING 172.28.208.1 (172.28.208.1): 56 data bytes
	
	--- 172.28.208.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0501 03:56:37.836205    6628 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:584: Failed to ping host (172.28.208.1) from pod (busybox-fc5497c4f-tbxxx): exit status 1
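Both pods resolve host.minikube.internal but see 100% packet loss pinging the host gateway, so this is pod-to-host ICMP connectivity failing, not DNS. The following Go sketch is an illustrative reproduction of the check, not the actual multinode_test.go code; the profile, pod name, and host IP are taken from the log above, and it assumes the kubectl context is named after the minikube profile:

// ping_check.go: illustrative reproduction of the failing check.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	hostIP := "172.28.208.1" // host-side gateway address from the log above
	cmd := exec.Command("kubectl", "--context", "multinode-289800",
		"exec", "busybox-fc5497c4f-cc6mk", "--",
		"sh", "-c", "ping -c 1 "+hostIP)
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		// 100% packet loss surfaces as a non-zero exit status here,
		// matching the "exit status 1" results above.
		fmt.Printf("ping check failed: %v\n", err)
	}
}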
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-289800 -n multinode-289800
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-289800 -n multinode-289800: (12.1025381s)
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-289800 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-289800 logs -n 25: (8.6144399s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-694500 ssh -- ls                    | mount-start-2-694500 | minikube6\jenkins | v1.33.0 | 01 May 24 03:45 UTC | 01 May 24 03:45 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-1-694500                           | mount-start-1-694500 | minikube6\jenkins | v1.33.0 | 01 May 24 03:45 UTC | 01 May 24 03:45 UTC |
	|         | --alsologtostderr -v=5                            |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-694500 ssh -- ls                    | mount-start-2-694500 | minikube6\jenkins | v1.33.0 | 01 May 24 03:45 UTC | 01 May 24 03:46 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| stop    | -p mount-start-2-694500                           | mount-start-2-694500 | minikube6\jenkins | v1.33.0 | 01 May 24 03:46 UTC | 01 May 24 03:46 UTC |
	| start   | -p mount-start-2-694500                           | mount-start-2-694500 | minikube6\jenkins | v1.33.0 | 01 May 24 03:46 UTC | 01 May 24 03:48 UTC |
	| mount   | C:\Users\jenkins.minikube6:/minikube-host         | mount-start-2-694500 | minikube6\jenkins | v1.33.0 | 01 May 24 03:48 UTC |                     |
	|         | --profile mount-start-2-694500 --v 0              |                      |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip                |                      |                   |         |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid         |                      |                   |         |                     |                     |
	|         |                                                 0 |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-694500 ssh -- ls                    | mount-start-2-694500 | minikube6\jenkins | v1.33.0 | 01 May 24 03:48 UTC | 01 May 24 03:48 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-2-694500                           | mount-start-2-694500 | minikube6\jenkins | v1.33.0 | 01 May 24 03:48 UTC | 01 May 24 03:49 UTC |
	| delete  | -p mount-start-1-694500                           | mount-start-1-694500 | minikube6\jenkins | v1.33.0 | 01 May 24 03:49 UTC | 01 May 24 03:49 UTC |
	| start   | -p multinode-289800                               | multinode-289800     | minikube6\jenkins | v1.33.0 | 01 May 24 03:49 UTC | 01 May 24 03:55 UTC |
	|         | --wait=true --memory=2200                         |                      |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| kubectl | -p multinode-289800 -- apply -f                   | multinode-289800     | minikube6\jenkins | v1.33.0 | 01 May 24 03:56 UTC | 01 May 24 03:56 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |         |                     |                     |
	| kubectl | -p multinode-289800 -- rollout                    | multinode-289800     | minikube6\jenkins | v1.33.0 | 01 May 24 03:56 UTC | 01 May 24 03:56 UTC |
	|         | status deployment/busybox                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-289800 -- get pods -o                | multinode-289800     | minikube6\jenkins | v1.33.0 | 01 May 24 03:56 UTC | 01 May 24 03:56 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-289800 -- get pods -o                | multinode-289800     | minikube6\jenkins | v1.33.0 | 01 May 24 03:56 UTC | 01 May 24 03:56 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-289800 -- exec                       | multinode-289800     | minikube6\jenkins | v1.33.0 | 01 May 24 03:56 UTC | 01 May 24 03:56 UTC |
	|         | busybox-fc5497c4f-cc6mk --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-289800 -- exec                       | multinode-289800     | minikube6\jenkins | v1.33.0 | 01 May 24 03:56 UTC | 01 May 24 03:56 UTC |
	|         | busybox-fc5497c4f-tbxxx --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-289800 -- exec                       | multinode-289800     | minikube6\jenkins | v1.33.0 | 01 May 24 03:56 UTC | 01 May 24 03:56 UTC |
	|         | busybox-fc5497c4f-cc6mk --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-289800 -- exec                       | multinode-289800     | minikube6\jenkins | v1.33.0 | 01 May 24 03:56 UTC | 01 May 24 03:56 UTC |
	|         | busybox-fc5497c4f-tbxxx --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-289800 -- exec                       | multinode-289800     | minikube6\jenkins | v1.33.0 | 01 May 24 03:56 UTC | 01 May 24 03:56 UTC |
	|         | busybox-fc5497c4f-cc6mk -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-289800 -- exec                       | multinode-289800     | minikube6\jenkins | v1.33.0 | 01 May 24 03:56 UTC | 01 May 24 03:56 UTC |
	|         | busybox-fc5497c4f-tbxxx -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-289800 -- get pods -o                | multinode-289800     | minikube6\jenkins | v1.33.0 | 01 May 24 03:56 UTC | 01 May 24 03:56 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-289800 -- exec                       | multinode-289800     | minikube6\jenkins | v1.33.0 | 01 May 24 03:56 UTC | 01 May 24 03:56 UTC |
	|         | busybox-fc5497c4f-cc6mk                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-289800 -- exec                       | multinode-289800     | minikube6\jenkins | v1.33.0 | 01 May 24 03:56 UTC |                     |
	|         | busybox-fc5497c4f-cc6mk -- sh                     |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.28.208.1                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-289800 -- exec                       | multinode-289800     | minikube6\jenkins | v1.33.0 | 01 May 24 03:56 UTC | 01 May 24 03:56 UTC |
	|         | busybox-fc5497c4f-tbxxx                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-289800 -- exec                       | multinode-289800     | minikube6\jenkins | v1.33.0 | 01 May 24 03:56 UTC |                     |
	|         | busybox-fc5497c4f-tbxxx -- sh                     |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.28.208.1                         |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/01 03:49:07
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0501 03:49:07.904233   13472 out.go:291] Setting OutFile to fd 740 ...
	I0501 03:49:07.904858   13472 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:49:07.904858   13472 out.go:304] Setting ErrFile to fd 932...
	I0501 03:49:07.905387   13472 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:49:07.928521   13472 out.go:298] Setting JSON to false
	I0501 03:49:07.933617   13472 start.go:129] hostinfo: {"hostname":"minikube6","uptime":108402,"bootTime":1714426945,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0501 03:49:07.933712   13472 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0501 03:49:07.940169   13472 out.go:177] * [multinode-289800] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0501 03:49:07.943546   13472 notify.go:220] Checking for updates...
	I0501 03:49:07.945609   13472 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0501 03:49:07.947194   13472 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0501 03:49:07.950254   13472 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0501 03:49:07.952946   13472 out.go:177]   - MINIKUBE_LOCATION=18779
	I0501 03:49:07.955328   13472 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0501 03:49:07.964209   13472 driver.go:392] Setting default libvirt URI to qemu:///system
	I0501 03:49:13.404519   13472 out.go:177] * Using the hyperv driver based on user configuration
	I0501 03:49:13.411344   13472 start.go:297] selected driver: hyperv
	I0501 03:49:13.411344   13472 start.go:901] validating driver "hyperv" against <nil>
	I0501 03:49:13.411344   13472 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0501 03:49:13.464608   13472 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0501 03:49:13.465996   13472 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 03:49:13.465996   13472 cni.go:84] Creating CNI manager for ""
	I0501 03:49:13.465996   13472 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0501 03:49:13.465996   13472 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0501 03:49:13.465996   13472 start.go:340] cluster config:
	{Name:multinode-289800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-289800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:49:13.466823   13472 iso.go:125] acquiring lock: {Name:mkc5178610d1c169635b8b232f2713c359020679 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 03:49:13.474169   13472 out.go:177] * Starting "multinode-289800" primary control-plane node in "multinode-289800" cluster
	I0501 03:49:13.477572   13472 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0501 03:49:13.477793   13472 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0501 03:49:13.477899   13472 cache.go:56] Caching tarball of preloaded images
	I0501 03:49:13.477899   13472 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0501 03:49:13.478424   13472 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0501 03:49:13.479026   13472 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\config.json ...
	I0501 03:49:13.479180   13472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\config.json: {Name:mkdea9aed40a114bf7f6a8009e04aab8bbc8acd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:49:13.480493   13472 start.go:360] acquireMachinesLock for multinode-289800: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 03:49:13.480680   13472 start.go:364] duration metric: took 187.3µs to acquireMachinesLock for "multinode-289800"
	I0501 03:49:13.480766   13472 start.go:93] Provisioning new machine with config: &{Name:multinode-289800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-289800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0501 03:49:13.480766   13472 start.go:125] createHost starting for "" (driver="hyperv")
	I0501 03:49:13.484492   13472 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0501 03:49:13.484492   13472 start.go:159] libmachine.API.Create for "multinode-289800" (driver="hyperv")
	I0501 03:49:13.484492   13472 client.go:168] LocalClient.Create starting
	I0501 03:49:13.485515   13472 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0501 03:49:13.485839   13472 main.go:141] libmachine: Decoding PEM data...
	I0501 03:49:13.485839   13472 main.go:141] libmachine: Parsing certificate...
	I0501 03:49:13.485839   13472 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0501 03:49:13.486534   13472 main.go:141] libmachine: Decoding PEM data...
	I0501 03:49:13.486534   13472 main.go:141] libmachine: Parsing certificate...
	I0501 03:49:13.486534   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0501 03:49:15.619111   13472 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0501 03:49:15.619111   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:49:15.619111   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0501 03:49:17.451962   13472 main.go:141] libmachine: [stdout =====>] : False
	
	I0501 03:49:17.451962   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:49:17.451962   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0501 03:49:19.025460   13472 main.go:141] libmachine: [stdout =====>] : True
	
	I0501 03:49:19.025517   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:49:19.025517   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0501 03:49:22.633641   13472 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0501 03:49:22.633641   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:49:22.636568   13472 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso...
	I0501 03:49:23.210073   13472 main.go:141] libmachine: Creating SSH key...
	I0501 03:49:23.279336   13472 main.go:141] libmachine: Creating VM...
	I0501 03:49:23.279336   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0501 03:49:26.147452   13472 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0501 03:49:26.147452   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:49:26.147452   13472 main.go:141] libmachine: Using switch "Default Switch"
	I0501 03:49:26.148290   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0501 03:49:27.942748   13472 main.go:141] libmachine: [stdout =====>] : True
	
	I0501 03:49:28.075401   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:49:28.076721   13472 main.go:141] libmachine: Creating VHD
	I0501 03:49:28.076838   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-289800\fixed.vhd' -SizeBytes 10MB -Fixed
	I0501 03:49:31.729418   13472 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-289800\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : AB26DB29-EF2D-43E0-9A80-A36BC0AED88F
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0501 03:49:31.729560   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:49:31.729560   13472 main.go:141] libmachine: Writing magic tar header
	I0501 03:49:31.729682   13472 main.go:141] libmachine: Writing SSH key tar header
	I0501 03:49:31.739878   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-289800\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-289800\disk.vhd' -VHDType Dynamic -DeleteSource
	I0501 03:49:34.924338   13472 main.go:141] libmachine: [stdout =====>] : 
	I0501 03:49:34.924642   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:49:34.924642   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-289800\disk.vhd' -SizeBytes 20000MB
	I0501 03:49:37.456220   13472 main.go:141] libmachine: [stdout =====>] : 
	I0501 03:49:37.456220   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:49:37.456220   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-289800 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-289800' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0501 03:49:41.123035   13472 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-289800 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0501 03:49:41.123035   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:49:41.123793   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-289800 -DynamicMemoryEnabled $false
	I0501 03:49:43.364994   13472 main.go:141] libmachine: [stdout =====>] : 
	I0501 03:49:43.365047   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:49:43.365047   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-289800 -Count 2
	I0501 03:49:45.499295   13472 main.go:141] libmachine: [stdout =====>] : 
	I0501 03:49:45.499295   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:49:45.499295   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-289800 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-289800\boot2docker.iso'
	I0501 03:49:48.125677   13472 main.go:141] libmachine: [stdout =====>] : 
	I0501 03:49:48.125677   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:49:48.125677   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-289800 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-289800\disk.vhd'
	I0501 03:49:50.846583   13472 main.go:141] libmachine: [stdout =====>] : 
	I0501 03:49:50.847582   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:49:50.847582   13472 main.go:141] libmachine: Starting VM...
	I0501 03:49:50.847746   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-289800
	I0501 03:49:53.976114   13472 main.go:141] libmachine: [stdout =====>] : 
	I0501 03:49:53.976114   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:49:53.976114   13472 main.go:141] libmachine: Waiting for host to start...
	I0501 03:49:53.976114   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 03:49:56.219122   13472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:49:56.219168   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:49:56.219278   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 03:49:58.685644   13472 main.go:141] libmachine: [stdout =====>] : 
	I0501 03:49:58.685644   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:49:59.694820   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 03:50:01.845988   13472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:50:01.846813   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:50:01.846813   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 03:50:04.376989   13472 main.go:141] libmachine: [stdout =====>] : 
	I0501 03:50:04.376989   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:50:05.386159   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 03:50:07.596629   13472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:50:07.596629   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:50:07.596629   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 03:50:10.123077   13472 main.go:141] libmachine: [stdout =====>] : 
	I0501 03:50:10.123077   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:50:11.126513   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 03:50:13.325926   13472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:50:13.326018   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:50:13.326018   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 03:50:15.824318   13472 main.go:141] libmachine: [stdout =====>] : 
	I0501 03:50:15.824353   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:50:16.828333   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 03:50:19.015742   13472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:50:19.016069   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:50:19.016265   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 03:50:21.648718   13472 main.go:141] libmachine: [stdout =====>] : 172.28.209.152
	
	I0501 03:50:21.648718   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:50:21.648718   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 03:50:23.742586   13472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:50:23.742586   13472 main.go:141] libmachine: [stderr =====>] : 
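The repeated ( Hyper-V\Get-VM ... ).state / ipaddresses[0] pairs above are a wait loop: Hyper-V only reports a guest IP once the VM's integration services come up, so the driver keeps polling until the address field is non-empty (five probes here, roughly 27 seconds). A minimal sketch of that loop using plain os/exec, with illustrative names:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // ps runs a single PowerShell expression and returns trimmed stdout.
    func ps(expr string) (string, error) {
        out, err := exec.Command("powershell.exe",
            "-NoProfile", "-NonInteractive", expr).Output()
        return strings.TrimSpace(string(out)), err
    }

    // waitForIP polls the VM until Hyper-V reports a guest IP address.
    func waitForIP(vm string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            state, err := ps(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vm))
            if err != nil || state != "Running" {
                time.Sleep(time.Second)
                continue
            }
            ip, _ := ps(fmt.Sprintf(
                "(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
            if ip != "" {
                return ip, nil // integration services are up
            }
            time.Sleep(time.Second)
        }
        return "", fmt.Errorf("timed out waiting for %s to report an IP", vm)
    }

    func main() {
        ip, err := waitForIP("multinode-289800", 3*time.Minute)
        fmt.Println(ip, err)
    }

Each probe pays about two seconds of PowerShell startup cost, which is why this wait dominates the VM-creation part of the timeline.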
	I0501 03:50:23.742586   13472 machine.go:94] provisionDockerMachine start ...
	I0501 03:50:23.743977   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 03:50:25.902357   13472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:50:25.902357   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:50:25.902357   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 03:50:28.417304   13472 main.go:141] libmachine: [stdout =====>] : 172.28.209.152
	
	I0501 03:50:28.417922   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:50:28.423735   13472 main.go:141] libmachine: Using SSH client type: native
	I0501 03:50:28.436946   13472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.209.152 22 <nil> <nil>}
	I0501 03:50:28.436946   13472 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 03:50:28.579287   13472 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0501 03:50:28.579287   13472 buildroot.go:166] provisioning hostname "multinode-289800"
	I0501 03:50:28.579287   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 03:50:30.664525   13472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:50:30.664525   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:50:30.664525   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 03:50:33.241541   13472 main.go:141] libmachine: [stdout =====>] : 172.28.209.152
	
	I0501 03:50:33.241541   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:50:33.248588   13472 main.go:141] libmachine: Using SSH client type: native
	I0501 03:50:33.248933   13472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.209.152 22 <nil> <nil>}
	I0501 03:50:33.248933   13472 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-289800 && echo "multinode-289800" | sudo tee /etc/hostname
	I0501 03:50:33.416846   13472 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-289800
	
	I0501 03:50:33.416846   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 03:50:35.489121   13472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:50:35.489121   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:50:35.489121   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 03:50:38.034122   13472 main.go:141] libmachine: [stdout =====>] : 172.28.209.152
	
	I0501 03:50:38.034122   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:50:38.040199   13472 main.go:141] libmachine: Using SSH client type: native
	I0501 03:50:38.040795   13472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.209.152 22 <nil> <nil>}
	I0501 03:50:38.040795   13472 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-289800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-289800/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-289800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 03:50:38.194598   13472 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 03:50:38.194598   13472 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0501 03:50:38.194598   13472 buildroot.go:174] setting up certificates
	I0501 03:50:38.194598   13472 provision.go:84] configureAuth start
	I0501 03:50:38.194598   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 03:50:40.313442   13472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:50:40.314170   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:50:40.314251   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 03:50:42.844693   13472 main.go:141] libmachine: [stdout =====>] : 172.28.209.152
	
	I0501 03:50:42.844944   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:50:42.844944   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 03:50:44.918012   13472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:50:44.918012   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:50:44.918711   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 03:50:47.449985   13472 main.go:141] libmachine: [stdout =====>] : 172.28.209.152
	
	I0501 03:50:47.449985   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:50:47.449985   13472 provision.go:143] copyHostCerts
	I0501 03:50:47.450233   13472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0501 03:50:47.450307   13472 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0501 03:50:47.450307   13472 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0501 03:50:47.450902   13472 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0501 03:50:47.451716   13472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0501 03:50:47.452344   13472 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0501 03:50:47.452437   13472 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0501 03:50:47.452437   13472 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0501 03:50:47.453137   13472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0501 03:50:47.453771   13472 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0501 03:50:47.453771   13472 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0501 03:50:47.453771   13472 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0501 03:50:47.455084   13472 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-289800 san=[127.0.0.1 172.28.209.152 localhost minikube multinode-289800]
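provision.go:117 above issues a Docker TLS server certificate signed by the local minikube CA, with SANs covering 127.0.0.1, the VM's Hyper-V address, and the host names. A compressed sketch of such an issuance with crypto/x509 (key size, validity, and the throwaway self-signed CA in main are illustrative, not minikube's values):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    // issueServerCert signs a server certificate for the given SANs with the CA.
    func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey,
        org string, sans []string) ([]byte, *rsa.PrivateKey, error) {

        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{org}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        // Sort each SAN into the right x509 field: IP addresses vs DNS names.
        for _, san := range sans {
            if ip := net.ParseIP(san); ip != nil {
                tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
            } else {
                tmpl.DNSNames = append(tmpl.DNSNames, san)
            }
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        return der, key, err
    }

    func main() {
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        der, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        ca, _ := x509.ParseCertificate(der)
        cert, _, err := issueServerCert(ca, caKey, "jenkins.multinode-289800",
            []string{"127.0.0.1", "172.28.209.152", "localhost", "minikube", "multinode-289800"})
        fmt.Println(len(cert), err)
    }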
	I0501 03:50:48.095666   13472 provision.go:177] copyRemoteCerts
	I0501 03:50:48.109929   13472 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 03:50:48.109929   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 03:50:50.257772   13472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:50:50.258268   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:50:50.258322   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 03:50:52.863512   13472 main.go:141] libmachine: [stdout =====>] : 172.28.209.152
	
	I0501 03:50:52.864300   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:50:52.864370   13472 sshutil.go:53] new ssh client: &{IP:172.28.209.152 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-289800\id_rsa Username:docker}
	I0501 03:50:52.988063   13472 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.877107s)
	I0501 03:50:52.988123   13472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0501 03:50:52.988703   13472 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 03:50:53.043668   13472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0501 03:50:53.044164   13472 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0501 03:50:53.093562   13472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0501 03:50:53.093562   13472 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0501 03:50:53.151156   13472 provision.go:87] duration metric: took 14.9563442s to configureAuth
	I0501 03:50:53.151156   13472 buildroot.go:189] setting minikube options for container-runtime
	I0501 03:50:53.151817   13472 config.go:182] Loaded profile config "multinode-289800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 03:50:53.151921   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 03:50:55.249989   13472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:50:55.250087   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:50:55.250413   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 03:50:57.798521   13472 main.go:141] libmachine: [stdout =====>] : 172.28.209.152
	
	I0501 03:50:57.798521   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:50:57.804262   13472 main.go:141] libmachine: Using SSH client type: native
	I0501 03:50:57.805114   13472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.209.152 22 <nil> <nil>}
	I0501 03:50:57.805114   13472 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0501 03:50:57.937927   13472 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0501 03:50:57.937927   13472 buildroot.go:70] root file system type: tmpfs
	I0501 03:50:57.937927   13472 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0501 03:50:57.937927   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 03:51:00.040574   13472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:51:00.040574   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:51:00.040574   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 03:51:02.607074   13472 main.go:141] libmachine: [stdout =====>] : 172.28.209.152
	
	I0501 03:51:02.607074   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:51:02.616951   13472 main.go:141] libmachine: Using SSH client type: native
	I0501 03:51:02.618261   13472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.209.152 22 <nil> <nil>}
	I0501 03:51:02.618261   13472 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0501 03:51:02.795221   13472 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0501 03:51:02.795391   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 03:51:04.905188   13472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:51:04.905188   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:51:04.906176   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 03:51:07.459748   13472 main.go:141] libmachine: [stdout =====>] : 172.28.209.152
	
	I0501 03:51:07.460793   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:51:07.466708   13472 main.go:141] libmachine: Using SSH client type: native
	I0501 03:51:07.467010   13472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.209.152 22 <nil> <nil>}
	I0501 03:51:07.467010   13472 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0501 03:51:09.682710   13472 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
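The diff-or-replace one-liner above is the idempotent unit install: if the rendered docker.service matches what is already on disk, diff exits 0 and the || branch never runs; only on a difference (or, as here, when the file does not exist yet) is the .new file moved into place followed by daemon-reload, enable, and restart. A sketch of assembling that command string (helper name is illustrative, not minikube's):

    package main

    import "fmt"

    // installUnitCmd returns a shell command that replaces a systemd unit only
    // when its content actually changed, restarting the service in that case.
    func installUnitCmd(unit string) string {
        path := "/lib/systemd/system/" + unit
        return fmt.Sprintf(
            "sudo diff -u %[1]s %[1]s.new || "+
                "{ sudo mv %[1]s.new %[1]s; "+
                "sudo systemctl -f daemon-reload && "+
                "sudo systemctl -f enable %[2]s && "+
                "sudo systemctl -f restart %[2]s; }",
            path, unit)
    }

    func main() {
        // Prints the same shape of command the log shows for docker.service.
        fmt.Println(installUnitCmd("docker.service"))
    }

Running it unconditionally on every provision is cheap: an unchanged unit costs one diff and no docker restart.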
	
	I0501 03:51:09.682811   13472 machine.go:97] duration metric: took 45.9397787s to provisionDockerMachine
	I0501 03:51:09.682811   13472 client.go:171] duration metric: took 1m56.1974521s to LocalClient.Create
	I0501 03:51:09.682811   13472 start.go:167] duration metric: took 1m56.1974521s to libmachine.API.Create "multinode-289800"
	I0501 03:51:09.682811   13472 start.go:293] postStartSetup for "multinode-289800" (driver="hyperv")
	I0501 03:51:09.682943   13472 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 03:51:09.698474   13472 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 03:51:09.699472   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 03:51:11.817214   13472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:51:11.817839   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:51:11.817839   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 03:51:14.387489   13472 main.go:141] libmachine: [stdout =====>] : 172.28.209.152
	
	I0501 03:51:14.388426   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:51:14.389108   13472 sshutil.go:53] new ssh client: &{IP:172.28.209.152 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-289800\id_rsa Username:docker}
	I0501 03:51:14.501142   13472 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8010659s)
	I0501 03:51:14.515318   13472 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 03:51:14.523133   13472 command_runner.go:130] > NAME=Buildroot
	I0501 03:51:14.523133   13472 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0501 03:51:14.523381   13472 command_runner.go:130] > ID=buildroot
	I0501 03:51:14.523381   13472 command_runner.go:130] > VERSION_ID=2023.02.9
	I0501 03:51:14.523381   13472 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0501 03:51:14.523618   13472 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 03:51:14.523618   13472 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0501 03:51:14.524140   13472 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0501 03:51:14.524912   13472 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> 142882.pem in /etc/ssl/certs
	I0501 03:51:14.524912   13472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /etc/ssl/certs/142882.pem
	I0501 03:51:14.538509   13472 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 03:51:14.558631   13472 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /etc/ssl/certs/142882.pem (1708 bytes)
	I0501 03:51:14.621581   13472 start.go:296] duration metric: took 4.9387332s for postStartSetup
	I0501 03:51:14.624910   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 03:51:16.728888   13472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:51:16.728957   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:51:16.728957   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 03:51:19.319760   13472 main.go:141] libmachine: [stdout =====>] : 172.28.209.152
	
	I0501 03:51:19.320632   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:51:19.320698   13472 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\config.json ...
	I0501 03:51:19.323586   13472 start.go:128] duration metric: took 2m5.8418805s to createHost
	I0501 03:51:19.323586   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 03:51:21.456507   13472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:51:21.456507   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:51:21.457093   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 03:51:24.013564   13472 main.go:141] libmachine: [stdout =====>] : 172.28.209.152
	
	I0501 03:51:24.013564   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:51:24.019941   13472 main.go:141] libmachine: Using SSH client type: native
	I0501 03:51:24.020525   13472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.209.152 22 <nil> <nil>}
	I0501 03:51:24.020525   13472 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0501 03:51:24.156983   13472 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714535484.154161468
	
	I0501 03:51:24.156983   13472 fix.go:216] guest clock: 1714535484.154161468
	I0501 03:51:24.156983   13472 fix.go:229] Guest: 2024-05-01 03:51:24.154161468 +0000 UTC Remote: 2024-05-01 03:51:19.3235861 +0000 UTC m=+131.613740801 (delta=4.830575368s)
	I0501 03:51:24.156983   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 03:51:26.235443   13472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:51:26.235443   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:51:26.235745   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 03:51:28.738596   13472 main.go:141] libmachine: [stdout =====>] : 172.28.209.152
	
	I0501 03:51:28.738659   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:51:28.744504   13472 main.go:141] libmachine: Using SSH client type: native
	I0501 03:51:28.745170   13472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.209.152 22 <nil> <nil>}
	I0501 03:51:28.745228   13472 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714535484
	I0501 03:51:28.905468   13472 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed May  1 03:51:24 UTC 2024
	
	I0501 03:51:28.905468   13472 fix.go:236] clock set: Wed May  1 03:51:24 UTC 2024
	 (err=<nil>)
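fix.go above compares the guest clock (date +%s.%N over SSH) with the host clock and, since the 4.83s delta exceeded the tolerance, resets the guest with sudo date -s @<epoch>. A minimal sketch of that decision, assuming the guest time arrives as a seconds.nanoseconds string; the tolerance value, and which side's clock wins, are illustrative:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // clockFixCmd returns the command to run in the guest if its clock has
    // drifted more than tolerance from the reference clock, or "" if it is fine.
    func clockFixCmd(guestOut string, ref time.Time, tolerance time.Duration) (string, error) {
        parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return "", err
        }
        guest := time.Unix(sec, 0)
        if d := guest.Sub(ref); d < -tolerance || d > tolerance {
            // Align the guest to the reference clock (direction is illustrative).
            return fmt.Sprintf("sudo date -s @%d", ref.Unix()), nil
        }
        return "", nil
    }

    func main() {
        cmd, _ := clockFixCmd("1714535484.154161468", time.Now(), 2*time.Second)
        fmt.Println(cmd) // empty unless the drift exceeds the tolerance
    }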
	I0501 03:51:28.905468   13472 start.go:83] releasing machines lock for "multinode-289800", held for 2m15.4237769s
	I0501 03:51:28.905468   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 03:51:30.975916   13472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:51:30.975916   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:51:30.976007   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 03:51:33.516990   13472 main.go:141] libmachine: [stdout =====>] : 172.28.209.152
	
	I0501 03:51:33.517205   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:51:33.521126   13472 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 03:51:33.521190   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 03:51:33.533046   13472 ssh_runner.go:195] Run: cat /version.json
	I0501 03:51:33.533046   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 03:51:35.683846   13472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:51:35.683938   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:51:35.684110   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 03:51:35.730209   13472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:51:35.730209   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:51:35.730209   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 03:51:38.298144   13472 main.go:141] libmachine: [stdout =====>] : 172.28.209.152
	
	I0501 03:51:38.298210   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:51:38.298265   13472 sshutil.go:53] new ssh client: &{IP:172.28.209.152 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-289800\id_rsa Username:docker}
	I0501 03:51:38.324521   13472 main.go:141] libmachine: [stdout =====>] : 172.28.209.152
	
	I0501 03:51:38.324521   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:51:38.325101   13472 sshutil.go:53] new ssh client: &{IP:172.28.209.152 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-289800\id_rsa Username:docker}
	I0501 03:51:38.513486   13472 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0501 03:51:38.514462   13472 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.9932339s)
	I0501 03:51:38.514655   13472 command_runner.go:130] > {"iso_version": "v1.33.0-1714498396-18779", "kicbase_version": "v0.0.43-1714386659-18769", "minikube_version": "v1.33.0", "commit": "0c7995ab2d4914d5c74027eee5f5d102e19316f2"}
	I0501 03:51:38.514655   13472 ssh_runner.go:235] Completed: cat /version.json: (4.9815718s)
	I0501 03:51:38.529245   13472 ssh_runner.go:195] Run: systemctl --version
	I0501 03:51:38.539249   13472 command_runner.go:130] > systemd 252 (252)
	I0501 03:51:38.539249   13472 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0501 03:51:38.556042   13472 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0501 03:51:38.564944   13472 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0501 03:51:38.565532   13472 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 03:51:38.580402   13472 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 03:51:38.609433   13472 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0501 03:51:38.609804   13472 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
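The find ... -exec mv pass above neutralizes any CNI configs baked into the ISO (here 87-podman-bridge.conflist) by renaming them with a .mk_disabled suffix, so only the network plugin configured later is active. A local sketch of the same rename pass, assuming direct filesystem access instead of the SSH runner:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        // Mirror the provisioner's -name *bridge* -or -name *podman* match.
        for _, pat := range []string{"*bridge*", "*podman*"} {
            matches, _ := filepath.Glob(filepath.Join("/etc/cni/net.d", pat))
            for _, m := range matches {
                if strings.HasSuffix(m, ".mk_disabled") {
                    continue // already disabled on a previous run
                }
                if err := os.Rename(m, m+".mk_disabled"); err == nil {
                    fmt.Println("disabled", m)
                }
            }
        }
    }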
	I0501 03:51:38.609908   13472 start.go:494] detecting cgroup driver to use...
	I0501 03:51:38.610173   13472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 03:51:38.643972   13472 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0501 03:51:38.657537   13472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0501 03:51:38.693029   13472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0501 03:51:38.713631   13472 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0501 03:51:38.728659   13472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0501 03:51:38.761811   13472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 03:51:38.795491   13472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0501 03:51:38.831889   13472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 03:51:38.866496   13472 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 03:51:38.901928   13472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0501 03:51:38.934887   13472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0501 03:51:38.966936   13472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0501 03:51:39.000456   13472 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 03:51:39.020352   13472 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0501 03:51:39.036753   13472 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 03:51:39.076706   13472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:51:39.302617   13472 ssh_runner.go:195] Run: sudo systemctl restart containerd
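The run of sed commands above rewrites /etc/containerd/config.toml in place even though docker will be the runtime: the sandbox (pause) image is pinned, SystemdCgroup is forced to false to match the cluster's cgroupfs driver, and conf_dir is pointed at /etc/cni/net.d, after which containerd is restarted. A sketch of driving such an edit list, using a local sh as a stand-in for the SSH runner:

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Each entry mirrors one in-place edit from the provisioning log.
        edits := []string{
            `sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml`,
            `sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml`,
            `sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml`,
            `sudo systemctl daemon-reload`,
            `sudo systemctl restart containerd`,
        }
        for _, e := range edits {
            if out, err := exec.Command("sh", "-c", e).CombinedOutput(); err != nil {
                log.Fatalf("%s: %v\n%s", e, err, out)
            }
        }
    }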
	I0501 03:51:39.339402   13472 start.go:494] detecting cgroup driver to use...
	I0501 03:51:39.355916   13472 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0501 03:51:39.382283   13472 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0501 03:51:39.382283   13472 command_runner.go:130] > [Unit]
	I0501 03:51:39.382283   13472 command_runner.go:130] > Description=Docker Application Container Engine
	I0501 03:51:39.382283   13472 command_runner.go:130] > Documentation=https://docs.docker.com
	I0501 03:51:39.382698   13472 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0501 03:51:39.382698   13472 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0501 03:51:39.382698   13472 command_runner.go:130] > StartLimitBurst=3
	I0501 03:51:39.382698   13472 command_runner.go:130] > StartLimitIntervalSec=60
	I0501 03:51:39.382698   13472 command_runner.go:130] > [Service]
	I0501 03:51:39.382800   13472 command_runner.go:130] > Type=notify
	I0501 03:51:39.382800   13472 command_runner.go:130] > Restart=on-failure
	I0501 03:51:39.382800   13472 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0501 03:51:39.382800   13472 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0501 03:51:39.382873   13472 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0501 03:51:39.382873   13472 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0501 03:51:39.382873   13472 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0501 03:51:39.382873   13472 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0501 03:51:39.382873   13472 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0501 03:51:39.382941   13472 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0501 03:51:39.383003   13472 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0501 03:51:39.383029   13472 command_runner.go:130] > ExecStart=
	I0501 03:51:39.383054   13472 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0501 03:51:39.383054   13472 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0501 03:51:39.383128   13472 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0501 03:51:39.383171   13472 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0501 03:51:39.383189   13472 command_runner.go:130] > LimitNOFILE=infinity
	I0501 03:51:39.383218   13472 command_runner.go:130] > LimitNPROC=infinity
	I0501 03:51:39.383218   13472 command_runner.go:130] > LimitCORE=infinity
	I0501 03:51:39.383218   13472 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0501 03:51:39.383218   13472 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0501 03:51:39.383218   13472 command_runner.go:130] > TasksMax=infinity
	I0501 03:51:39.383218   13472 command_runner.go:130] > TimeoutStartSec=0
	I0501 03:51:39.383292   13472 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0501 03:51:39.383292   13472 command_runner.go:130] > Delegate=yes
	I0501 03:51:39.383316   13472 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0501 03:51:39.383316   13472 command_runner.go:130] > KillMode=process
	I0501 03:51:39.383316   13472 command_runner.go:130] > [Install]
	I0501 03:51:39.383316   13472 command_runner.go:130] > WantedBy=multi-user.target
	I0501 03:51:39.396791   13472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 03:51:39.435731   13472 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 03:51:39.486716   13472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 03:51:39.527205   13472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 03:51:39.568917   13472 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0501 03:51:39.634434   13472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 03:51:39.659131   13472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 03:51:39.697374   13472 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0501 03:51:39.713166   13472 ssh_runner.go:195] Run: which cri-dockerd
	I0501 03:51:39.720018   13472 command_runner.go:130] > /usr/bin/cri-dockerd
	I0501 03:51:39.735940   13472 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0501 03:51:39.755547   13472 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0501 03:51:39.807545   13472 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0501 03:51:40.029304   13472 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0501 03:51:40.258959   13472 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0501 03:51:40.259120   13472 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0501 03:51:40.309455   13472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:51:40.545819   13472 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0501 03:51:43.105063   13472 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5592249s)
	I0501 03:51:43.119527   13472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0501 03:51:43.160071   13472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0501 03:51:43.198300   13472 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0501 03:51:43.412564   13472 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0501 03:51:43.629440   13472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:51:43.845310   13472 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0501 03:51:43.893732   13472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0501 03:51:43.934611   13472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:51:44.139313   13472 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0501 03:51:44.269228   13472 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0501 03:51:44.284323   13472 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0501 03:51:44.294625   13472 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0501 03:51:44.294625   13472 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0501 03:51:44.294625   13472 command_runner.go:130] > Device: 0,22	Inode: 888         Links: 1
	I0501 03:51:44.294625   13472 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0501 03:51:44.294625   13472 command_runner.go:130] > Access: 2024-05-01 03:51:44.170348890 +0000
	I0501 03:51:44.294625   13472 command_runner.go:130] > Modify: 2024-05-01 03:51:44.170348890 +0000
	I0501 03:51:44.294625   13472 command_runner.go:130] > Change: 2024-05-01 03:51:44.175348898 +0000
	I0501 03:51:44.294625   13472 command_runner.go:130] >  Birth: -
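"Will wait 60s for socket path" above is a readiness gate: after cri-docker restarts, startup blocks until /var/run/cri-dockerd.sock exists and is a socket (the stat output confirming it appears right above). A local-filesystem sketch of such a gate; the real check runs stat over SSH:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until path exists and is a unix socket.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("socket %s did not appear within %v", path, timeout)
    }

    func main() {
        if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
            fmt.Println(err)
        }
    }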
	I0501 03:51:44.295042   13472 start.go:562] Will wait 60s for crictl version
	I0501 03:51:44.309593   13472 ssh_runner.go:195] Run: which crictl
	I0501 03:51:44.317822   13472 command_runner.go:130] > /usr/bin/crictl
	I0501 03:51:44.332247   13472 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 03:51:44.396888   13472 command_runner.go:130] > Version:  0.1.0
	I0501 03:51:44.396888   13472 command_runner.go:130] > RuntimeName:  docker
	I0501 03:51:44.396888   13472 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0501 03:51:44.396888   13472 command_runner.go:130] > RuntimeApiVersion:  v1
	I0501 03:51:44.396888   13472 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0501 03:51:44.407782   13472 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0501 03:51:44.445758   13472 command_runner.go:130] > 26.0.2
	I0501 03:51:44.456471   13472 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0501 03:51:44.488720   13472 command_runner.go:130] > 26.0.2
	I0501 03:51:44.493385   13472 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0501 03:51:44.493414   13472 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0501 03:51:44.497992   13472 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0501 03:51:44.497992   13472 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0501 03:51:44.497992   13472 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0501 03:51:44.497992   13472 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:88:d7:f1 Flags:up|broadcast|multicast|running}
	I0501 03:51:44.500398   13472 ip.go:210] interface addr: fe80::916c:67e8:6e10:a19b/64
	I0501 03:51:44.500398   13472 ip.go:210] interface addr: 172.28.208.1/20
	I0501 03:51:44.514725   13472 ssh_runner.go:195] Run: grep 172.28.208.1	host.minikube.internal$ /etc/hosts
	I0501 03:51:44.521389   13472 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
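The bash one-liner above pins host.minikube.internal to the host-side vEthernet gateway (172.28.208.1): it filters any stale entry out of /etc/hosts, appends the fresh one, and replaces the file wholesale rather than editing it in place. The same idea in plain Go, for illustration (the function name and paths are ours, and an atomic rename stands in for the log's cp):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // setHostsEntry rewrites hostsPath so that name resolves to ip, dropping any
    // previous line for name first. Writing a temp file and renaming it into
    // place means readers never observe a half-written file.
    func setHostsEntry(hostsPath, ip, name string) error {
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
        tmp := hostsPath + ".tmp"
        if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            return err
        }
        return os.Rename(tmp, hostsPath)
    }

    func main() {
        if err := setHostsEntry("/etc/hosts", "172.28.208.1", "host.minikube.internal"); err != nil {
            fmt.Println(err)
        }
    }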
	I0501 03:51:44.546986   13472 kubeadm.go:877] updating cluster {Name:multinode-289800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-289800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.209.152 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0501 03:51:44.547606   13472 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0501 03:51:44.557654   13472 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0501 03:51:44.582910   13472 docker.go:685] Got preloaded images: 
	I0501 03:51:44.582910   13472 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0501 03:51:44.597351   13472 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0501 03:51:44.619564   13472 command_runner.go:139] > {"Repositories":{}}
	I0501 03:51:44.633176   13472 ssh_runner.go:195] Run: which lz4
	I0501 03:51:44.639661   13472 command_runner.go:130] > /usr/bin/lz4
	I0501 03:51:44.639661   13472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0501 03:51:44.663596   13472 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0501 03:51:44.671265   13472 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0501 03:51:44.671539   13472 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0501 03:51:44.671746   13472 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0501 03:51:46.465174   13472 docker.go:649] duration metric: took 1.8146507s to copy over tarball
	I0501 03:51:46.482920   13472 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0501 03:51:55.411495   13472 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.9285089s)
	I0501 03:51:55.411555   13472 ssh_runner.go:146] rm: /preloaded.tar.lz4
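This is the image preload fast path: rather than pulling the eight control-plane images individually, a ~360MB lz4 tarball of the docker image store is scp'd to /preloaded.tar.lz4, unpacked into /var, and deleted. A sketch of the extraction step as run above; --xattrs-include keeps the security.capability attributes recorded in the tarball:

    package main

    import (
        "log"
        "os/exec"
        "time"
    )

    func main() {
        start := time.Now()
        // -I lz4 streams decompression through lz4; -C /var unpacks the
        // tarball's var/lib/docker layout directly over the image store.
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("extract failed: %v\n%s", err, out)
        }
        log.Printf("extracted preload in %s", time.Since(start))
    }

The repositories.json rewrite that follows is what makes docker recognize the unpacked layers as the preloaded images.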
	I0501 03:51:55.480902   13472 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0501 03:51:55.501313   13472 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.0":"sha256:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0","registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3":"sha256:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.0":"sha256:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b","registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe":"sha256:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.0":"sha256:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b","registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210":"sha256:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.0":"sha256:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced","registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67":"sha256:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0501 03:51:55.501313   13472 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0501 03:51:55.549533   13472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:51:55.782626   13472 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0501 03:51:59.194342   13472 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.4116904s)
	I0501 03:51:59.204484   13472 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0501 03:51:59.231298   13472 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0501 03:51:59.231298   13472 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0501 03:51:59.231298   13472 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0501 03:51:59.231298   13472 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0501 03:51:59.231298   13472 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0501 03:51:59.231298   13472 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0501 03:51:59.231298   13472 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0501 03:51:59.231298   13472 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:51:59.232304   13472 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0501 03:51:59.232304   13472 cache_images.go:84] Images are preloaded, skipping loading
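
The "Images are preloaded" conclusion above is a straight string comparison: docker images --format {{.Repository}}:{{.Tag}} is re-run after the docker restart and its output is checked against the expected image set. A hedged sketch of that check; the expected list is copied from the log, and the overall shape (not the exact code) follows what cache_images.go reports:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // expected mirrors the preloaded image list printed in the log above.
    var expected = []string{
    	"registry.k8s.io/kube-apiserver:v1.30.0",
    	"registry.k8s.io/kube-controller-manager:v1.30.0",
    	"registry.k8s.io/kube-scheduler:v1.30.0",
    	"registry.k8s.io/kube-proxy:v1.30.0",
    	"registry.k8s.io/etcd:3.5.12-0",
    	"registry.k8s.io/coredns/coredns:v1.11.1",
    	"registry.k8s.io/pause:3.9",
    	"gcr.io/k8s-minikube/storage-provisioner:v5",
    }

    func main() {
    	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
    	if err != nil {
    		fmt.Println("docker not reachable:", err)
    		return
    	}
    	have := map[string]bool{}
    	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		have[line] = true
    	}
    	for _, img := range expected {
    		if !have[img] {
    			fmt.Println("not preloaded:", img) // would force a cache load instead of a skip
    		}
    	}
    }
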
	I0501 03:51:59.232304   13472 kubeadm.go:928] updating node { 172.28.209.152 8443 v1.30.0 docker true true} ...
	I0501 03:51:59.232304   13472 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-289800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.209.152
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-289800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
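
The [Unit]/[Service] fragment above is rendered into the systemd drop-in that is scp'd a few lines below (the 317-byte 10-kubeadm.conf). A minimal text/template sketch of that rendering; the template string is paraphrased from the logged output, not taken from minikube's source:

    package main

    import (
    	"os"
    	"text/template"
    )

    const dropIn = `[Unit]
    Wants=docker.socket

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

    [Install]
    `

    func main() {
    	t := template.Must(template.New("kubelet").Parse(dropIn))
    	// Values copied from the log; rendered to stdout here instead of
    	// /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.
    	t.Execute(os.Stdout, map[string]string{
    		"Version": "v1.30.0",
    		"Node":    "multinode-289800",
    		"IP":      "172.28.209.152",
    	})
    }
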
	I0501 03:51:59.242296   13472 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0501 03:51:59.276820   13472 command_runner.go:130] > cgroupfs
	I0501 03:51:59.277199   13472 cni.go:84] Creating CNI manager for ""
	I0501 03:51:59.277199   13472 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0501 03:51:59.277199   13472 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0501 03:51:59.277199   13472 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.28.209.152 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-289800 NodeName:multinode-289800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.28.209.152"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.28.209.152 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0501 03:51:59.277199   13472 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.28.209.152
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-289800"
	  kubeletExtraArgs:
	    node-ip: 172.28.209.152
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.28.209.152"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
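One detail worth noting in the generated config: cgroupDriver: cgroupfs in the KubeletConfiguration matches the docker info --format {{.CgroupDriver}} probe a few lines earlier, and a mismatch between the two is a classic kubelet start failure. A small sketch of that consistency check, assuming docker is on PATH:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
    	if err != nil {
    		fmt.Println("docker info failed:", err)
    		return
    	}
    	driver := strings.TrimSpace(string(out)) // "cgroupfs" in the run above
    	const kubeletDriver = "cgroupfs"         // value written into the KubeletConfiguration
    	if driver != kubeletDriver {
    		fmt.Printf("mismatch: docker=%s kubelet=%s (kubelet would crash-loop)\n", driver, kubeletDriver)
    	}
    }
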
	I0501 03:51:59.291530   13472 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 03:51:59.309539   13472 command_runner.go:130] > kubeadm
	I0501 03:51:59.309539   13472 command_runner.go:130] > kubectl
	I0501 03:51:59.310543   13472 command_runner.go:130] > kubelet
	I0501 03:51:59.310596   13472 binaries.go:44] Found k8s binaries, skipping transfer
	I0501 03:51:59.322632   13472 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0501 03:51:59.339895   13472 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0501 03:51:59.376565   13472 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 03:51:59.407819   13472 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
	I0501 03:51:59.452818   13472 ssh_runner.go:195] Run: grep 172.28.209.152	control-plane.minikube.internal$ /etc/hosts
	I0501 03:51:59.458997   13472 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.209.152	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
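
The bash one-liner above is an idempotent hosts update: drop any stale control-plane.minikube.internal line, append the current mapping, and copy the result back with sudo. The same idea in Go, writing to a throwaway file rather than the real /etc/hosts:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const entry = "172.28.209.152\tcontrol-plane.minikube.internal" // values from the log
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		// Drop any previous mapping for the control-plane alias (the grep -v step).
    		if !strings.HasSuffix(line, "control-plane.minikube.internal") {
    			kept = append(kept, line)
    		}
    	}
    	updated := strings.TrimRight(strings.Join(kept, "\n"), "\n") + "\n" + entry + "\n"
    	// A temp file stands in for the `sudo cp` back over /etc/hosts.
    	os.WriteFile("/tmp/hosts.updated", []byte(updated), 0644)
    }
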
	I0501 03:51:59.494437   13472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:51:59.716159   13472 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 03:51:59.748686   13472 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800 for IP: 172.28.209.152
	I0501 03:51:59.748822   13472 certs.go:194] generating shared ca certs ...
	I0501 03:51:59.748926   13472 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:51:59.772276   13472 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0501 03:51:59.790487   13472 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0501 03:51:59.790487   13472 certs.go:256] generating profile certs ...
	I0501 03:51:59.791472   13472 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\client.key
	I0501 03:51:59.791472   13472 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\client.crt with IP's: []
	I0501 03:52:00.138092   13472 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\client.crt ...
	I0501 03:52:00.138092   13472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\client.crt: {Name:mk378a61c6c3ce17fe52c425b3b372477c67bad3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:52:00.140169   13472 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\client.key ...
	I0501 03:52:00.140169   13472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\client.key: {Name:mk3abe0545440de87c2612d27db29808ae9939c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:52:00.140701   13472 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\apiserver.key.de45c887
	I0501 03:52:00.140701   13472 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\apiserver.crt.de45c887 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.209.152]
	I0501 03:52:00.614458   13472 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\apiserver.crt.de45c887 ...
	I0501 03:52:00.614458   13472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\apiserver.crt.de45c887: {Name:mkb661c69732b234d6ebeb8f5a89ce457e77a97d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:52:00.616615   13472 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\apiserver.key.de45c887 ...
	I0501 03:52:00.616615   13472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\apiserver.key.de45c887: {Name:mk966f7831bd3bc7f019f3dbbc0ffde330feb30a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:52:00.618005   13472 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\apiserver.crt.de45c887 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\apiserver.crt
	I0501 03:52:00.631531   13472 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\apiserver.key.de45c887 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\apiserver.key
	I0501 03:52:00.632383   13472 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\proxy-client.key
	I0501 03:52:00.633396   13472 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\proxy-client.crt with IP's: []
	I0501 03:52:00.800347   13472 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\proxy-client.crt ...
	I0501 03:52:00.800347   13472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\proxy-client.crt: {Name:mk0375a1da62fa97cf609bfabfb21717e56909f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:52:00.802243   13472 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\proxy-client.key ...
	I0501 03:52:00.802243   13472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\proxy-client.key: {Name:mka3d37e752806711635470056ff20ef7847cf0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
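
The certs.go/crypto.go lines above generate CA-signed profile certificates. A compact crypto/x509 sketch of the same pattern — a self-signed CA, then a client certificate signed by it — with all names and lifetimes chosen for illustration (only "minikubeCA", "minikube-user", and the 26280h expiry are taken from the log):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"time"
    )

    func main() {
    	// Self-signed CA, as in "generating shared ca certs" above.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Client cert signed by the CA, as for the "minikube-user" profile cert.
    	cliKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	cliTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube-user"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
    	}
    	cliDER, _ := x509.CreateCertificate(rand.Reader, cliTmpl, caCert, &cliKey.PublicKey, caKey)
    	fmt.Printf("CA %d bytes, client cert %d bytes\n", len(caDER), len(cliDER))
    }
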
	I0501 03:52:00.803581   13472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0501 03:52:00.803857   13472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0501 03:52:00.803917   13472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0501 03:52:00.803917   13472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0501 03:52:00.803917   13472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0501 03:52:00.803917   13472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0501 03:52:00.803917   13472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0501 03:52:00.814678   13472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0501 03:52:00.815738   13472 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem (1338 bytes)
	W0501 03:52:00.826745   13472 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288_empty.pem, impossibly tiny 0 bytes
	I0501 03:52:00.826916   13472 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0501 03:52:00.827360   13472 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0501 03:52:00.827692   13472 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0501 03:52:00.828200   13472 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0501 03:52:00.828842   13472 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem (1708 bytes)
	I0501 03:52:00.829104   13472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:52:00.829267   13472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem -> /usr/share/ca-certificates/14288.pem
	I0501 03:52:00.829436   13472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /usr/share/ca-certificates/142882.pem
	I0501 03:52:00.830941   13472 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 03:52:00.879248   13472 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 03:52:00.935588   13472 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 03:52:00.986628   13472 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0501 03:52:01.039245   13472 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0501 03:52:01.103617   13472 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0501 03:52:01.164148   13472 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 03:52:01.213305   13472 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0501 03:52:01.264780   13472 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 03:52:01.314222   13472 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem --> /usr/share/ca-certificates/14288.pem (1338 bytes)
	I0501 03:52:01.367022   13472 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /usr/share/ca-certificates/142882.pem (1708 bytes)
	I0501 03:52:01.414954   13472 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0501 03:52:01.461384   13472 ssh_runner.go:195] Run: openssl version
	I0501 03:52:01.470321   13472 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0501 03:52:01.483904   13472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142882.pem && ln -fs /usr/share/ca-certificates/142882.pem /etc/ssl/certs/142882.pem"
	I0501 03:52:01.519015   13472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142882.pem
	I0501 03:52:01.525276   13472 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May  1 02:27 /usr/share/ca-certificates/142882.pem
	I0501 03:52:01.526177   13472 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:27 /usr/share/ca-certificates/142882.pem
	I0501 03:52:01.538972   13472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142882.pem
	I0501 03:52:01.548044   13472 command_runner.go:130] > 3ec20f2e
	I0501 03:52:01.560789   13472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142882.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 03:52:01.599733   13472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 03:52:01.638810   13472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:52:01.646276   13472 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May  1 02:12 /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:52:01.646319   13472 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:12 /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:52:01.659446   13472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:52:01.669176   13472 command_runner.go:130] > b5213941
	I0501 03:52:01.682552   13472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 03:52:01.721185   13472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14288.pem && ln -fs /usr/share/ca-certificates/14288.pem /etc/ssl/certs/14288.pem"
	I0501 03:52:01.756330   13472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14288.pem
	I0501 03:52:01.766437   13472 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May  1 02:27 /usr/share/ca-certificates/14288.pem
	I0501 03:52:01.766437   13472 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:27 /usr/share/ca-certificates/14288.pem
	I0501 03:52:01.780402   13472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14288.pem
	I0501 03:52:01.789865   13472 command_runner.go:130] > 51391683
	I0501 03:52:01.803847   13472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14288.pem /etc/ssl/certs/51391683.0"
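
The openssl x509 -hash / ln -fs pairs above implement OpenSSL's c_rehash convention: each trusted PEM in /etc/ssl/certs gets a symlink named <subject-hash>.0 so the library can find certificates by directory lookup. A sketch of one such pair, shelling out to openssl exactly as the log does (paths are the logged ones; creating the link needs root):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	const pem = "/usr/share/ca-certificates/minikubeCA.pem"
    	// Same command as the log; prints the subject hash, e.g. "b5213941".
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		fmt.Println("openssl failed:", err)
    		return
    	}
    	hash := strings.TrimSpace(string(out))
    	link := "/etc/ssl/certs/" + hash + ".0"
    	// ln -fs equivalent; the real flow guards with `test -L` first.
    	os.Remove(link)
    	if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
    		fmt.Println("symlink failed:", err)
    	}
    }
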
	I0501 03:52:01.836742   13472 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 03:52:01.842173   13472 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0501 03:52:01.842577   13472 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0501 03:52:01.842872   13472 kubeadm.go:391] StartCluster: {Name:multinode-289800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-289800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.209.152 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:52:01.853817   13472 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0501 03:52:01.891391   13472 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0501 03:52:01.911174   13472 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0501 03:52:01.911174   13472 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0501 03:52:01.911174   13472 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0501 03:52:01.925109   13472 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 03:52:01.957227   13472 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 03:52:01.976195   13472 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0501 03:52:01.976195   13472 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0501 03:52:01.976195   13472 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0501 03:52:01.976195   13472 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:52:01.976415   13472 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 03:52:01.976415   13472 kubeadm.go:156] found existing configuration files:
	
	I0501 03:52:01.993271   13472 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 03:52:02.010822   13472 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:52:02.011461   13472 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 03:52:02.023006   13472 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 03:52:02.057712   13472 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 03:52:02.075609   13472 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:52:02.075881   13472 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 03:52:02.088260   13472 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 03:52:02.118239   13472 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 03:52:02.135705   13472 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:52:02.135705   13472 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 03:52:02.147697   13472 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 03:52:02.177438   13472 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 03:52:02.195410   13472 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:52:02.196402   13472 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 03:52:02.208404   13472 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0501 03:52:02.228428   13472 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0501 03:52:02.711799   13472 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0501 03:52:02.711799   13472 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0501 03:52:16.170501   13472 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0501 03:52:16.170501   13472 command_runner.go:130] > [init] Using Kubernetes version: v1.30.0
	I0501 03:52:16.170641   13472 kubeadm.go:309] [preflight] Running pre-flight checks
	I0501 03:52:16.170641   13472 command_runner.go:130] > [preflight] Running pre-flight checks
	I0501 03:52:16.170715   13472 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0501 03:52:16.170715   13472 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0501 03:52:16.170715   13472 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0501 03:52:16.170715   13472 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0501 03:52:16.171249   13472 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0501 03:52:16.171249   13472 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0501 03:52:16.171249   13472 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0501 03:52:16.174320   13472 out.go:204]   - Generating certificates and keys ...
	I0501 03:52:16.171627   13472 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0501 03:52:16.174498   13472 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0501 03:52:16.174498   13472 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0501 03:52:16.174862   13472 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0501 03:52:16.174862   13472 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0501 03:52:16.174862   13472 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0501 03:52:16.174862   13472 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0501 03:52:16.174862   13472 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0501 03:52:16.174862   13472 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0501 03:52:16.174862   13472 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0501 03:52:16.174862   13472 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0501 03:52:16.175507   13472 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0501 03:52:16.175507   13472 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0501 03:52:16.175609   13472 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0501 03:52:16.175609   13472 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0501 03:52:16.175768   13472 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-289800] and IPs [172.28.209.152 127.0.0.1 ::1]
	I0501 03:52:16.175768   13472 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-289800] and IPs [172.28.209.152 127.0.0.1 ::1]
	I0501 03:52:16.175768   13472 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0501 03:52:16.175768   13472 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0501 03:52:16.176453   13472 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-289800] and IPs [172.28.209.152 127.0.0.1 ::1]
	I0501 03:52:16.176453   13472 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-289800] and IPs [172.28.209.152 127.0.0.1 ::1]
	I0501 03:52:16.176628   13472 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0501 03:52:16.176628   13472 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0501 03:52:16.176718   13472 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0501 03:52:16.176806   13472 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0501 03:52:16.176966   13472 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0501 03:52:16.176966   13472 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0501 03:52:16.177115   13472 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0501 03:52:16.177115   13472 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0501 03:52:16.177291   13472 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0501 03:52:16.177291   13472 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0501 03:52:16.177439   13472 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0501 03:52:16.177439   13472 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0501 03:52:16.177852   13472 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0501 03:52:16.177852   13472 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0501 03:52:16.177852   13472 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0501 03:52:16.177852   13472 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0501 03:52:16.177852   13472 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0501 03:52:16.177852   13472 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0501 03:52:16.177852   13472 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0501 03:52:16.177852   13472 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0501 03:52:16.178441   13472 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0501 03:52:16.178629   13472 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0501 03:52:16.181254   13472 out.go:204]   - Booting up control plane ...
	I0501 03:52:16.181363   13472 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0501 03:52:16.181363   13472 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0501 03:52:16.181363   13472 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0501 03:52:16.181363   13472 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0501 03:52:16.181363   13472 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0501 03:52:16.181999   13472 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0501 03:52:16.181999   13472 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0501 03:52:16.181999   13472 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0501 03:52:16.181999   13472 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0501 03:52:16.181999   13472 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0501 03:52:16.182852   13472 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0501 03:52:16.182852   13472 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0501 03:52:16.182852   13472 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0501 03:52:16.182852   13472 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0501 03:52:16.182852   13472 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0501 03:52:16.182852   13472 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0501 03:52:16.183443   13472 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002999976s
	I0501 03:52:16.183443   13472 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.002999976s
	I0501 03:52:16.183618   13472 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0501 03:52:16.183618   13472 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0501 03:52:16.183618   13472 kubeadm.go:309] [api-check] The API server is healthy after 7.002491518s
	I0501 03:52:16.183618   13472 command_runner.go:130] > [api-check] The API server is healthy after 7.002491518s
	I0501 03:52:16.183618   13472 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0501 03:52:16.183618   13472 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0501 03:52:16.184216   13472 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0501 03:52:16.184216   13472 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0501 03:52:16.184216   13472 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0501 03:52:16.184216   13472 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0501 03:52:16.184816   13472 kubeadm.go:309] [mark-control-plane] Marking the node multinode-289800 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0501 03:52:16.184816   13472 command_runner.go:130] > [mark-control-plane] Marking the node multinode-289800 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0501 03:52:16.184816   13472 command_runner.go:130] > [bootstrap-token] Using token: a5pz2m.l4exy2yct983605d
	I0501 03:52:16.184816   13472 kubeadm.go:309] [bootstrap-token] Using token: a5pz2m.l4exy2yct983605d
	I0501 03:52:16.187553   13472 out.go:204]   - Configuring RBAC rules ...
	I0501 03:52:16.187812   13472 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0501 03:52:16.187892   13472 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0501 03:52:16.188093   13472 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0501 03:52:16.188093   13472 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0501 03:52:16.188093   13472 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0501 03:52:16.188093   13472 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0501 03:52:16.188093   13472 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0501 03:52:16.188648   13472 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0501 03:52:16.188798   13472 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0501 03:52:16.188798   13472 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0501 03:52:16.188798   13472 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0501 03:52:16.188798   13472 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0501 03:52:16.189490   13472 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0501 03:52:16.189490   13472 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0501 03:52:16.189490   13472 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0501 03:52:16.189490   13472 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0501 03:52:16.189490   13472 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0501 03:52:16.189490   13472 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0501 03:52:16.189490   13472 kubeadm.go:309] 
	I0501 03:52:16.189490   13472 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0501 03:52:16.190034   13472 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0501 03:52:16.190073   13472 kubeadm.go:309] 
	I0501 03:52:16.190073   13472 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0501 03:52:16.190073   13472 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0501 03:52:16.190073   13472 kubeadm.go:309] 
	I0501 03:52:16.190073   13472 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0501 03:52:16.190073   13472 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0501 03:52:16.190073   13472 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0501 03:52:16.190073   13472 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0501 03:52:16.190073   13472 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0501 03:52:16.190073   13472 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0501 03:52:16.190073   13472 kubeadm.go:309] 
	I0501 03:52:16.190073   13472 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0501 03:52:16.191109   13472 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0501 03:52:16.191109   13472 kubeadm.go:309] 
	I0501 03:52:16.191261   13472 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0501 03:52:16.191261   13472 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0501 03:52:16.191261   13472 kubeadm.go:309] 
	I0501 03:52:16.191261   13472 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0501 03:52:16.191261   13472 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0501 03:52:16.191261   13472 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0501 03:52:16.191261   13472 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0501 03:52:16.191261   13472 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0501 03:52:16.191261   13472 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0501 03:52:16.191261   13472 kubeadm.go:309] 
	I0501 03:52:16.191261   13472 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0501 03:52:16.191261   13472 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0501 03:52:16.192206   13472 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0501 03:52:16.192206   13472 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0501 03:52:16.192206   13472 kubeadm.go:309] 
	I0501 03:52:16.192206   13472 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token a5pz2m.l4exy2yct983605d \
	I0501 03:52:16.192206   13472 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token a5pz2m.l4exy2yct983605d \
	I0501 03:52:16.192206   13472 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:4798dcaffc6298c78af5ad06745de18c1231853c65c9db4cd09ab1be96f5e875 \
	I0501 03:52:16.192206   13472 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:4798dcaffc6298c78af5ad06745de18c1231853c65c9db4cd09ab1be96f5e875 \
	I0501 03:52:16.192206   13472 command_runner.go:130] > 	--control-plane 
	I0501 03:52:16.192206   13472 kubeadm.go:309] 	--control-plane 
	I0501 03:52:16.192206   13472 kubeadm.go:309] 
	I0501 03:52:16.192206   13472 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0501 03:52:16.192206   13472 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0501 03:52:16.192206   13472 kubeadm.go:309] 
	I0501 03:52:16.192206   13472 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token a5pz2m.l4exy2yct983605d \
	I0501 03:52:16.192206   13472 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token a5pz2m.l4exy2yct983605d \
	I0501 03:52:16.193236   13472 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:4798dcaffc6298c78af5ad06745de18c1231853c65c9db4cd09ab1be96f5e875 
	I0501 03:52:16.193236   13472 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:4798dcaffc6298c78af5ad06745de18c1231853c65c9db4cd09ab1be96f5e875 
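
The --discovery-token-ca-cert-hash printed in the join commands above is a pin on the cluster CA's public key: SHA-256 over the DER-encoded SubjectPublicKeyInfo of ca.crt. It can be recomputed from the certificate alone, as this sketch shows (the path is the standard kubeadm location, assumed here):

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	data, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // standard kubeadm path
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		fmt.Println("not a PEM file")
    		return
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	// kubeadm pins sha256 over the DER SubjectPublicKeyInfo of the CA cert.
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Printf("sha256:%x\n", sum) // should match the hash in the join command
    }
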
	I0501 03:52:16.193236   13472 cni.go:84] Creating CNI manager for ""
	I0501 03:52:16.193236   13472 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0501 03:52:16.197620   13472 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0501 03:52:16.215384   13472 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0501 03:52:16.224841   13472 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0501 03:52:16.224841   13472 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0501 03:52:16.224841   13472 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0501 03:52:16.224841   13472 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0501 03:52:16.224841   13472 command_runner.go:130] > Access: 2024-05-01 03:50:20.136136500 +0000
	I0501 03:52:16.224841   13472 command_runner.go:130] > Modify: 2024-04-30 23:29:30.000000000 +0000
	I0501 03:52:16.224841   13472 command_runner.go:130] > Change: 2024-05-01 03:50:10.154000000 +0000
	I0501 03:52:16.224841   13472 command_runner.go:130] >  Birth: -
	I0501 03:52:16.224841   13472 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0501 03:52:16.224841   13472 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0501 03:52:16.286576   13472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0501 03:52:16.915465   13472 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0501 03:52:16.948468   13472 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0501 03:52:16.976466   13472 command_runner.go:130] > serviceaccount/kindnet created
	I0501 03:52:17.005105   13472 command_runner.go:130] > daemonset.apps/kindnet created
	I0501 03:52:17.008493   13472 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0501 03:52:17.024221   13472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-289800 minikube.k8s.io/updated_at=2024_05_01T03_52_17_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e minikube.k8s.io/name=multinode-289800 minikube.k8s.io/primary=true
	I0501 03:52:17.028036   13472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:52:17.049967   13472 command_runner.go:130] > -16
	I0501 03:52:17.051153   13472 ops.go:34] apiserver oom_adj: -16
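
The oom_adj probe above confirms the kube-apiserver process is shielded from the OOM killer (-16). Reading the same value directly in Go, with pgrep standing in for the $(pgrep kube-apiserver) substitution in the logged command:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("pgrep", "kube-apiserver").Output()
    	if err != nil {
    		fmt.Println("apiserver not running:", err)
    		return
    	}
    	pid := strings.Fields(string(out))[0] // first match, as $(pgrep ...) would take
    	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(adj))) // -16 in the run above
    }
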
	I0501 03:52:17.259387   13472 command_runner.go:130] > node/multinode-289800 labeled
	I0501 03:52:17.259486   13472 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0501 03:52:17.273735   13472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:52:17.396716   13472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0501 03:52:17.787652   13472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:52:17.924155   13472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0501 03:52:18.279864   13472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:52:18.403839   13472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0501 03:52:18.779498   13472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:52:18.908710   13472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0501 03:52:19.277995   13472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:52:19.414433   13472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0501 03:52:19.788301   13472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:52:19.913645   13472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0501 03:52:20.288314   13472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:52:20.405905   13472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0501 03:52:20.788960   13472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:52:20.920959   13472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0501 03:52:21.275905   13472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:52:21.402236   13472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0501 03:52:21.781371   13472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:52:21.899399   13472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0501 03:52:22.286237   13472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:52:22.403799   13472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0501 03:52:22.788277   13472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:52:22.915555   13472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0501 03:52:23.276657   13472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:52:23.398379   13472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0501 03:52:23.780797   13472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:52:23.903540   13472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0501 03:52:24.287078   13472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:52:24.406665   13472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0501 03:52:24.791290   13472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:52:24.917639   13472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0501 03:52:25.276899   13472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:52:25.390822   13472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0501 03:52:25.777087   13472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:52:25.913885   13472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0501 03:52:26.279230   13472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:52:26.390708   13472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0501 03:52:26.784734   13472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:52:26.897823   13472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0501 03:52:27.284946   13472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:52:27.429997   13472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0501 03:52:27.789424   13472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:52:27.921003   13472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0501 03:52:28.275666   13472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:52:28.423718   13472 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0501 03:52:28.779650   13472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 03:52:28.904287   13472 command_runner.go:130] > NAME      SECRETS   AGE
	I0501 03:52:28.904353   13472 command_runner.go:130] > default   0         0s
	I0501 03:52:28.904353   13472 kubeadm.go:1107] duration metric: took 11.8957092s to wait for elevateKubeSystemPrivileges
	W0501 03:52:28.904353   13472 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0501 03:52:28.904353   13472 kubeadm.go:393] duration metric: took 27.0612789s to StartCluster
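The block of repeated "serviceaccounts \"default\" not found" lines above is the elevateKubeSystemPrivileges wait: minikube simply re-runs `kubectl get sa default` roughly twice a second until kubeadm's controller-manager creates the default service account, which is what finally returns the NAME/SECRETS/AGE table at 03:52:28. A minimal client-go sketch of the same wait follows; the 500ms interval, the 2-minute timeout, and the function/package names are illustrative assumptions, not minikube's actual code.

    package example

    import (
        "context"
        "time"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForDefaultSA polls the API server until the "default" ServiceAccount
    // exists. NotFound means "keep waiting"; any other error aborts the poll.
    func waitForDefaultSA(ctx context.Context, cs kubernetes.Interface) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 2*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                _, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
                if apierrors.IsNotFound(err) {
                    return false, nil
                }
                return err == nil, err
            })
    }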
	I0501 03:52:28.904353   13472 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:52:28.904353   13472 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0501 03:52:28.905671   13472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:52:28.907597   13472 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0501 03:52:28.907766   13472 start.go:234] Will wait 6m0s for node &{Name: IP:172.28.209.152 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0501 03:52:28.911057   13472 out.go:177] * Verifying Kubernetes components...
	I0501 03:52:28.907766   13472 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0501 03:52:28.908190   13472 config.go:182] Loaded profile config "multinode-289800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 03:52:28.911225   13472 addons.go:69] Setting storage-provisioner=true in profile "multinode-289800"
	I0501 03:52:28.911241   13472 addons.go:69] Setting default-storageclass=true in profile "multinode-289800"
	I0501 03:52:28.915042   13472 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-289800"
	I0501 03:52:28.911241   13472 addons.go:234] Setting addon storage-provisioner=true in "multinode-289800"
	I0501 03:52:28.915042   13472 host.go:66] Checking if "multinode-289800" exists ...
	I0501 03:52:28.915969   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 03:52:28.916035   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 03:52:28.928746   13472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:52:29.161090   13472 command_runner.go:130] > apiVersion: v1
	I0501 03:52:29.161090   13472 command_runner.go:130] > data:
	I0501 03:52:29.161201   13472 command_runner.go:130] >   Corefile: |
	I0501 03:52:29.161201   13472 command_runner.go:130] >     .:53 {
	I0501 03:52:29.161201   13472 command_runner.go:130] >         errors
	I0501 03:52:29.161201   13472 command_runner.go:130] >         health {
	I0501 03:52:29.161201   13472 command_runner.go:130] >            lameduck 5s
	I0501 03:52:29.161201   13472 command_runner.go:130] >         }
	I0501 03:52:29.161201   13472 command_runner.go:130] >         ready
	I0501 03:52:29.161201   13472 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0501 03:52:29.161201   13472 command_runner.go:130] >            pods insecure
	I0501 03:52:29.161201   13472 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0501 03:52:29.161291   13472 command_runner.go:130] >            ttl 30
	I0501 03:52:29.161291   13472 command_runner.go:130] >         }
	I0501 03:52:29.161291   13472 command_runner.go:130] >         prometheus :9153
	I0501 03:52:29.161291   13472 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0501 03:52:29.161291   13472 command_runner.go:130] >            max_concurrent 1000
	I0501 03:52:29.161291   13472 command_runner.go:130] >         }
	I0501 03:52:29.161291   13472 command_runner.go:130] >         cache 30
	I0501 03:52:29.161362   13472 command_runner.go:130] >         loop
	I0501 03:52:29.161362   13472 command_runner.go:130] >         reload
	I0501 03:52:29.161393   13472 command_runner.go:130] >         loadbalance
	I0501 03:52:29.161480   13472 command_runner.go:130] >     }
	I0501 03:52:29.161480   13472 command_runner.go:130] > kind: ConfigMap
	I0501 03:52:29.161480   13472 command_runner.go:130] > metadata:
	I0501 03:52:29.161480   13472 command_runner.go:130] >   creationTimestamp: "2024-05-01T03:52:15Z"
	I0501 03:52:29.161561   13472 command_runner.go:130] >   name: coredns
	I0501 03:52:29.161561   13472 command_runner.go:130] >   namespace: kube-system
	I0501 03:52:29.161561   13472 command_runner.go:130] >   resourceVersion: "231"
	I0501 03:52:29.161561   13472 command_runner.go:130] >   uid: 674d5eaf-1063-431a-8310-a65c5bb0f2a6
	I0501 03:52:29.161839   13472 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.28.208.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0501 03:52:29.307054   13472 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 03:52:29.760149   13472 command_runner.go:130] > configmap/coredns replaced
	I0501 03:52:29.763368   13472 start.go:946] {"host.minikube.internal": 172.28.208.1} host record injected into CoreDNS's ConfigMap
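The sed pipeline a few lines up is how the host record gets injected: it splices a hosts block (the gateway IP 172.28.208.1 mapped to host.minikube.internal, then fallthrough) into the Corefile dumped above, just before the forward plugin, adds a log directive before errors, and replaces the ConfigMap. A hedged client-go equivalent of the hosts-block edit is sketched below; the helper name and error handling are illustrative.

    // (imports: context, fmt, strings, metav1 "k8s.io/apimachinery/pkg/apis/meta/v1",
    //  "k8s.io/client-go/kubernetes" -- as in the sketch above)

    // injectHostRecord mirrors the sed edit: splice a hosts{} stanza in front
    // of the forward plugin so pods can resolve host.minikube.internal.
    func injectHostRecord(ctx context.Context, cs kubernetes.Interface, hostIP string) error {
        cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return err
        }
        hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
        cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"],
            "        forward . /etc/resolv.conf",
            hosts+"        forward . /etc/resolv.conf", 1)
        _, err = cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{})
        return err
    }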
	I0501 03:52:29.764889   13472 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0501 03:52:29.765330   13472 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0501 03:52:29.766032   13472 kapi.go:59] client config for multinode-289800: &rest.Config{Host:"https://172.28.209.152:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-289800\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-289800\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1b95ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0501 03:52:29.766488   13472 kapi.go:59] client config for multinode-289800: &rest.Config{Host:"https://172.28.209.152:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-289800\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-289800\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1b95ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0501 03:52:29.767782   13472 cert_rotation.go:137] Starting client certificate rotation controller
	I0501 03:52:29.768392   13472 node_ready.go:35] waiting up to 6m0s for node "multinode-289800" to be "Ready" ...
	I0501 03:52:29.768633   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800
	I0501 03:52:29.768719   13472 round_trippers.go:469] Request Headers:
	I0501 03:52:29.768633   13472 round_trippers.go:463] GET https://172.28.209.152:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0501 03:52:29.768719   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:52:29.768719   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:52:29.768719   13472 round_trippers.go:469] Request Headers:
	I0501 03:52:29.768719   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:52:29.768719   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:52:29.802149   13472 round_trippers.go:574] Response Status: 200 OK in 33 milliseconds
	I0501 03:52:29.802149   13472 round_trippers.go:577] Response Headers:
	I0501 03:52:29.802149   13472 round_trippers.go:580]     Audit-Id: 151d9aa9-2900-4039-8323-77c9dc08707d
	I0501 03:52:29.802149   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:52:29.802149   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:52:29.802149   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:52:29.802149   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:52:29.802149   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:52:29 GMT
	I0501 03:52:29.803344   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"318","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0501 03:52:29.805264   13472 round_trippers.go:574] Response Status: 200 OK in 36 milliseconds
	I0501 03:52:29.805357   13472 round_trippers.go:577] Response Headers:
	I0501 03:52:29.805357   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:52:29 GMT
	I0501 03:52:29.805357   13472 round_trippers.go:580]     Audit-Id: beb464bb-81fe-49e2-80f5-0d9e9dbb0530
	I0501 03:52:29.805451   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:52:29.805451   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:52:29.805451   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:52:29.805512   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:52:29.805563   13472 round_trippers.go:580]     Content-Length: 291
	I0501 03:52:29.805563   13472 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"e293ff8c-f8e6-4464-82c6-0a01a4d80fb4","resourceVersion":"317","creationTimestamp":"2024-05-01T03:52:15Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0501 03:52:29.805965   13472 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"e293ff8c-f8e6-4464-82c6-0a01a4d80fb4","resourceVersion":"317","creationTimestamp":"2024-05-01T03:52:15Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0501 03:52:29.806616   13472 round_trippers.go:463] PUT https://172.28.209.152:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0501 03:52:29.806674   13472 round_trippers.go:469] Request Headers:
	I0501 03:52:29.806674   13472 round_trippers.go:473]     Content-Type: application/json
	I0501 03:52:29.806674   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:52:29.806674   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:52:29.827277   13472 round_trippers.go:574] Response Status: 409 Conflict in 20 milliseconds
	I0501 03:52:29.827277   13472 round_trippers.go:577] Response Headers:
	I0501 03:52:29.827277   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:52:29 GMT
	I0501 03:52:29.827277   13472 round_trippers.go:580]     Audit-Id: dc92a93a-2277-491d-8abf-ae247dc0e523
	I0501 03:52:29.827277   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:52:29.827277   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:52:29.827277   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:52:29.827277   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:52:29.827277   13472 round_trippers.go:580]     Content-Length: 332
	I0501 03:52:29.827277   13472 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Operation cannot be fulfilled on deployments.apps \"coredns\": the object has been modified; please apply your changes to the latest version and try again","reason":"Conflict","details":{"name":"coredns","group":"apps","kind":"deployments"},"code":409}
	W0501 03:52:29.827951   13472 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "multinode-289800" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E0501 03:52:29.827951   13472 start.go:159] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
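The 409 above is ordinary optimistic-concurrency behaviour: the PUT carried the Scale captured at resourceVersion "317", but something else had updated the coredns Deployment between minikube's GET and PUT, so the API server refused the stale write and minikube recorded the rescale as non-retryable. The standard client-go answer is to re-read and retry on conflict; a sketch under that assumption follows (the helper name is illustrative).

    // (imports: context, metav1 as above, plus "k8s.io/client-go/util/retry")

    // scaleCoreDNS re-reads the Scale subresource on every attempt so the PUT
    // always carries a fresh resourceVersion instead of the stale "317" above.
    func scaleCoreDNS(ctx context.Context, cs kubernetes.Interface, replicas int32) error {
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
            if err != nil {
                return err
            }
            scale.Spec.Replicas = replicas
            _, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
            return err
        })
    }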
	I0501 03:52:30.277675   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800
	I0501 03:52:30.277675   13472 round_trippers.go:469] Request Headers:
	I0501 03:52:30.277675   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:52:30.277675   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:52:30.282241   13472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 03:52:30.282859   13472 round_trippers.go:577] Response Headers:
	I0501 03:52:30.282859   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:52:30.282859   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:52:30.282859   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:52:30.282859   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:52:30.282859   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:52:30 GMT
	I0501 03:52:30.282859   13472 round_trippers.go:580]     Audit-Id: 7659961c-6387-4cd2-b086-e68c0e2aa56a
	I0501 03:52:30.283392   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"318","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0501 03:52:30.771786   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800
	I0501 03:52:30.771869   13472 round_trippers.go:469] Request Headers:
	I0501 03:52:30.771869   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:52:30.771869   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:52:30.784096   13472 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0501 03:52:30.784096   13472 round_trippers.go:577] Response Headers:
	I0501 03:52:30.784096   13472 round_trippers.go:580]     Audit-Id: c17082ea-a99b-4b4b-b1a4-0ba784cb5d9b
	I0501 03:52:30.784096   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:52:30.784096   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:52:30.784096   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:52:30.784096   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:52:30.784096   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:52:30 GMT
	I0501 03:52:30.784096   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"318","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0501 03:52:31.149188   13472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:52:31.149291   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:52:31.149291   13472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:52:31.149291   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:52:31.152794   13472 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 03:52:31.150195   13472 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0501 03:52:31.155578   13472 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 03:52:31.155578   13472 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0501 03:52:31.155578   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 03:52:31.155578   13472 kapi.go:59] client config for multinode-289800: &rest.Config{Host:"https://172.28.209.152:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-289800\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-289800\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1b95ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0501 03:52:31.156323   13472 addons.go:234] Setting addon default-storageclass=true in "multinode-289800"
	I0501 03:52:31.156849   13472 host.go:66] Checking if "multinode-289800" exists ...
	I0501 03:52:31.157670   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 03:52:31.280713   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800
	I0501 03:52:31.280993   13472 round_trippers.go:469] Request Headers:
	I0501 03:52:31.280993   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:52:31.280993   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:52:31.285991   13472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 03:52:31.285991   13472 round_trippers.go:577] Response Headers:
	I0501 03:52:31.285991   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:52:31.286126   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:52:31.286126   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:52:31 GMT
	I0501 03:52:31.286126   13472 round_trippers.go:580]     Audit-Id: c6d9c7c0-8227-46d6-9e50-1aabf59f0c21
	I0501 03:52:31.286126   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:52:31.286126   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:52:31.286497   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"318","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0501 03:52:31.773793   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800
	I0501 03:52:31.773885   13472 round_trippers.go:469] Request Headers:
	I0501 03:52:31.773885   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:52:31.773885   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:52:31.778337   13472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 03:52:31.778337   13472 round_trippers.go:577] Response Headers:
	I0501 03:52:31.779252   13472 round_trippers.go:580]     Audit-Id: 2911fde0-58ea-4de3-9cdb-60ba6050ffa9
	I0501 03:52:31.779252   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:52:31.779252   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:52:31.779252   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:52:31.779371   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:52:31.779371   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:52:31 GMT
	I0501 03:52:31.779492   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"318","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0501 03:52:31.780137   13472 node_ready.go:53] node "multinode-289800" has status "Ready":"False"
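From here the log settles into a poll: node_ready.go GETs /api/v1/nodes/multinode-289800 every ~500ms and reports "Ready":"False" until the kubelet publishes a Ready condition with status True. The test applied to each response body amounts to the following check (corev1 is k8s.io/api/core/v1; the helper name is illustrative).

    // nodeIsReady reports whether the Node carries a Ready condition with
    // status True; until then each poll above logs "Ready":"False".
    func nodeIsReady(n *corev1.Node) bool {
        for _, c := range n.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }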
	I0501 03:52:32.281940   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800
	I0501 03:52:32.282108   13472 round_trippers.go:469] Request Headers:
	I0501 03:52:32.282108   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:52:32.282174   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:52:32.287229   13472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 03:52:32.287229   13472 round_trippers.go:577] Response Headers:
	I0501 03:52:32.287229   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:52:32.287676   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:52:32.287676   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:52:32.287676   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:52:32 GMT
	I0501 03:52:32.287676   13472 round_trippers.go:580]     Audit-Id: d01cdd83-11cb-40bd-86bb-3aaa4ed60a31
	I0501 03:52:32.287676   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:52:32.291269   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"318","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0501 03:52:32.772916   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800
	I0501 03:52:32.772996   13472 round_trippers.go:469] Request Headers:
	I0501 03:52:32.772996   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:52:32.773066   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:52:32.777801   13472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 03:52:32.777801   13472 round_trippers.go:577] Response Headers:
	I0501 03:52:32.778163   13472 round_trippers.go:580]     Audit-Id: 76409805-272b-4ec3-b809-3529f7b11ab2
	I0501 03:52:32.778163   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:52:32.778163   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:52:32.778163   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:52:32.778163   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:52:32.778163   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:52:32 GMT
	I0501 03:52:32.778725   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"318","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0501 03:52:33.279298   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800
	I0501 03:52:33.279498   13472 round_trippers.go:469] Request Headers:
	I0501 03:52:33.279498   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:52:33.279498   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:52:33.282801   13472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 03:52:33.283819   13472 round_trippers.go:577] Response Headers:
	I0501 03:52:33.283819   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:52:33.283819   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:52:33.283819   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:52:33.283819   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:52:33.283819   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:52:33 GMT
	I0501 03:52:33.283819   13472 round_trippers.go:580]     Audit-Id: 09a81e16-6be2-46d6-af59-e933ffe67990
	I0501 03:52:33.283819   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"318","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0501 03:52:33.475084   13472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:52:33.475151   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:52:33.475213   13472 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0501 03:52:33.475281   13472 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0501 03:52:33.475591   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 03:52:33.502666   13472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:52:33.502666   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:52:33.502666   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 03:52:33.769715   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800
	I0501 03:52:33.769947   13472 round_trippers.go:469] Request Headers:
	I0501 03:52:33.769947   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:52:33.770061   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:52:33.773507   13472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 03:52:33.773863   13472 round_trippers.go:577] Response Headers:
	I0501 03:52:33.773863   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:52:33.773863   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:52:33.773863   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:52:33.773863   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:52:33.773863   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:52:33 GMT
	I0501 03:52:33.773863   13472 round_trippers.go:580]     Audit-Id: 70e04495-cc57-4153-a01a-16379f7e984e
	I0501 03:52:33.774575   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"318","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0501 03:52:34.274992   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800
	I0501 03:52:34.274992   13472 round_trippers.go:469] Request Headers:
	I0501 03:52:34.275057   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:52:34.275057   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:52:34.279414   13472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 03:52:34.279616   13472 round_trippers.go:577] Response Headers:
	I0501 03:52:34.279616   13472 round_trippers.go:580]     Audit-Id: 888925a7-80b5-44af-8d12-71fdc5a1a4a0
	I0501 03:52:34.279616   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:52:34.279616   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:52:34.279616   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:52:34.279704   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:52:34.279704   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:52:34 GMT
	I0501 03:52:34.280117   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"318","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0501 03:52:34.280771   13472 node_ready.go:53] node "multinode-289800" has status "Ready":"False"
	I0501 03:52:34.780610   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800
	I0501 03:52:34.780783   13472 round_trippers.go:469] Request Headers:
	I0501 03:52:34.780783   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:52:34.780783   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:52:34.785591   13472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 03:52:34.785591   13472 round_trippers.go:577] Response Headers:
	I0501 03:52:34.785591   13472 round_trippers.go:580]     Audit-Id: 9b849eb6-cdb1-4dcd-b08e-65e5893fbafe
	I0501 03:52:34.785591   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:52:34.785591   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:52:34.785591   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:52:34.785591   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:52:34.785591   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:52:34 GMT
	I0501 03:52:34.785591   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"318","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0501 03:52:35.272008   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800
	I0501 03:52:35.272008   13472 round_trippers.go:469] Request Headers:
	I0501 03:52:35.272008   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:52:35.272008   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:52:35.275021   13472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 03:52:35.275021   13472 round_trippers.go:577] Response Headers:
	I0501 03:52:35.275021   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:52:35.275021   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:52:35.275021   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:52:35 GMT
	I0501 03:52:35.275736   13472 round_trippers.go:580]     Audit-Id: 3a65390d-6c9f-4645-a9b5-e46f7ee7900a
	I0501 03:52:35.275736   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:52:35.275736   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:52:35.276401   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"318","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0501 03:52:35.734849   13472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:52:35.735643   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:52:35.735643   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 03:52:35.779035   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800
	I0501 03:52:35.779035   13472 round_trippers.go:469] Request Headers:
	I0501 03:52:35.779296   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:52:35.779296   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:52:35.782945   13472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 03:52:35.782945   13472 round_trippers.go:577] Response Headers:
	I0501 03:52:35.782945   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:52:35.783025   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:52:35 GMT
	I0501 03:52:35.783025   13472 round_trippers.go:580]     Audit-Id: 285b0c78-5054-4aed-8ffc-d4e40b320933
	I0501 03:52:35.783025   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:52:35.783025   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:52:35.783025   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:52:35.783289   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"318","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0501 03:52:36.163376   13472 main.go:141] libmachine: [stdout =====>] : 172.28.209.152
	
	I0501 03:52:36.163537   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:52:36.164229   13472 sshutil.go:53] new ssh client: &{IP:172.28.209.152 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-289800\id_rsa Username:docker}
	I0501 03:52:36.269470   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800
	I0501 03:52:36.269578   13472 round_trippers.go:469] Request Headers:
	I0501 03:52:36.269578   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:52:36.269578   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:52:36.274062   13472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 03:52:36.274062   13472 round_trippers.go:577] Response Headers:
	I0501 03:52:36.274062   13472 round_trippers.go:580]     Audit-Id: b6634a66-5ae1-4c0e-ad70-3bea34ba594b
	I0501 03:52:36.274062   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:52:36.274062   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:52:36.274062   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:52:36.274154   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:52:36.274154   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:52:36 GMT
	I0501 03:52:36.274441   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"318","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0501 03:52:36.341095   13472 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 03:52:36.775032   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800
	I0501 03:52:36.775283   13472 round_trippers.go:469] Request Headers:
	I0501 03:52:36.775283   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:52:36.775283   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:52:36.777043   13472 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0501 03:52:36.778087   13472 round_trippers.go:577] Response Headers:
	I0501 03:52:36.778087   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:52:36.778087   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:52:36.778087   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:52:36.778087   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:52:36.778087   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:52:36 GMT
	I0501 03:52:36.778087   13472 round_trippers.go:580]     Audit-Id: 2a5e88d3-1c99-4fc1-94b0-14f2bfe7e6c5
	I0501 03:52:36.778442   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"318","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0501 03:52:36.778442   13472 node_ready.go:53] node "multinode-289800" has status "Ready":"False"
	I0501 03:52:37.270076   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800
	I0501 03:52:37.270076   13472 round_trippers.go:469] Request Headers:
	I0501 03:52:37.270076   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:52:37.270076   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:52:37.272082   13472 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0501 03:52:37.272082   13472 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0501 03:52:37.272082   13472 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0501 03:52:37.272082   13472 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0501 03:52:37.272082   13472 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0501 03:52:37.272082   13472 command_runner.go:130] > pod/storage-provisioner created
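
Editor's note: the apply step above runs kubectl inside the guest over SSH (the ssh_runner entry at 03:52:36.341 issues the command, and command_runner echoes back the created objects). Below is a minimal Go sketch of that pattern using golang.org/x/crypto/ssh; the function name applyAddon is hypothetical, and this is an illustration of the logged behavior, not minikube's actual ssh_runner/sshutil code.

    package main

    import (
    	"fmt"
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // applyAddon runs `kubectl apply` inside the guest over SSH, the same shape
    // as the ssh_runner step logged above. Illustration only.
    func applyAddon(addr, user, keyPath, manifest string) error {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return err
    	}
    	client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
    	})
    	if err != nil {
    		return err
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		return err
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput("sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
    		"/var/lib/minikube/binaries/v1.30.0/kubectl apply -f " + manifest)
    	fmt.Print(string(out)) // e.g. "pod/storage-provisioner created"
    	return err
    }

    func main() {
    	// Endpoint, user, and key path mirror the sshutil.go entry later in this log.
    	err := applyAddon("172.28.209.152:22", "docker",
    		`C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-289800\id_rsa`,
    		"/etc/kubernetes/addons/storage-provisioner.yaml")
    	if err != nil {
    		log.Fatal(err)
    	}
    }
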
	I0501 03:52:37.274941   13472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 03:52:37.275042   13472 round_trippers.go:577] Response Headers:
	I0501 03:52:37.275042   13472 round_trippers.go:580]     Audit-Id: b30256ca-9ebf-4133-8e6b-7d3cb44e8ae0
	I0501 03:52:37.275042   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:52:37.275042   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:52:37.275042   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:52:37.275042   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:52:37.275042   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:52:37 GMT
	I0501 03:52:37.275298   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"318","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0501 03:52:37.783731   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800
	I0501 03:52:37.784178   13472 round_trippers.go:469] Request Headers:
	I0501 03:52:37.784178   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:52:37.784271   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:52:37.788933   13472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 03:52:37.788933   13472 round_trippers.go:577] Response Headers:
	I0501 03:52:37.788933   13472 round_trippers.go:580]     Audit-Id: ee9dd195-1051-493a-84ea-7aae880e4319
	I0501 03:52:37.788933   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:52:37.788933   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:52:37.788933   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:52:37.788933   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:52:37.789192   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:52:37 GMT
	I0501 03:52:37.789447   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"318","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0501 03:52:38.279333   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800
	I0501 03:52:38.279333   13472 round_trippers.go:469] Request Headers:
	I0501 03:52:38.279333   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:52:38.279333   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:52:38.282465   13472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 03:52:38.282465   13472 round_trippers.go:577] Response Headers:
	I0501 03:52:38.282465   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:52:38 GMT
	I0501 03:52:38.282465   13472 round_trippers.go:580]     Audit-Id: 92d1c824-4830-4293-9f37-939b9a89bb16
	I0501 03:52:38.282465   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:52:38.282568   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:52:38.282568   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:52:38.282568   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:52:38.282801   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"318","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0501 03:52:38.431853   13472 main.go:141] libmachine: [stdout =====>] : 172.28.209.152
	
	I0501 03:52:38.432017   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:52:38.432496   13472 sshutil.go:53] new ssh client: &{IP:172.28.209.152 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-289800\id_rsa Username:docker}
	I0501 03:52:38.587469   13472 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0501 03:52:38.742577   13472 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0501 03:52:38.742577   13472 round_trippers.go:463] GET https://172.28.209.152:8443/apis/storage.k8s.io/v1/storageclasses
	I0501 03:52:38.742577   13472 round_trippers.go:469] Request Headers:
	I0501 03:52:38.742577   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:52:38.742577   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:52:38.745614   13472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 03:52:38.745614   13472 round_trippers.go:577] Response Headers:
	I0501 03:52:38.746501   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:52:38.746501   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:52:38.746501   13472 round_trippers.go:580]     Content-Length: 1273
	I0501 03:52:38.746501   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:52:38 GMT
	I0501 03:52:38.746501   13472 round_trippers.go:580]     Audit-Id: 6ff6535e-647e-4bb2-8ecf-a14b2aa3cb30
	I0501 03:52:38.746501   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:52:38.746501   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:52:38.746571   13472 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"375"},"items":[{"metadata":{"name":"standard","uid":"4008f8f7-abec-49ab-ab36-9636f465e69d","resourceVersion":"375","creationTimestamp":"2024-05-01T03:52:38Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-01T03:52:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0501 03:52:38.747243   13472 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"4008f8f7-abec-49ab-ab36-9636f465e69d","resourceVersion":"375","creationTimestamp":"2024-05-01T03:52:38Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-01T03:52:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0501 03:52:38.747243   13472 round_trippers.go:463] PUT https://172.28.209.152:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0501 03:52:38.747243   13472 round_trippers.go:469] Request Headers:
	I0501 03:52:38.747243   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:52:38.747243   13472 round_trippers.go:473]     Content-Type: application/json
	I0501 03:52:38.747243   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:52:38.750843   13472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 03:52:38.750843   13472 round_trippers.go:577] Response Headers:
	I0501 03:52:38.750843   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:52:38.750843   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:52:38.750843   13472 round_trippers.go:580]     Content-Length: 1220
	I0501 03:52:38.750843   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:52:38 GMT
	I0501 03:52:38.750843   13472 round_trippers.go:580]     Audit-Id: 194d27d0-f073-4b1f-9da7-07c2631eb00b
	I0501 03:52:38.750843   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:52:38.750843   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:52:38.751827   13472 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"4008f8f7-abec-49ab-ab36-9636f465e69d","resourceVersion":"375","creationTimestamp":"2024-05-01T03:52:38Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-01T03:52:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0501 03:52:38.754660   13472 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0501 03:52:38.758571   13472 addons.go:505] duration metric: took 9.8507318s for enable addons: enabled=[storage-provisioner default-storageclass]
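
Editor's note: the GET and PUT at 03:52:38 above show the default-storageclass addon reading the "standard" StorageClass and writing it back with the storageclass.kubernetes.io/is-default-class annotation set to "true". Below is a minimal client-go sketch of that read-modify-update; markDefault is a hypothetical name and cs an assumed kubernetes.Interface, not minikube's actual code.

    package addons

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // markDefault fetches the StorageClass and updates it with the default-class
    // annotation, mirroring the GET + PUT pair logged above.
    func markDefault(ctx context.Context, cs kubernetes.Interface, name string) error {
    	sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return err
    	}
    	if sc.Annotations == nil {
    		sc.Annotations = map[string]string{}
    	}
    	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
    	_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
    	return err
    }

Sending the full object back with its resourceVersion (as the PUT body above does) lets the API server reject the update if the StorageClass changed in between.
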
	I0501 03:52:38.782479   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800
	I0501 03:52:38.782479   13472 round_trippers.go:469] Request Headers:
	I0501 03:52:38.782479   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:52:38.782479   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:52:38.787092   13472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 03:52:38.787253   13472 round_trippers.go:577] Response Headers:
	I0501 03:52:38.787253   13472 round_trippers.go:580]     Audit-Id: 58991330-438d-476a-917e-9c1fa4cb07f6
	I0501 03:52:38.787253   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:52:38.787253   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:52:38.787253   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:52:38.787253   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:52:38.787253   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:52:38 GMT
	I0501 03:52:38.787722   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"318","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0501 03:52:38.788089   13472 node_ready.go:53] node "multinode-289800" has status "Ready":"False"
	I0501 03:52:39.281986   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800
	I0501 03:52:39.281986   13472 round_trippers.go:469] Request Headers:
	I0501 03:52:39.281986   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:52:39.281986   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:52:39.292029   13472 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0501 03:52:39.292780   13472 round_trippers.go:577] Response Headers:
	I0501 03:52:39.292780   13472 round_trippers.go:580]     Audit-Id: 66d0c4a0-6448-4fa0-9a36-11937e98be03
	I0501 03:52:39.292780   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:52:39.292780   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:52:39.292780   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:52:39.292875   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:52:39.292875   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:52:39 GMT
	I0501 03:52:39.293115   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"318","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","fi [truncated 4929 chars]
	I0501 03:52:39.769629   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800
	I0501 03:52:39.769629   13472 round_trippers.go:469] Request Headers:
	I0501 03:52:39.769629   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:52:39.769629   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:52:39.773971   13472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 03:52:39.774258   13472 round_trippers.go:577] Response Headers:
	I0501 03:52:39.774258   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:52:39.774258   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:52:39.774258   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:52:39.774258   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:52:39.774258   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:52:39 GMT
	I0501 03:52:39.774258   13472 round_trippers.go:580]     Audit-Id: 20b56a4d-098d-44f0-ae47-7a9d2f2c14d9
	I0501 03:52:39.774527   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"378","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0501 03:52:39.774944   13472 node_ready.go:49] node "multinode-289800" has status "Ready":"True"
	I0501 03:52:39.775054   13472 node_ready.go:38] duration metric: took 10.0065864s for node "multinode-289800" to be "Ready" ...
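
Editor's note: the repeated GETs of /api/v1/nodes/multinode-289800 above are a readiness poll. node_ready.go fetches the node roughly every 500ms and checks its Ready condition until it reports True (about 10s here). Below is a minimal client-go sketch of that loop under the same assumptions; waitNodeReady is a hypothetical name, not minikube's implementation.

    package readiness

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // waitNodeReady mirrors the poll logged above: GET the node roughly twice a
    // second until its Ready condition is True or the timeout expires.
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					return nil // the log records: has status "Ready":"True"
    				}
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("node %q was not Ready within %v", name, timeout)
    }
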
	I0501 03:52:39.775054   13472 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:52:39.775163   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/namespaces/kube-system/pods
	I0501 03:52:39.775163   13472 round_trippers.go:469] Request Headers:
	I0501 03:52:39.775163   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:52:39.775300   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:52:39.781915   13472 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 03:52:39.781915   13472 round_trippers.go:577] Response Headers:
	I0501 03:52:39.781915   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:52:39.781915   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:52:39.781915   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:52:39 GMT
	I0501 03:52:39.781915   13472 round_trippers.go:580]     Audit-Id: a37f77c9-eedd-4d08-a363-d2b75b1f1513
	I0501 03:52:39.782536   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:52:39.782536   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:52:39.783741   13472 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"387"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"384","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 63827 chars]
	I0501 03:52:39.789564   13472 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-8w9hq" in "kube-system" namespace to be "Ready" ...
	I0501 03:52:39.789738   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 03:52:39.789824   13472 round_trippers.go:469] Request Headers:
	I0501 03:52:39.789824   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:52:39.789880   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:52:39.796309   13472 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 03:52:39.796309   13472 round_trippers.go:577] Response Headers:
	I0501 03:52:39.796309   13472 round_trippers.go:580]     Audit-Id: d652449f-a344-4cf6-8db2-dd0408a7eb4e
	I0501 03:52:39.796309   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:52:39.796309   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:52:39.796309   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:52:39.796309   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:52:39.796309   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:52:39 GMT
	I0501 03:52:39.796309   13472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"384","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0501 03:52:39.798021   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800
	I0501 03:52:39.798021   13472 round_trippers.go:469] Request Headers:
	I0501 03:52:39.798021   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:52:39.798021   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:52:39.802385   13472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 03:52:39.802385   13472 round_trippers.go:577] Response Headers:
	I0501 03:52:39.802385   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:52:39.802385   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:52:39.802385   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:52:39 GMT
	I0501 03:52:39.802385   13472 round_trippers.go:580]     Audit-Id: 26ee72b8-614b-418a-b6f6-b940cef7c50b
	I0501 03:52:39.802385   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:52:39.802385   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:52:39.802385   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"378","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0501 03:52:40.295377   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 03:52:40.295461   13472 round_trippers.go:469] Request Headers:
	I0501 03:52:40.295461   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:52:40.295461   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:52:40.301930   13472 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 03:52:40.301930   13472 round_trippers.go:577] Response Headers:
	I0501 03:52:40.301930   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:52:40.302369   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:52:40 GMT
	I0501 03:52:40.302369   13472 round_trippers.go:580]     Audit-Id: aac6b7ae-b6a5-4a68-b696-8e97167538ea
	I0501 03:52:40.302369   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:52:40.302369   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:52:40.302369   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:52:40.302448   13472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"384","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0501 03:52:40.303237   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800
	I0501 03:52:40.303237   13472 round_trippers.go:469] Request Headers:
	I0501 03:52:40.303237   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:52:40.303237   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:52:40.307038   13472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 03:52:40.307920   13472 round_trippers.go:577] Response Headers:
	I0501 03:52:40.307920   13472 round_trippers.go:580]     Audit-Id: 086ab5f3-a960-495e-9b9d-066e305aff04
	I0501 03:52:40.307920   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:52:40.307920   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:52:40.307920   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:52:40.307920   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:52:40.307920   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:52:40 GMT
	I0501 03:52:40.308168   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"378","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0501 03:52:40.790141   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 03:52:40.790217   13472 round_trippers.go:469] Request Headers:
	I0501 03:52:40.790217   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:52:40.790217   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:52:40.794801   13472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 03:52:40.795014   13472 round_trippers.go:577] Response Headers:
	I0501 03:52:40.795113   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:52:40 GMT
	I0501 03:52:40.795188   13472 round_trippers.go:580]     Audit-Id: 13f234ee-2d55-436b-9181-8fe4a75be1f9
	I0501 03:52:40.795188   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:52:40.795188   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:52:40.795188   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:52:40.795188   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:52:40.795449   13472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"384","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0501 03:52:40.795800   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800
	I0501 03:52:40.795800   13472 round_trippers.go:469] Request Headers:
	I0501 03:52:40.795800   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:52:40.795800   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:52:40.805795   13472 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0501 03:52:40.805795   13472 round_trippers.go:577] Response Headers:
	I0501 03:52:40.805795   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:52:40.805795   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:52:40.805795   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:52:40.805795   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:52:40.805795   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:52:40 GMT
	I0501 03:52:40.805795   13472 round_trippers.go:580]     Audit-Id: 3deb2dbe-b27d-4c26-a1fc-41c4be2bb2e5
	I0501 03:52:40.805795   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"378","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0501 03:52:41.294851   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 03:52:41.294928   13472 round_trippers.go:469] Request Headers:
	I0501 03:52:41.294984   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:52:41.294984   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:52:41.298936   13472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 03:52:41.299047   13472 round_trippers.go:577] Response Headers:
	I0501 03:52:41.299047   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:52:41.299047   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:52:41.299047   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:52:41.299047   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:52:41.299047   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:52:41 GMT
	I0501 03:52:41.299047   13472 round_trippers.go:580]     Audit-Id: 4392102b-29e7-426e-8411-f77de6a3332c
	I0501 03:52:41.299465   13472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"384","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0501 03:52:41.300121   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800
	I0501 03:52:41.300121   13472 round_trippers.go:469] Request Headers:
	I0501 03:52:41.300121   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:52:41.300121   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:52:41.303153   13472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 03:52:41.303153   13472 round_trippers.go:577] Response Headers:
	I0501 03:52:41.303153   13472 round_trippers.go:580]     Audit-Id: 13edf246-2b13-4f36-b0bf-444a0545d832
	I0501 03:52:41.303153   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:52:41.303153   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:52:41.303925   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:52:41.303925   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:52:41.303925   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:52:41 GMT
	I0501 03:52:41.304125   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"378","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0501 03:52:41.804562   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 03:52:41.804562   13472 round_trippers.go:469] Request Headers:
	I0501 03:52:41.804630   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:52:41.804630   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:52:41.808024   13472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 03:52:41.808970   13472 round_trippers.go:577] Response Headers:
	I0501 03:52:41.809033   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:52:41.809141   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:52:41 GMT
	I0501 03:52:41.809141   13472 round_trippers.go:580]     Audit-Id: e914d192-7bfc-4b7d-8102-2376bff1af54
	I0501 03:52:41.809191   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:52:41.809191   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:52:41.809191   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:52:41.809191   13472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"407","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6580 chars]
	I0501 03:52:41.809857   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800
	I0501 03:52:41.809857   13472 round_trippers.go:469] Request Headers:
	I0501 03:52:41.809857   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:52:41.810384   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:52:41.813442   13472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 03:52:41.813442   13472 round_trippers.go:577] Response Headers:
	I0501 03:52:41.813871   13472 round_trippers.go:580]     Audit-Id: 664de06a-a66c-4170-afe1-b66493d1d24f
	I0501 03:52:41.813871   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:52:41.813871   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:52:41.813871   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:52:41.813871   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:52:41.813871   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:52:41 GMT
	I0501 03:52:41.814572   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"378","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0501 03:52:41.815043   13472 pod_ready.go:92] pod "coredns-7db6d8ff4d-8w9hq" in "kube-system" namespace has status "Ready":"True"
	I0501 03:52:41.815122   13472 pod_ready.go:81] duration metric: took 2.0255424s for pod "coredns-7db6d8ff4d-8w9hq" in "kube-system" namespace to be "Ready" ...
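
Editor's note: each pod_ready iteration above pairs a pod GET with a node GET and succeeds once the pod's PodReady condition reports True, as it just did for coredns-7db6d8ff4d-8w9hq. A small client-go sketch of that condition check follows; isPodReady is a hypothetical helper, shown only to make the logged pattern concrete.

    package readiness

    import corev1 "k8s.io/api/core/v1"

    // isPodReady reports whether the pod's PodReady condition is True, i.e. the
    // test that flips a pod_ready line above from waiting to "Ready":"True".
    func isPodReady(p *corev1.Pod) bool {
    	for _, c := range p.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }
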
	I0501 03:52:41.815122   13472 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-x9zrw" in "kube-system" namespace to be "Ready" ...
	I0501 03:52:41.815367   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-x9zrw
	I0501 03:52:41.815367   13472 round_trippers.go:469] Request Headers:
	I0501 03:52:41.815367   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:52:41.815367   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:52:41.822687   13472 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 03:52:41.822687   13472 round_trippers.go:577] Response Headers:
	I0501 03:52:41.822687   13472 round_trippers.go:580]     Audit-Id: 890006d2-c7a3-442f-a207-7cf0cd4b6902
	I0501 03:52:41.822687   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:52:41.822687   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:52:41.822687   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:52:41.822687   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:52:41.822687   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:52:41 GMT
	I0501 03:52:41.823458   13472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-x9zrw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0b91b14d-bed3-4889-b193-db53daccd395","resourceVersion":"403","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6580 chars]
	I0501 03:52:41.824036   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800
	I0501 03:52:41.824036   13472 round_trippers.go:469] Request Headers:
	I0501 03:52:41.824036   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:52:41.824036   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:52:41.827468   13472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 03:52:41.827587   13472 round_trippers.go:577] Response Headers:
	I0501 03:52:41.827587   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:52:41.827587   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:52:41.827587   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:52:41.827587   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:52:41 GMT
	I0501 03:52:41.827587   13472 round_trippers.go:580]     Audit-Id: 4e148865-26bd-4110-b88d-600422aa76ab
	I0501 03:52:41.827587   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:52:41.828037   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"378","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0501 03:52:41.828463   13472 pod_ready.go:92] pod "coredns-7db6d8ff4d-x9zrw" in "kube-system" namespace has status "Ready":"True"
	I0501 03:52:41.828520   13472 pod_ready.go:81] duration metric: took 13.3982ms for pod "coredns-7db6d8ff4d-x9zrw" in "kube-system" namespace to be "Ready" ...
	I0501 03:52:41.828520   13472 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-289800" in "kube-system" namespace to be "Ready" ...
	I0501 03:52:41.828650   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-289800
	I0501 03:52:41.828650   13472 round_trippers.go:469] Request Headers:
	I0501 03:52:41.828650   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:52:41.828650   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:52:41.830800   13472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 03:52:41.830800   13472 round_trippers.go:577] Response Headers:
	I0501 03:52:41.830800   13472 round_trippers.go:580]     Audit-Id: d081faa8-b24c-4c49-a3f6-ccdf3993d310
	I0501 03:52:41.830800   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:52:41.830800   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:52:41.831535   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:52:41.831535   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:52:41.831535   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:52:41 GMT
	I0501 03:52:41.831690   13472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-289800","namespace":"kube-system","uid":"96a8cf0b-45bc-4636-9264-a0da579b5fa8","resourceVersion":"278","creationTimestamp":"2024-05-01T03:52:15Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.209.152:2379","kubernetes.io/config.hash":"c17e9f88f256f5527a6565eb2da75f63","kubernetes.io/config.mirror":"c17e9f88f256f5527a6565eb2da75f63","kubernetes.io/config.seen":"2024-05-01T03:52:15.688756845Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6170 chars]
	I0501 03:52:41.832657   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800
	I0501 03:52:41.832657   13472 round_trippers.go:469] Request Headers:
	I0501 03:52:41.832721   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:52:41.832721   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:52:41.837566   13472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 03:52:41.837566   13472 round_trippers.go:577] Response Headers:
	I0501 03:52:41.837647   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:52:41.837647   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:52:41 GMT
	I0501 03:52:41.837647   13472 round_trippers.go:580]     Audit-Id: cce59182-ce8f-4861-b76e-ce072acabfdc
	I0501 03:52:41.837692   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:52:41.837692   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:52:41.837692   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:52:41.838457   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"378","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0501 03:52:41.838827   13472 pod_ready.go:92] pod "etcd-multinode-289800" in "kube-system" namespace has status "Ready":"True"
	I0501 03:52:41.838827   13472 pod_ready.go:81] duration metric: took 10.3074ms for pod "etcd-multinode-289800" in "kube-system" namespace to be "Ready" ...
	I0501 03:52:41.838827   13472 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-289800" in "kube-system" namespace to be "Ready" ...
	I0501 03:52:41.838827   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-289800
	I0501 03:52:41.838827   13472 round_trippers.go:469] Request Headers:
	I0501 03:52:41.838827   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:52:41.838827   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:52:41.841729   13472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 03:52:41.841729   13472 round_trippers.go:577] Response Headers:
	I0501 03:52:41.841729   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:52:41.841729   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:52:41.841729   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:52:41.841729   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:52:41 GMT
	I0501 03:52:41.841729   13472 round_trippers.go:580]     Audit-Id: b948fb0a-3571-445f-bff1-b512610298e0
	I0501 03:52:41.841729   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:52:41.842613   13472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-289800","namespace":"kube-system","uid":"a1b99f2b-8aed-4037-956a-13bde4551a72","resourceVersion":"311","creationTimestamp":"2024-05-01T03:52:15Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.28.209.152:8443","kubernetes.io/config.hash":"fc7b6f2a7c826774b66af910f598e965","kubernetes.io/config.mirror":"fc7b6f2a7c826774b66af910f598e965","kubernetes.io/config.seen":"2024-05-01T03:52:15.688762545Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7704 chars]
	I0501 03:52:41.843239   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800
	I0501 03:52:41.843239   13472 round_trippers.go:469] Request Headers:
	I0501 03:52:41.843239   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:52:41.843239   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:52:41.846211   13472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 03:52:41.846211   13472 round_trippers.go:577] Response Headers:
	I0501 03:52:41.846211   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:52:41.846211   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:52:41 GMT
	I0501 03:52:41.846211   13472 round_trippers.go:580]     Audit-Id: 5e27d05c-b9fd-4883-92da-8d8d02c3a9d9
	I0501 03:52:41.846211   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:52:41.846211   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:52:41.846211   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:52:41.846660   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"378","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0501 03:52:41.846831   13472 pod_ready.go:92] pod "kube-apiserver-multinode-289800" in "kube-system" namespace has status "Ready":"True"
	I0501 03:52:41.846831   13472 pod_ready.go:81] duration metric: took 8.003ms for pod "kube-apiserver-multinode-289800" in "kube-system" namespace to be "Ready" ...
	I0501 03:52:41.846831   13472 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-289800" in "kube-system" namespace to be "Ready" ...
	I0501 03:52:41.847378   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-289800
	I0501 03:52:41.847378   13472 round_trippers.go:469] Request Headers:
	I0501 03:52:41.847378   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:52:41.847378   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:52:41.850730   13472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 03:52:41.850730   13472 round_trippers.go:577] Response Headers:
	I0501 03:52:41.850966   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:52:41.850966   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:52:41 GMT
	I0501 03:52:41.850966   13472 round_trippers.go:580]     Audit-Id: 63978684-21b1-479f-a713-b5fd0eda074f
	I0501 03:52:41.850966   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:52:41.850966   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:52:41.850966   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:52:41.851771   13472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-289800","namespace":"kube-system","uid":"fd3e5c6f-55cb-47c8-b0bc-c9b0dbe3b318","resourceVersion":"283","creationTimestamp":"2024-05-01T03:52:15Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a17001fd2508d58fea9b1ae465b65254","kubernetes.io/config.mirror":"a17001fd2508d58fea9b1ae465b65254","kubernetes.io/config.seen":"2024-05-01T03:52:15.688763845Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7269 chars]
	I0501 03:52:41.851995   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800
	I0501 03:52:41.851995   13472 round_trippers.go:469] Request Headers:
	I0501 03:52:41.851995   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:52:41.851995   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:52:41.853567   13472 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0501 03:52:41.853567   13472 round_trippers.go:577] Response Headers:
	I0501 03:52:41.853567   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:52:41.853567   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:52:41 GMT
	I0501 03:52:41.854423   13472 round_trippers.go:580]     Audit-Id: 2b44a1aa-4b77-4587-8ab7-59cb67425b61
	I0501 03:52:41.854423   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:52:41.854423   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:52:41.854423   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:52:41.854423   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"378","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0501 03:52:41.855195   13472 pod_ready.go:92] pod "kube-controller-manager-multinode-289800" in "kube-system" namespace has status "Ready":"True"
	I0501 03:52:41.855235   13472 pod_ready.go:81] duration metric: took 8.4044ms for pod "kube-controller-manager-multinode-289800" in "kube-system" namespace to be "Ready" ...
	I0501 03:52:41.855286   13472 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bp9zx" in "kube-system" namespace to be "Ready" ...
	I0501 03:52:42.006565   13472 request.go:629] Waited for 150.9428ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.209.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bp9zx
	I0501 03:52:42.006677   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bp9zx
	I0501 03:52:42.006677   13472 round_trippers.go:469] Request Headers:
	I0501 03:52:42.006677   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:52:42.006677   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:52:42.011145   13472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 03:52:42.011145   13472 round_trippers.go:577] Response Headers:
	I0501 03:52:42.011145   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:52:42.011145   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:52:42.011145   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:52:42 GMT
	I0501 03:52:42.011145   13472 round_trippers.go:580]     Audit-Id: ff835f91-2379-46ea-8dd7-9fc665c99b25
	I0501 03:52:42.011145   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:52:42.011145   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:52:42.011855   13472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bp9zx","generateName":"kube-proxy-","namespace":"kube-system","uid":"aba82e50-b8f8-40b4-b08a-6d045314d6b6","resourceVersion":"356","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"342b26dc-6828-4478-b155-fee8821fc15e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"342b26dc-6828-4478-b155-fee8821fc15e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5833 chars]
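Annotation: the repeated "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's client-side token-bucket limiter, not from server-side API Priority and Fairness. A minimal sketch of where that limit lives, assuming a stock client-go setup rather than minikube's exact wiring (kubeconfig path and QPS/Burst values are illustrative):

    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // newThrottledClient builds a clientset whose requests queue once more
    // than Burst are in flight, refilling at QPS per second; the waits
    // logged by request.go above are exactly that queueing delay.
    func newThrottledClient(kubeconfig string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 5    // steady-state requests per second (illustrative)
        cfg.Burst = 10 // short bursts allowed above QPS (illustrative)
        return kubernetes.NewForConfig(cfg)
    }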
	I0501 03:52:42.207153   13472 request.go:629] Waited for 194.3061ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.209.152:8443/api/v1/nodes/multinode-289800
	I0501 03:52:42.207547   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800
	I0501 03:52:42.207547   13472 round_trippers.go:469] Request Headers:
	I0501 03:52:42.207547   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:52:42.207547   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:52:42.209968   13472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 03:52:42.209968   13472 round_trippers.go:577] Response Headers:
	I0501 03:52:42.209968   13472 round_trippers.go:580]     Audit-Id: 5b0c7e1f-29a2-47ed-9098-2b9524fb480a
	I0501 03:52:42.209968   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:52:42.209968   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:52:42.209968   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:52:42.209968   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:52:42.209968   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:52:42 GMT
	I0501 03:52:42.211107   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"378","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0501 03:52:42.211778   13472 pod_ready.go:92] pod "kube-proxy-bp9zx" in "kube-system" namespace has status "Ready":"True"
	I0501 03:52:42.211872   13472 pod_ready.go:81] duration metric: took 356.5841ms for pod "kube-proxy-bp9zx" in "kube-system" namespace to be "Ready" ...
	I0501 03:52:42.211872   13472 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-289800" in "kube-system" namespace to be "Ready" ...
	I0501 03:52:42.411669   13472 request.go:629] Waited for 199.4919ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.209.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-289800
	I0501 03:52:42.411852   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-289800
	I0501 03:52:42.411852   13472 round_trippers.go:469] Request Headers:
	I0501 03:52:42.411852   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:52:42.411852   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:52:42.415827   13472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 03:52:42.415827   13472 round_trippers.go:577] Response Headers:
	I0501 03:52:42.415827   13472 round_trippers.go:580]     Audit-Id: 0a868c64-3718-45da-a675-91df97dc105b
	I0501 03:52:42.415827   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:52:42.415827   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:52:42.415827   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:52:42.416469   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:52:42.416469   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:52:42 GMT
	I0501 03:52:42.416646   13472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-289800","namespace":"kube-system","uid":"c7518f03-993b-432f-b742-8805dd2167a7","resourceVersion":"280","creationTimestamp":"2024-05-01T03:52:15Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"44d7830a7c97b8c7e460c0508d02be4e","kubernetes.io/config.mirror":"44d7830a7c97b8c7e460c0508d02be4e","kubernetes.io/config.seen":"2024-05-01T03:52:15.688771544Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4999 chars]
	I0501 03:52:42.614117   13472 request.go:629] Waited for 196.7039ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.209.152:8443/api/v1/nodes/multinode-289800
	I0501 03:52:42.614486   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800
	I0501 03:52:42.614520   13472 round_trippers.go:469] Request Headers:
	I0501 03:52:42.614555   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:52:42.614587   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:52:42.618020   13472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 03:52:42.618020   13472 round_trippers.go:577] Response Headers:
	I0501 03:52:42.618020   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:52:42.618020   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:52:42.618020   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:52:42.618020   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:52:42.618020   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:52:42 GMT
	I0501 03:52:42.618020   13472 round_trippers.go:580]     Audit-Id: 87350a68-feb0-4ab2-b73c-8280b201f6e6
	I0501 03:52:42.619027   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"378","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","fi [truncated 4784 chars]
	I0501 03:52:42.619470   13472 pod_ready.go:92] pod "kube-scheduler-multinode-289800" in "kube-system" namespace has status "Ready":"True"
	I0501 03:52:42.619470   13472 pod_ready.go:81] duration metric: took 407.5947ms for pod "kube-scheduler-multinode-289800" in "kube-system" namespace to be "Ready" ...
	I0501 03:52:42.619470   13472 pod_ready.go:38] duration metric: took 2.8443953s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
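Annotation: each pod_ready.go block above repeats one pattern: GET the pod, GET its node, and declare success once the pod reports status "Ready":"True". That test reduces to scanning the pod's conditions for PodReady; a minimal sketch in plain client-go types (minikube's own helper also re-checks the node, which the sketch omits):

    package main

    import corev1 "k8s.io/api/core/v1"

    // isPodReady mirrors the log's `status "Ready":"True"` check: a pod is
    // Ready only when its PodReady condition reports ConditionTrue.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }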
	I0501 03:52:42.619577   13472 api_server.go:52] waiting for apiserver process to appear ...
	I0501 03:52:42.635779   13472 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 03:52:42.668045   13472 command_runner.go:130] > 2011
	I0501 03:52:42.668499   13472 api_server.go:72] duration metric: took 13.76063s to wait for apiserver process to appear ...
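Annotation: the process wait above is a single pgrep over SSH; a non-zero exit simply means the apiserver has not appeared yet. A sketch assuming a hypothetical sshRun helper in place of minikube's SSH command runner:

    // sshRun is hypothetical: run one command on the guest, return stdout.
    func apiServerPID(sshRun func(string) (string, error)) (string, error) {
        // pgrep prints the newest matching PID ("2011" above) and exits
        // non-zero when nothing matches, so callers retry on error.
        return sshRun("sudo pgrep -xnf kube-apiserver.*minikube.*")
    }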
	I0501 03:52:42.668581   13472 api_server.go:88] waiting for apiserver healthz status ...
	I0501 03:52:42.668581   13472 api_server.go:253] Checking apiserver healthz at https://172.28.209.152:8443/healthz ...
	I0501 03:52:42.676065   13472 api_server.go:279] https://172.28.209.152:8443/healthz returned 200:
	ok
	I0501 03:52:42.676164   13472 round_trippers.go:463] GET https://172.28.209.152:8443/version
	I0501 03:52:42.676245   13472 round_trippers.go:469] Request Headers:
	I0501 03:52:42.676245   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:52:42.676245   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:52:42.677045   13472 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0501 03:52:42.677045   13472 round_trippers.go:577] Response Headers:
	I0501 03:52:42.677045   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:52:42.677045   13472 round_trippers.go:580]     Content-Length: 263
	I0501 03:52:42.677045   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:52:42 GMT
	I0501 03:52:42.677045   13472 round_trippers.go:580]     Audit-Id: 7174c826-f675-45ec-a5e7-7f0c59a5cf8b
	I0501 03:52:42.677977   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:52:42.677977   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:52:42.677977   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:52:42.677977   13472 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0501 03:52:42.678102   13472 api_server.go:141] control plane version: v1.30.0
	I0501 03:52:42.678169   13472 api_server.go:131] duration metric: took 9.5887ms to wait for apiserver health ...
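Annotation: the healthz wait is a plain HTTPS GET that must return 200 with the literal body "ok", after which /version yields the control-plane version seen above. A sketch, assuming an *http.Client already carrying the cluster CA (the real code builds one from the minikube cert store):

    package main

    import (
        "io"
        "net/http"
    )

    // apiServerHealthy probes /healthz the way api_server.go does above.
    func apiServerHealthy(client *http.Client, host string) (bool, error) {
        resp, err := client.Get("https://" + host + ":8443/healthz")
        if err != nil {
            return false, err // connection refused: apiserver not up yet
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        if err != nil {
            return false, err
        }
        return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
    }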
	I0501 03:52:42.678169   13472 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 03:52:42.816481   13472 request.go:629] Waited for 138.0754ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.209.152:8443/api/v1/namespaces/kube-system/pods
	I0501 03:52:42.816668   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/namespaces/kube-system/pods
	I0501 03:52:42.816668   13472 round_trippers.go:469] Request Headers:
	I0501 03:52:42.816668   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:52:42.816668   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:52:42.821085   13472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 03:52:42.822076   13472 round_trippers.go:577] Response Headers:
	I0501 03:52:42.822076   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:52:42.822076   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:52:42 GMT
	I0501 03:52:42.822076   13472 round_trippers.go:580]     Audit-Id: 7ab601db-46a4-4a20-8521-999419653dcc
	I0501 03:52:42.822076   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:52:42.822076   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:52:42.822076   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:52:42.823150   13472 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"412"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"407","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 64072 chars]
	I0501 03:52:42.826358   13472 system_pods.go:59] 9 kube-system pods found
	I0501 03:52:42.826416   13472 system_pods.go:61] "coredns-7db6d8ff4d-8w9hq" [e3a349e9-97d8-4bba-8eac-deff1948600a] Running
	I0501 03:52:42.826416   13472 system_pods.go:61] "coredns-7db6d8ff4d-x9zrw" [0b91b14d-bed3-4889-b193-db53daccd395] Running
	I0501 03:52:42.826416   13472 system_pods.go:61] "etcd-multinode-289800" [96a8cf0b-45bc-4636-9264-a0da579b5fa8] Running
	I0501 03:52:42.826416   13472 system_pods.go:61] "kindnet-vcxkr" [72ef61d4-4437-40da-86e7-4d7eb386b6de] Running
	I0501 03:52:42.826416   13472 system_pods.go:61] "kube-apiserver-multinode-289800" [a1b99f2b-8aed-4037-956a-13bde4551a72] Running
	I0501 03:52:42.826416   13472 system_pods.go:61] "kube-controller-manager-multinode-289800" [fd3e5c6f-55cb-47c8-b0bc-c9b0dbe3b318] Running
	I0501 03:52:42.826416   13472 system_pods.go:61] "kube-proxy-bp9zx" [aba82e50-b8f8-40b4-b08a-6d045314d6b6] Running
	I0501 03:52:42.826416   13472 system_pods.go:61] "kube-scheduler-multinode-289800" [c7518f03-993b-432f-b742-8805dd2167a7] Running
	I0501 03:52:42.826416   13472 system_pods.go:61] "storage-provisioner" [b8d2a827-d9a6-419a-a076-c7695a16a2b5] Running
	I0501 03:52:42.826416   13472 system_pods.go:74] duration metric: took 148.2456ms to wait for pod list to return data ...
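Annotation: the "9 kube-system pods found" listing is one List call against the kube-system namespace plus a per-pod phase print. A compressed sketch, taking a clientset like the one in the throttling sketch above:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // listSystemPods reproduces the `"name" [uid] Running` lines above.
    func listSystemPods(ctx context.Context, cs *kubernetes.Clientset) error {
        pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
        }
        return nil
    }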
	I0501 03:52:42.826529   13472 default_sa.go:34] waiting for default service account to be created ...
	I0501 03:52:43.009947   13472 request.go:629] Waited for 183.1085ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.209.152:8443/api/v1/namespaces/default/serviceaccounts
	I0501 03:52:43.010148   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/namespaces/default/serviceaccounts
	I0501 03:52:43.010148   13472 round_trippers.go:469] Request Headers:
	I0501 03:52:43.010219   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:52:43.010219   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:52:43.014082   13472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 03:52:43.014082   13472 round_trippers.go:577] Response Headers:
	I0501 03:52:43.014082   13472 round_trippers.go:580]     Audit-Id: 50f5c490-d411-4668-a083-d6d0e42c77cb
	I0501 03:52:43.014082   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:52:43.014082   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:52:43.014082   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:52:43.015088   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:52:43.015088   13472 round_trippers.go:580]     Content-Length: 261
	I0501 03:52:43.015088   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:52:43 GMT
	I0501 03:52:43.015121   13472 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"412"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"b7dbf8d0-35c5-4373-a233-f0386cee7e97","resourceVersion":"307","creationTimestamp":"2024-05-01T03:52:28Z"}}]}
	I0501 03:52:43.015433   13472 default_sa.go:45] found service account: "default"
	I0501 03:52:43.015433   13472 default_sa.go:55] duration metric: took 188.9026ms for default service account to be created ...
	I0501 03:52:43.015433   13472 system_pods.go:116] waiting for k8s-apps to be running ...
	I0501 03:52:43.213477   13472 request.go:629] Waited for 197.8981ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.209.152:8443/api/v1/namespaces/kube-system/pods
	I0501 03:52:43.213790   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/namespaces/kube-system/pods
	I0501 03:52:43.213790   13472 round_trippers.go:469] Request Headers:
	I0501 03:52:43.213790   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:52:43.213790   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:52:43.219292   13472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 03:52:43.220238   13472 round_trippers.go:577] Response Headers:
	I0501 03:52:43.220280   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:52:43 GMT
	I0501 03:52:43.220280   13472 round_trippers.go:580]     Audit-Id: f4cc3cd8-ec53-4f86-8b0c-cf92eb66ffcd
	I0501 03:52:43.220280   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:52:43.220280   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:52:43.220354   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:52:43.220354   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:52:43.222301   13472 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"412"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"407","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 64072 chars]
	I0501 03:52:43.225331   13472 system_pods.go:86] 9 kube-system pods found
	I0501 03:52:43.225331   13472 system_pods.go:89] "coredns-7db6d8ff4d-8w9hq" [e3a349e9-97d8-4bba-8eac-deff1948600a] Running
	I0501 03:52:43.225331   13472 system_pods.go:89] "coredns-7db6d8ff4d-x9zrw" [0b91b14d-bed3-4889-b193-db53daccd395] Running
	I0501 03:52:43.225331   13472 system_pods.go:89] "etcd-multinode-289800" [96a8cf0b-45bc-4636-9264-a0da579b5fa8] Running
	I0501 03:52:43.225331   13472 system_pods.go:89] "kindnet-vcxkr" [72ef61d4-4437-40da-86e7-4d7eb386b6de] Running
	I0501 03:52:43.225331   13472 system_pods.go:89] "kube-apiserver-multinode-289800" [a1b99f2b-8aed-4037-956a-13bde4551a72] Running
	I0501 03:52:43.225331   13472 system_pods.go:89] "kube-controller-manager-multinode-289800" [fd3e5c6f-55cb-47c8-b0bc-c9b0dbe3b318] Running
	I0501 03:52:43.225331   13472 system_pods.go:89] "kube-proxy-bp9zx" [aba82e50-b8f8-40b4-b08a-6d045314d6b6] Running
	I0501 03:52:43.225331   13472 system_pods.go:89] "kube-scheduler-multinode-289800" [c7518f03-993b-432f-b742-8805dd2167a7] Running
	I0501 03:52:43.225331   13472 system_pods.go:89] "storage-provisioner" [b8d2a827-d9a6-419a-a076-c7695a16a2b5] Running
	I0501 03:52:43.225331   13472 system_pods.go:126] duration metric: took 209.8966ms to wait for k8s-apps to be running ...
	I0501 03:52:43.225331   13472 system_svc.go:44] waiting for kubelet service to be running ....
	I0501 03:52:43.238805   13472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:52:43.265719   13472 system_svc.go:56] duration metric: took 39.4681ms WaitForService to wait for kubelet
	I0501 03:52:43.265763   13472 kubeadm.go:576] duration metric: took 14.3578893s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 03:52:43.265794   13472 node_conditions.go:102] verifying NodePressure condition ...
	I0501 03:52:43.418501   13472 request.go:629] Waited for 152.3073ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.209.152:8443/api/v1/nodes
	I0501 03:52:43.418593   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes
	I0501 03:52:43.418593   13472 round_trippers.go:469] Request Headers:
	I0501 03:52:43.418593   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:52:43.418593   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:52:43.425605   13472 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 03:52:43.425605   13472 round_trippers.go:577] Response Headers:
	I0501 03:52:43.425605   13472 round_trippers.go:580]     Audit-Id: d8dd1ee0-a532-4035-aa1e-166d1bef2e59
	I0501 03:52:43.425605   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:52:43.425605   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:52:43.425605   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:52:43.425605   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:52:43.425605   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:52:43 GMT
	I0501 03:52:43.425605   13472 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"413"},"items":[{"metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"378","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4837 chars]
	I0501 03:52:43.426264   13472 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 03:52:43.426264   13472 node_conditions.go:123] node cpu capacity is 2
	I0501 03:52:43.426264   13472 node_conditions.go:105] duration metric: took 160.4685ms to run NodePressure ...
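Annotation: the NodePressure step reads its two figures straight off the NodeList: ephemeral-storage and cpu under Status.Capacity. A sketch with the standard corev1 resource names (error handling trimmed):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // logNodeCapacity prints the capacity values behind node_conditions.go.
    func logNodeCapacity(ctx context.Context, cs *kubernetes.Clientset) error {
        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("node %s: ephemeral=%s cpu=%s\n", n.Name, eph.String(), cpu.String())
        }
        return nil
    }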
	I0501 03:52:43.426264   13472 start.go:240] waiting for startup goroutines ...
	I0501 03:52:43.426264   13472 start.go:245] waiting for cluster config update ...
	I0501 03:52:43.426264   13472 start.go:254] writing updated cluster config ...
	I0501 03:52:43.432211   13472 out.go:177] 
	I0501 03:52:43.443606   13472 config.go:182] Loaded profile config "multinode-289800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 03:52:43.443606   13472 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\config.json ...
	I0501 03:52:43.448757   13472 out.go:177] * Starting "multinode-289800-m02" worker node in "multinode-289800" cluster
	I0501 03:52:43.452658   13472 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0501 03:52:43.452658   13472 cache.go:56] Caching tarball of preloaded images
	I0501 03:52:43.453341   13472 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0501 03:52:43.453502   13472 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0501 03:52:43.453542   13472 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\config.json ...
	I0501 03:52:43.459625   13472 start.go:360] acquireMachinesLock for multinode-289800-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 03:52:43.459625   13472 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-289800-m02"
	I0501 03:52:43.460252   13472 start.go:93] Provisioning new machine with config: &{Name:multinode-289800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.0 ClusterName:multinode-289800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.209.152 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0501 03:52:43.460252   13472 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0501 03:52:43.463195   13472 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0501 03:52:43.463195   13472 start.go:159] libmachine.API.Create for "multinode-289800" (driver="hyperv")
	I0501 03:52:43.463195   13472 client.go:168] LocalClient.Create starting
	I0501 03:52:43.463886   13472 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0501 03:52:43.463886   13472 main.go:141] libmachine: Decoding PEM data...
	I0501 03:52:43.463886   13472 main.go:141] libmachine: Parsing certificate...
	I0501 03:52:43.464594   13472 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0501 03:52:43.464594   13472 main.go:141] libmachine: Decoding PEM data...
	I0501 03:52:43.464594   13472 main.go:141] libmachine: Parsing certificate...
	I0501 03:52:43.464594   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0501 03:52:45.443767   13472 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0501 03:52:45.443767   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:52:45.443916   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0501 03:52:47.256770   13472 main.go:141] libmachine: [stdout =====>] : False
	
	I0501 03:52:47.256947   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:52:47.257020   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0501 03:52:48.829037   13472 main.go:141] libmachine: [stdout =====>] : True
	
	I0501 03:52:48.829037   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:52:48.829398   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0501 03:52:52.641370   13472 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0501 03:52:52.641370   13472 main.go:141] libmachine: [stderr =====>] : 
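Annotation: every Hyper-V step in this log is a fresh powershell.exe invocation with -NoProfile -NonInteractive, echoed back as [stdout]/[stderr] pairs. A minimal sketch of that runner pattern, assuming combined output is acceptable (the driver keeps the two streams separate):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // psRun executes one PowerShell snippet the way the driver does above.
    func psRun(script string) (string, error) {
        cmd := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script)
        out, err := cmd.CombinedOutput()
        return string(out), err
    }

    func main() {
        // The switch probe from the log, JSON-encoded for easy parsing.
        out, err := psRun(`ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType)`)
        fmt.Println(out, err)
    }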
	I0501 03:52:52.644029   13472 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso...
	I0501 03:52:53.214839   13472 main.go:141] libmachine: Creating SSH key...
	I0501 03:52:53.800208   13472 main.go:141] libmachine: Creating VM...
	I0501 03:52:53.800208   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0501 03:52:56.692766   13472 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0501 03:52:56.692766   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:52:56.692766   13472 main.go:141] libmachine: Using switch "Default Switch"
	I0501 03:52:56.692766   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0501 03:52:58.534736   13472 main.go:141] libmachine: [stdout =====>] : True
	
	I0501 03:52:58.534736   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:52:58.534736   13472 main.go:141] libmachine: Creating VHD
	I0501 03:52:58.534736   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-289800-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0501 03:53:02.245492   13472 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-289800-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : AC0791CA-8631-440F-9CF9-A625A0CC73B6
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0501 03:53:02.245492   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:53:02.245641   13472 main.go:141] libmachine: Writing magic tar header
	I0501 03:53:02.245641   13472 main.go:141] libmachine: Writing SSH key tar header
	I0501 03:53:02.255182   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-289800-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-289800-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0501 03:53:05.388524   13472 main.go:141] libmachine: [stdout =====>] : 
	I0501 03:53:05.388524   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:53:05.388715   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-289800-m02\disk.vhd' -SizeBytes 20000MB
	I0501 03:53:07.967572   13472 main.go:141] libmachine: [stdout =====>] : 
	I0501 03:53:07.968543   13472 main.go:141] libmachine: [stderr =====>] : 
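Annotation: the fixed.vhd detour above is how the driver injects the SSH key it just generated: create a tiny fixed-format VHD, write a tar stream containing the key at the start of the raw disk (the "magic tar header" lines), then convert to a dynamic VHD and resize to the requested 20000MB; the guest detects the tar signature on first boot and installs the key. A sketch of the tar-writing step only, with path and entry name illustrative:

    package main

    import (
        "archive/tar"
        "os"
    )

    // writeKeyTar writes a one-entry tar (the public key) into the raw
    // data region at byte 0 of a fixed VHD, as the driver does above.
    func writeKeyTar(vhdPath string, pubKey []byte) error {
        f, err := os.OpenFile(vhdPath, os.O_WRONLY, 0o644)
        if err != nil {
            return err
        }
        defer f.Close()
        tw := tar.NewWriter(f)
        hdr := &tar.Header{Name: ".ssh/authorized_keys", Mode: 0o644, Size: int64(len(pubKey))}
        if err := tw.WriteHeader(hdr); err != nil {
            return err
        }
        if _, err := tw.Write(pubKey); err != nil {
            return err
        }
        return tw.Close()
    }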
	I0501 03:53:07.968543   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-289800-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-289800-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0501 03:53:11.677898   13472 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-289800-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0501 03:53:11.678661   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:53:11.678661   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-289800-m02 -DynamicMemoryEnabled $false
	I0501 03:53:13.928672   13472 main.go:141] libmachine: [stdout =====>] : 
	I0501 03:53:13.929077   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:53:13.929077   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-289800-m02 -Count 2
	I0501 03:53:16.132627   13472 main.go:141] libmachine: [stdout =====>] : 
	I0501 03:53:16.132726   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:53:16.132726   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-289800-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-289800-m02\boot2docker.iso'
	I0501 03:53:18.778368   13472 main.go:141] libmachine: [stdout =====>] : 
	I0501 03:53:18.779460   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:53:18.779460   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-289800-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-289800-m02\disk.vhd'
	I0501 03:53:21.546078   13472 main.go:141] libmachine: [stdout =====>] : 
	I0501 03:53:21.546078   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:53:21.546078   13472 main.go:141] libmachine: Starting VM...
	I0501 03:53:21.546753   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-289800-m02
	I0501 03:53:24.632187   13472 main.go:141] libmachine: [stdout =====>] : 
	I0501 03:53:24.632187   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:53:24.632187   13472 main.go:141] libmachine: Waiting for host to start...
	I0501 03:53:24.633003   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 03:53:26.863176   13472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:53:26.863176   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:53:26.863176   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 03:53:29.403998   13472 main.go:141] libmachine: [stdout =====>] : 
	I0501 03:53:29.403998   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:53:30.410037   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 03:53:32.561642   13472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:53:32.562602   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:53:32.562602   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 03:53:35.099577   13472 main.go:141] libmachine: [stdout =====>] : 
	I0501 03:53:35.099577   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:53:36.103630   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 03:53:38.288893   13472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:53:38.288931   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:53:38.289099   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 03:53:40.895990   13472 main.go:141] libmachine: [stdout =====>] : 
	I0501 03:53:40.895990   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:53:41.911220   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 03:53:44.107775   13472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:53:44.107775   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:53:44.107775   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 03:53:46.611070   13472 main.go:141] libmachine: [stdout =====>] : 
	I0501 03:53:46.611070   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:53:47.622389   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 03:53:49.898934   13472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:53:49.898934   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:53:49.898934   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 03:53:52.719287   13472 main.go:141] libmachine: [stdout =====>] : 172.28.219.162
	
	I0501 03:53:52.719287   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:53:52.719548   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 03:53:54.804221   13472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:53:54.805278   13472 main.go:141] libmachine: [stderr =====>] : 
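For reference, the repeated ( Hyper-V\Get-VM ... ).state and ipaddresses[0] queries above form a plain poll-until-IP loop: check the VM state, then ask the first adapter for an address, and retry until one appears. A minimal Go sketch of that pattern follows; it assumes powershell.exe is on PATH, and the helper names are illustrative rather than minikube's actual API.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// psOutput runs one PowerShell expression and returns trimmed stdout.
func psOutput(expr string) (string, error) {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", expr).Output()
	return strings.TrimSpace(string(out)), err
}

// waitForIP polls VM state, then the first adapter address, until timeout.
func waitForIP(vmName string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		state, err := psOutput(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vmName))
		if err == nil && state == "Running" {
			ip, _ := psOutput(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vmName))
			if ip != "" {
				return ip, nil // e.g. 172.28.219.162 in the log above
			}
		}
		time.Sleep(time.Second) // no address reported yet; retry
	}
	return "", fmt.Errorf("timed out waiting for %s to report an IP", vmName)
}

func main() {
	ip, err := waitForIP("multinode-289800-m02", 5*time.Minute)
	fmt.Println(ip, err)
}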
	I0501 03:53:54.805278   13472 machine.go:94] provisionDockerMachine start ...
	I0501 03:53:54.805278   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 03:53:56.942628   13472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:53:56.942628   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:53:56.942628   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 03:53:59.472090   13472 main.go:141] libmachine: [stdout =====>] : 172.28.219.162
	
	I0501 03:53:59.472090   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:53:59.481488   13472 main.go:141] libmachine: Using SSH client type: native
	I0501 03:53:59.491736   13472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.219.162 22 <nil> <nil>}
	I0501 03:53:59.491736   13472 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 03:53:59.627922   13472 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0501 03:53:59.627922   13472 buildroot.go:166] provisioning hostname "multinode-289800-m02"
	I0501 03:53:59.628027   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 03:54:01.753821   13472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:54:01.753821   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:54:01.753821   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 03:54:04.351263   13472 main.go:141] libmachine: [stdout =====>] : 172.28.219.162
	
	I0501 03:54:04.351263   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:54:04.357004   13472 main.go:141] libmachine: Using SSH client type: native
	I0501 03:54:04.357849   13472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.219.162 22 <nil> <nil>}
	I0501 03:54:04.357849   13472 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-289800-m02 && echo "multinode-289800-m02" | sudo tee /etc/hostname
	I0501 03:54:04.518063   13472 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-289800-m02
	
	I0501 03:54:04.518195   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 03:54:06.619402   13472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:54:06.619402   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:54:06.619953   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 03:54:09.236555   13472 main.go:141] libmachine: [stdout =====>] : 172.28.219.162
	
	I0501 03:54:09.236748   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:54:09.244333   13472 main.go:141] libmachine: Using SSH client type: native
	I0501 03:54:09.244554   13472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.219.162 22 <nil> <nil>}
	I0501 03:54:09.244554   13472 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-289800-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-289800-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-289800-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 03:54:09.399452   13472 main.go:141] libmachine: SSH cmd err, output: <nil>: 
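The hostname provisioning above is two idempotent shell snippets rendered from the node name: set the hostname and /etc/hostname, then pin 127.0.1.1 to the new name in /etc/hosts so it resolves locally. A sketch of rendering those commands in Go; this mirrors the logged commands but is not minikube's code.

package provision

import "fmt"

// hostnameCmds renders the two commands seen above: set the hostname,
// then keep 127.0.1.1 pointing at the new name in /etc/hosts.
func hostnameCmds(name string) []string {
	hosts := fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, name)
	return []string{
		fmt.Sprintf("sudo hostname %[1]s && echo %[1]q | sudo tee /etc/hostname", name),
		hosts,
	}
}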
	I0501 03:54:09.399561   13472 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0501 03:54:09.399561   13472 buildroot.go:174] setting up certificates
	I0501 03:54:09.399695   13472 provision.go:84] configureAuth start
	I0501 03:54:09.399695   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 03:54:11.524300   13472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:54:11.524300   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:54:11.524300   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 03:54:14.047342   13472 main.go:141] libmachine: [stdout =====>] : 172.28.219.162
	
	I0501 03:54:14.047342   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:54:14.047342   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 03:54:16.162013   13472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:54:16.162013   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:54:16.162013   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 03:54:18.751882   13472 main.go:141] libmachine: [stdout =====>] : 172.28.219.162
	
	I0501 03:54:18.751882   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:54:18.751882   13472 provision.go:143] copyHostCerts
	I0501 03:54:18.752911   13472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0501 03:54:18.753215   13472 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0501 03:54:18.753215   13472 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0501 03:54:18.753752   13472 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0501 03:54:18.754666   13472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0501 03:54:18.755232   13472 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0501 03:54:18.755408   13472 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0501 03:54:18.755440   13472 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0501 03:54:18.756717   13472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0501 03:54:18.756945   13472 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0501 03:54:18.756945   13472 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0501 03:54:18.758331   13472 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0501 03:54:18.759843   13472 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-289800-m02 san=[127.0.0.1 172.28.219.162 localhost minikube multinode-289800-m02]
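The server cert above is issued from the shared CA with both IP and DNS SANs (127.0.0.1, 172.28.219.162, localhost, minikube, multinode-289800-m02), which is what lets the Docker TLS endpoint be verified by address or name. A compact Go sketch of that issuance using crypto/x509; the key size, lifetime, and helper signature are illustrative assumptions, not minikube's exact code.

package certs

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// signServerCert issues a server cert signed by an existing CA, splitting
// the SAN list into IP and DNS entries as in the san=[...] log line above.
func signServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, sans []string) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-289800-m02"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	for _, s := range sans {
		if ip := net.ParseIP(s); ip != nil {
			tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
		} else {
			tmpl.DNSNames = append(tmpl.DNSNames, s)
		}
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	return der, key, err
}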
	I0501 03:54:18.942289   13472 provision.go:177] copyRemoteCerts
	I0501 03:54:18.957118   13472 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 03:54:18.957201   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 03:54:21.074571   13472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:54:21.075346   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:54:21.075346   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 03:54:23.679032   13472 main.go:141] libmachine: [stdout =====>] : 172.28.219.162
	
	I0501 03:54:23.679032   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:54:23.680637   13472 sshutil.go:53] new ssh client: &{IP:172.28.219.162 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-289800-m02\id_rsa Username:docker}
	I0501 03:54:23.788343   13472 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8311891s)
	I0501 03:54:23.788427   13472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0501 03:54:23.788962   13472 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 03:54:23.840561   13472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0501 03:54:23.840561   13472 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0501 03:54:23.891948   13472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0501 03:54:23.892961   13472 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0501 03:54:23.949005   13472 provision.go:87] duration metric: took 14.5492007s to configureAuth
	I0501 03:54:23.949005   13472 buildroot.go:189] setting minikube options for container-runtime
	I0501 03:54:23.949577   13472 config.go:182] Loaded profile config "multinode-289800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 03:54:23.949734   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 03:54:26.072199   13472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:54:26.072424   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:54:26.072530   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 03:54:28.624370   13472 main.go:141] libmachine: [stdout =====>] : 172.28.219.162
	
	I0501 03:54:28.624370   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:54:28.630905   13472 main.go:141] libmachine: Using SSH client type: native
	I0501 03:54:28.631541   13472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.219.162 22 <nil> <nil>}
	I0501 03:54:28.631673   13472 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0501 03:54:28.770361   13472 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0501 03:54:28.770361   13472 buildroot.go:70] root file system type: tmpfs
	I0501 03:54:28.770361   13472 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0501 03:54:28.770361   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 03:54:30.892101   13472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:54:30.892101   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:54:30.892101   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 03:54:33.503293   13472 main.go:141] libmachine: [stdout =====>] : 172.28.219.162
	
	I0501 03:54:33.503343   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:54:33.509184   13472 main.go:141] libmachine: Using SSH client type: native
	I0501 03:54:33.509511   13472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.219.162 22 <nil> <nil>}
	I0501 03:54:33.509511   13472 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.28.209.152"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0501 03:54:33.670746   13472 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.28.209.152
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0501 03:54:33.671988   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 03:54:35.781787   13472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:54:35.781787   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:54:35.782784   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 03:54:38.322545   13472 main.go:141] libmachine: [stdout =====>] : 172.28.219.162
	
	I0501 03:54:38.322545   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:54:38.332840   13472 main.go:141] libmachine: Using SSH client type: native
	I0501 03:54:38.333469   13472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.219.162 22 <nil> <nil>}
	I0501 03:54:38.333469   13472 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0501 03:54:40.583961   13472 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0501 03:54:40.584005   13472 machine.go:97] duration metric: took 45.7783835s to provisionDockerMachine
	I0501 03:54:40.584005   13472 client.go:171] duration metric: took 1m57.1199312s to LocalClient.Create
	I0501 03:54:40.584005   13472 start.go:167] duration metric: took 1m57.1199312s to libmachine.API.Create "multinode-289800"
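The diff-or-replace one-liner a few lines up keeps the unit update idempotent: the rendered file is uploaded as docker.service.new, and Docker is only moved into place, enabled, and restarted when the content actually differs (here the unit did not exist yet, hence the diff error followed by the fresh symlink). A sketch of rendering that guard in Go; illustrative, not minikube's code.

package provision

import "fmt"

// updateUnitCmd renders the guard used above: swap in the .new file and
// restart the service only when it differs from the installed unit.
func updateUnitCmd(unit string) string {
	t := "/lib/systemd/system/" + unit
	return fmt.Sprintf("sudo diff -u %[1]s %[1]s.new || "+
		"{ sudo mv %[1]s.new %[1]s; sudo systemctl -f daemon-reload && "+
		"sudo systemctl -f enable %[2]s && sudo systemctl -f restart %[2]s; }", t, unit)
}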
	I0501 03:54:40.584005   13472 start.go:293] postStartSetup for "multinode-289800-m02" (driver="hyperv")
	I0501 03:54:40.584120   13472 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 03:54:40.598037   13472 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 03:54:40.598037   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 03:54:42.694931   13472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:54:42.695600   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:54:42.695600   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 03:54:45.250073   13472 main.go:141] libmachine: [stdout =====>] : 172.28.219.162
	
	I0501 03:54:45.250073   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:54:45.251059   13472 sshutil.go:53] new ssh client: &{IP:172.28.219.162 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-289800-m02\id_rsa Username:docker}
	I0501 03:54:45.366884   13472 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7688118s)
	I0501 03:54:45.382188   13472 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 03:54:45.388352   13472 command_runner.go:130] > NAME=Buildroot
	I0501 03:54:45.388704   13472 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0501 03:54:45.388704   13472 command_runner.go:130] > ID=buildroot
	I0501 03:54:45.388704   13472 command_runner.go:130] > VERSION_ID=2023.02.9
	I0501 03:54:45.388704   13472 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0501 03:54:45.388819   13472 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 03:54:45.388870   13472 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0501 03:54:45.389267   13472 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0501 03:54:45.390651   13472 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> 142882.pem in /etc/ssl/certs
	I0501 03:54:45.390651   13472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /etc/ssl/certs/142882.pem
	I0501 03:54:45.404066   13472 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 03:54:45.423143   13472 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /etc/ssl/certs/142882.pem (1708 bytes)
	I0501 03:54:45.475252   13472 start.go:296] duration metric: took 4.8910949s for postStartSetup
	I0501 03:54:45.478250   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 03:54:47.610374   13472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:54:47.611372   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:54:47.611673   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 03:54:50.237899   13472 main.go:141] libmachine: [stdout =====>] : 172.28.219.162
	
	I0501 03:54:50.237899   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:54:50.238301   13472 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\config.json ...
	I0501 03:54:50.240671   13472 start.go:128] duration metric: took 2m6.7794686s to createHost
	I0501 03:54:50.240671   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 03:54:52.483039   13472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:54:52.483039   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:54:52.483315   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 03:54:55.049029   13472 main.go:141] libmachine: [stdout =====>] : 172.28.219.162
	
	I0501 03:54:55.050041   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:54:55.056618   13472 main.go:141] libmachine: Using SSH client type: native
	I0501 03:54:55.057788   13472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.219.162 22 <nil> <nil>}
	I0501 03:54:55.057788   13472 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0501 03:54:55.205202   13472 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714535695.201611572
	
	I0501 03:54:55.205202   13472 fix.go:216] guest clock: 1714535695.201611572
	I0501 03:54:55.205202   13472 fix.go:229] Guest: 2024-05-01 03:54:55.201611572 +0000 UTC Remote: 2024-05-01 03:54:50.2406717 +0000 UTC m=+342.529244601 (delta=4.960939872s)
	I0501 03:54:55.205202   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 03:54:57.329377   13472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:54:57.329377   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:54:57.329619   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 03:54:59.928408   13472 main.go:141] libmachine: [stdout =====>] : 172.28.219.162
	
	I0501 03:54:59.928408   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:54:59.934199   13472 main.go:141] libmachine: Using SSH client type: native
	I0501 03:54:59.934814   13472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.219.162 22 <nil> <nil>}
	I0501 03:54:59.934814   13472 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714535695
	I0501 03:55:00.095252   13472 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed May  1 03:54:55 UTC 2024
	
	I0501 03:55:00.095332   13472 fix.go:236] clock set: Wed May  1 03:54:55 UTC 2024
	 (err=<nil>)
	I0501 03:55:00.095332   13472 start.go:83] releasing machines lock for "multinode-289800-m02", held for 2m16.634682s
	I0501 03:55:00.095520   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 03:55:02.239393   13472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:55:02.239465   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:55:02.239570   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 03:55:04.766399   13472 main.go:141] libmachine: [stdout =====>] : 172.28.219.162
	
	I0501 03:55:04.767212   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:55:04.770190   13472 out.go:177] * Found network options:
	I0501 03:55:04.772968   13472 out.go:177]   - NO_PROXY=172.28.209.152
	W0501 03:55:04.778434   13472 proxy.go:119] fail to check proxy env: Error ip not in block
	I0501 03:55:04.781058   13472 out.go:177]   - NO_PROXY=172.28.209.152
	W0501 03:55:04.785364   13472 proxy.go:119] fail to check proxy env: Error ip not in block
	W0501 03:55:04.787049   13472 proxy.go:119] fail to check proxy env: Error ip not in block
	I0501 03:55:04.790043   13472 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 03:55:04.790288   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 03:55:04.802222   13472 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0501 03:55:04.803219   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 03:55:06.956080   13472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:55:06.956147   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:55:06.956147   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 03:55:06.986862   13472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:55:06.986862   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:55:06.987561   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 03:55:09.659715   13472 main.go:141] libmachine: [stdout =====>] : 172.28.219.162
	
	I0501 03:55:09.660481   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:55:09.660905   13472 sshutil.go:53] new ssh client: &{IP:172.28.219.162 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-289800-m02\id_rsa Username:docker}
	I0501 03:55:09.679793   13472 main.go:141] libmachine: [stdout =====>] : 172.28.219.162
	
	I0501 03:55:09.679917   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:55:09.680435   13472 sshutil.go:53] new ssh client: &{IP:172.28.219.162 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-289800-m02\id_rsa Username:docker}
	I0501 03:55:09.897573   13472 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0501 03:55:09.897648   13472 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0501 03:55:09.897648   13472 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.0943908s)
	I0501 03:55:09.897648   13472 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1074348s)
	W0501 03:55:09.897648   13472 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 03:55:09.912312   13472 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 03:55:09.944260   13472 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0501 03:55:09.944260   13472 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
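The find/mv step above disables the bridge and podman CNI configs by renaming them with a .mk_disabled suffix, so the runtime's config discovery skips them while the files stay recoverable. A local-filesystem Go sketch of the same rename; the real step runs remotely over SSH.

package cni

import (
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNI renames bridge/podman configs in dir to *.mk_disabled
// and returns the paths it disabled.
func disableBridgeCNI(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if strings.HasSuffix(name, ".mk_disabled") {
			continue // already disabled
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}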
	I0501 03:55:09.944260   13472 start.go:494] detecting cgroup driver to use...
	I0501 03:55:09.944945   13472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 03:55:09.981974   13472 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0501 03:55:09.991986   13472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0501 03:55:10.034063   13472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0501 03:55:10.060374   13472 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0501 03:55:10.073237   13472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0501 03:55:10.112009   13472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 03:55:10.146849   13472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0501 03:55:10.180445   13472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 03:55:10.216704   13472 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 03:55:10.249173   13472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0501 03:55:10.282085   13472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0501 03:55:10.317004   13472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0501 03:55:10.350850   13472 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 03:55:10.370254   13472 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0501 03:55:10.387818   13472 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 03:55:10.422830   13472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:55:10.655770   13472 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0501 03:55:10.691914   13472 start.go:494] detecting cgroup driver to use...
	I0501 03:55:10.703884   13472 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0501 03:55:10.734797   13472 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0501 03:55:10.734797   13472 command_runner.go:130] > [Unit]
	I0501 03:55:10.734797   13472 command_runner.go:130] > Description=Docker Application Container Engine
	I0501 03:55:10.734797   13472 command_runner.go:130] > Documentation=https://docs.docker.com
	I0501 03:55:10.734797   13472 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0501 03:55:10.734797   13472 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0501 03:55:10.734797   13472 command_runner.go:130] > StartLimitBurst=3
	I0501 03:55:10.734797   13472 command_runner.go:130] > StartLimitIntervalSec=60
	I0501 03:55:10.734797   13472 command_runner.go:130] > [Service]
	I0501 03:55:10.734797   13472 command_runner.go:130] > Type=notify
	I0501 03:55:10.734797   13472 command_runner.go:130] > Restart=on-failure
	I0501 03:55:10.734797   13472 command_runner.go:130] > Environment=NO_PROXY=172.28.209.152
	I0501 03:55:10.734797   13472 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0501 03:55:10.734797   13472 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0501 03:55:10.734797   13472 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0501 03:55:10.734797   13472 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0501 03:55:10.734797   13472 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0501 03:55:10.734797   13472 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0501 03:55:10.734797   13472 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0501 03:55:10.734797   13472 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0501 03:55:10.734797   13472 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0501 03:55:10.734797   13472 command_runner.go:130] > ExecStart=
	I0501 03:55:10.734797   13472 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0501 03:55:10.734797   13472 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0501 03:55:10.734797   13472 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0501 03:55:10.734797   13472 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0501 03:55:10.734797   13472 command_runner.go:130] > LimitNOFILE=infinity
	I0501 03:55:10.734797   13472 command_runner.go:130] > LimitNPROC=infinity
	I0501 03:55:10.734797   13472 command_runner.go:130] > LimitCORE=infinity
	I0501 03:55:10.734797   13472 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0501 03:55:10.734797   13472 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0501 03:55:10.734797   13472 command_runner.go:130] > TasksMax=infinity
	I0501 03:55:10.734797   13472 command_runner.go:130] > TimeoutStartSec=0
	I0501 03:55:10.734797   13472 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0501 03:55:10.734797   13472 command_runner.go:130] > Delegate=yes
	I0501 03:55:10.734797   13472 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0501 03:55:10.734797   13472 command_runner.go:130] > KillMode=process
	I0501 03:55:10.734797   13472 command_runner.go:130] > [Install]
	I0501 03:55:10.735351   13472 command_runner.go:130] > WantedBy=multi-user.target
	I0501 03:55:10.748412   13472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 03:55:10.783417   13472 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 03:55:10.831768   13472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 03:55:10.869772   13472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 03:55:10.906766   13472 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0501 03:55:10.971241   13472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 03:55:10.995808   13472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 03:55:11.034515   13472 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0501 03:55:11.048088   13472 ssh_runner.go:195] Run: which cri-dockerd
	I0501 03:55:11.054616   13472 command_runner.go:130] > /usr/bin/cri-dockerd
	I0501 03:55:11.068163   13472 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0501 03:55:11.086394   13472 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
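The "scp memory -->" lines denote streaming an in-memory asset to the guest rather than copying a local file. A sketch of one way to do that over an existing golang.org/x/crypto/ssh client; the transport choice is an assumption, and the client setup is omitted.

package assets

import (
	"bytes"
	"fmt"

	"golang.org/x/crypto/ssh"
)

// writeRemoteFile streams data to path on the guest via sudo tee.
func writeRemoteFile(client *ssh.Client, path string, data []byte) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	// tee writes stdin to the target; >/dev/null keeps stdout quiet.
	return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", path))
}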
	I0501 03:55:11.133823   13472 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0501 03:55:11.344485   13472 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0501 03:55:11.544887   13472 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0501 03:55:11.544958   13472 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0501 03:55:11.590370   13472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:55:11.791410   13472 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0501 03:55:14.353887   13472 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5624583s)
	I0501 03:55:14.363987   13472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0501 03:55:14.408544   13472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0501 03:55:14.450276   13472 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0501 03:55:14.667889   13472 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0501 03:55:14.894100   13472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:55:15.113093   13472 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0501 03:55:15.160170   13472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0501 03:55:15.198663   13472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:55:15.453528   13472 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0501 03:55:15.577649   13472 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0501 03:55:15.591070   13472 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0501 03:55:15.600856   13472 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0501 03:55:15.600856   13472 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0501 03:55:15.600856   13472 command_runner.go:130] > Device: 0,22	Inode: 900         Links: 1
	I0501 03:55:15.600856   13472 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0501 03:55:15.600856   13472 command_runner.go:130] > Access: 2024-05-01 03:55:15.478651173 +0000
	I0501 03:55:15.600856   13472 command_runner.go:130] > Modify: 2024-05-01 03:55:15.478651173 +0000
	I0501 03:55:15.600856   13472 command_runner.go:130] > Change: 2024-05-01 03:55:15.486651196 +0000
	I0501 03:55:15.600856   13472 command_runner.go:130] >  Birth: -
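The 60s socket wait above is a simple retry on stat until the CRI socket appears. A Go sketch of that loop, with a hypothetical run helper standing in for the SSH runner.

package runner

import (
	"fmt"
	"time"
)

// waitForSocket retries `stat path` on the guest until it succeeds
// or the deadline passes.
func waitForSocket(run func(cmd string) error, path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := run("stat " + path); err == nil {
			return nil // socket is present
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}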
	I0501 03:55:15.600856   13472 start.go:562] Will wait 60s for crictl version
	I0501 03:55:15.614178   13472 ssh_runner.go:195] Run: which crictl
	I0501 03:55:15.622435   13472 command_runner.go:130] > /usr/bin/crictl
	I0501 03:55:15.636176   13472 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 03:55:15.691667   13472 command_runner.go:130] > Version:  0.1.0
	I0501 03:55:15.691667   13472 command_runner.go:130] > RuntimeName:  docker
	I0501 03:55:15.691667   13472 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0501 03:55:15.691667   13472 command_runner.go:130] > RuntimeApiVersion:  v1
	I0501 03:55:15.691667   13472 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0501 03:55:15.701672   13472 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0501 03:55:15.735270   13472 command_runner.go:130] > 26.0.2
	I0501 03:55:15.746303   13472 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0501 03:55:15.778535   13472 command_runner.go:130] > 26.0.2
	I0501 03:55:15.784426   13472 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0501 03:55:15.787987   13472 out.go:177]   - env NO_PROXY=172.28.209.152
	I0501 03:55:15.789796   13472 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0501 03:55:15.794926   13472 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0501 03:55:15.794926   13472 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0501 03:55:15.794926   13472 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0501 03:55:15.794926   13472 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:88:d7:f1 Flags:up|broadcast|multicast|running}
	I0501 03:55:15.797786   13472 ip.go:210] interface addr: fe80::916c:67e8:6e10:a19b/64
	I0501 03:55:15.797786   13472 ip.go:210] interface addr: 172.28.208.1/20
	I0501 03:55:15.809804   13472 ssh_runner.go:195] Run: grep 172.28.208.1	host.minikube.internal$ /etc/hosts
	I0501 03:55:15.816782   13472 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 03:55:15.842114   13472 mustload.go:65] Loading cluster: multinode-289800
	I0501 03:55:15.842566   13472 config.go:182] Loaded profile config "multinode-289800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 03:55:15.843348   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 03:55:17.984240   13472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:55:17.984240   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:55:17.984240   13472 host.go:66] Checking if "multinode-289800" exists ...
	I0501 03:55:17.985209   13472 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800 for IP: 172.28.219.162
	I0501 03:55:17.985209   13472 certs.go:194] generating shared ca certs ...
	I0501 03:55:17.985274   13472 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 03:55:17.985928   13472 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0501 03:55:17.986293   13472 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0501 03:55:17.986403   13472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0501 03:55:17.986853   13472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0501 03:55:17.986853   13472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0501 03:55:17.987133   13472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0501 03:55:17.987804   13472 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem (1338 bytes)
	W0501 03:55:17.988115   13472 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288_empty.pem, impossibly tiny 0 bytes
	I0501 03:55:17.988115   13472 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0501 03:55:17.988115   13472 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0501 03:55:17.988702   13472 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0501 03:55:17.988895   13472 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0501 03:55:17.989625   13472 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem (1708 bytes)
	I0501 03:55:17.989952   13472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /usr/share/ca-certificates/142882.pem
	I0501 03:55:17.990082   13472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:55:17.990082   13472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem -> /usr/share/ca-certificates/14288.pem
	I0501 03:55:17.990082   13472 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 03:55:18.044523   13472 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 03:55:18.091729   13472 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 03:55:18.138308   13472 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0501 03:55:18.188800   13472 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /usr/share/ca-certificates/142882.pem (1708 bytes)
	I0501 03:55:18.238623   13472 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 03:55:18.287233   13472 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem --> /usr/share/ca-certificates/14288.pem (1338 bytes)
	I0501 03:55:18.351196   13472 ssh_runner.go:195] Run: openssl version
	I0501 03:55:18.360885   13472 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0501 03:55:18.375012   13472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14288.pem && ln -fs /usr/share/ca-certificates/14288.pem /etc/ssl/certs/14288.pem"
	I0501 03:55:18.410519   13472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14288.pem
	I0501 03:55:18.419657   13472 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May  1 02:27 /usr/share/ca-certificates/14288.pem
	I0501 03:55:18.419744   13472 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:27 /usr/share/ca-certificates/14288.pem
	I0501 03:55:18.433270   13472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14288.pem
	I0501 03:55:18.442231   13472 command_runner.go:130] > 51391683
	I0501 03:55:18.456387   13472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14288.pem /etc/ssl/certs/51391683.0"
	I0501 03:55:18.492099   13472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142882.pem && ln -fs /usr/share/ca-certificates/142882.pem /etc/ssl/certs/142882.pem"
	I0501 03:55:18.531510   13472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142882.pem
	I0501 03:55:18.538790   13472 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May  1 02:27 /usr/share/ca-certificates/142882.pem
	I0501 03:55:18.538890   13472 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:27 /usr/share/ca-certificates/142882.pem
	I0501 03:55:18.552045   13472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142882.pem
	I0501 03:55:18.563580   13472 command_runner.go:130] > 3ec20f2e
	I0501 03:55:18.575963   13472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142882.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 03:55:18.610281   13472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 03:55:18.645472   13472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:55:18.651474   13472 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May  1 02:12 /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:55:18.652545   13472 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:12 /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:55:18.665805   13472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 03:55:18.674849   13472 command_runner.go:130] > b5213941
	I0501 03:55:18.687804   13472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
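The openssl x509 -hash calls above (yielding 51391683, 3ec20f2e, b5213941) compute OpenSSL's subject hash, and each CA is then linked into /etc/ssl/certs as <hash>.0, the layout OpenSSL uses to locate trusted roots. A local Go sketch of that hash-and-link step; the real commands run on the guest over SSH.

package trust

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert computes the subject hash of a PEM cert and symlinks it
// into /etc/ssl/certs as <hash>.0.
func linkCACert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link
	return os.Symlink(pem, link)
}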
	I0501 03:55:18.723170   13472 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 03:55:18.729091   13472 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0501 03:55:18.729091   13472 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0501 03:55:18.729980   13472 kubeadm.go:928] updating node {m02 172.28.219.162 8443 v1.30.0 docker false true} ...
	I0501 03:55:18.729980   13472 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-289800-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.219.162
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-289800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 03:55:18.742970   13472 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 03:55:18.762573   13472 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	I0501 03:55:18.763596   13472 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0501 03:55:18.780107   13472 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0501 03:55:18.800861   13472 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0501 03:55:18.800861   13472 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256
	I0501 03:55:18.801386   13472 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256
	I0501 03:55:18.801538   13472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0501 03:55:18.801735   13472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0501 03:55:18.820759   13472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:55:18.850386   13472 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0501 03:55:18.881341   13472 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0501 03:55:18.888350   13472 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0501 03:55:18.889166   13472 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0501 03:55:18.889166   13472 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
	I0501 03:55:18.935289   13472 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0501 03:55:18.936364   13472 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0501 03:55:19.019039   13472 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0501 03:55:19.019039   13472 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0501 03:55:19.019039   13472 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0501 03:55:19.019779   13472 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0501 03:55:19.019906   13472 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0501 03:55:19.019906   13472 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
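
Note: the `?checksum=file:<url>.sha256` query strings above tell the downloader to verify each binary against its published digest before it is cached locally and copied to the node. A self-contained Go sketch of that verify-before-install step, using plain net/http and crypto/sha256 as a stand-in for minikube's actual download package:

// Sketch: download a binary and its .sha256 sidecar, then compare digests.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch is a hypothetical helper that downloads a URL fully into memory.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	// URL taken from the log above; the .sha256 sidecar holds the expected digest.
	base := "https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl"
	bin, err := fetch(base)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	sidecar, err := fetch(base + ".sha256")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	want := strings.Fields(string(sidecar))[0] // sidecar may be "<hex>" or "<hex>  <name>"
	sum := sha256.Sum256(bin)
	if got := hex.EncodeToString(sum[:]); got != want {
		fmt.Fprintf(os.Stderr, "checksum mismatch: got %s want %s\n", got, want)
		os.Exit(1)
	}
	fmt.Println("kubectl verified,", len(bin), "bytes")
}
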
	I0501 03:55:20.348774   13472 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0501 03:55:20.368818   13472 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0501 03:55:20.408881   13472 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
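
Note: "scp memory" above means the two unit files are rendered in-process and streamed over SSH rather than copied from disk. The paired ExecStart= lines in the drop-in shown earlier are deliberate: an empty `ExecStart=` is the systemd convention for clearing the base unit's command before a drop-in redefines it. A minimal local sketch of writing such a drop-in (hypothetical standalone program, not minikube's code; the unit content mirrors the log):

// Sketch: render the kubelet drop-in in memory and write it under
// /etc/systemd/system/kubelet.service.d, as the "scp memory" step does.
package main

import "os"

// Drop-in content mirrors the kubelet flags shown in the log above.
const dropIn = `[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-289800-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.219.162

[Install]
`

func main() {
	dir := "/etc/systemd/system/kubelet.service.d"
	if err := os.MkdirAll(dir, 0o755); err != nil {
		panic(err)
	}
	// After writing, `systemctl daemon-reload` picks the drop-in up,
	// matching the daemon-reload step a few lines later in the log.
	if err := os.WriteFile(dir+"/10-kubeadm.conf", []byte(dropIn), 0o644); err != nil {
		panic(err)
	}
}
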
	I0501 03:55:20.467705   13472 ssh_runner.go:195] Run: grep 172.28.209.152	control-plane.minikube.internal$ /etc/hosts
	I0501 03:55:20.473714   13472 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.209.152	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
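
Note: the bash one-liner above keeps the control-plane hosts entry unique: grep -v strips any existing line for control-plane.minikube.internal, the current IP is appended, and the result replaces /etc/hosts. A simplified Go sketch of the same idempotent rewrite (the log goes through /tmp/h.$$ plus `sudo cp` because the remote edit needs root; this sketch writes directly):

// Sketch: drop any stale control-plane.minikube.internal line, append the
// current mapping, and write /etc/hosts back.
package main

import (
	"os"
	"strings"
)

func main() {
	const host = "control-plane.minikube.internal"
	const ip = "172.28.209.152" // control-plane IP from the log
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) { // same filter as grep -v $'\t...$'
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}
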
	I0501 03:55:20.511987   13472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:55:20.756595   13472 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 03:55:20.794721   13472 host.go:66] Checking if "multinode-289800" exists ...
	I0501 03:55:20.795193   13472 start.go:316] joinCluster: &{Name:multinode-289800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
0 ClusterName:multinode-289800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.209.152 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.219.162 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 Cert
Expiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 03:55:20.795193   13472 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0501 03:55:20.795729   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 03:55:22.957136   13472 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 03:55:22.957283   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:55:22.957428   13472 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 03:55:25.490559   13472 main.go:141] libmachine: [stdout =====>] : 172.28.209.152
	
	I0501 03:55:25.491437   13472 main.go:141] libmachine: [stderr =====>] : 
	I0501 03:55:25.491437   13472 sshutil.go:53] new ssh client: &{IP:172.28.209.152 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-289800\id_rsa Username:docker}
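
Note: the libmachine exchanges above are how the Hyper-V driver finds the VM before it can SSH in: it invokes PowerShell for the VM state and then for the first address of the first network adapter, parsing single-line stdout. A sketch of that lookup (command text is verbatim from the log; error handling simplified):

// Sketch: shell out to PowerShell and trim the one-line result, as the
// Hyper-V driver's [executing ==>] / [stdout =====>] exchange above does.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command(
		`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
		"-NoProfile", "-NonInteractive",
		`(( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]`,
	).Output()
	if err != nil {
		panic(err)
	}
	fmt.Println(strings.TrimSpace(string(out))) // e.g. 172.28.209.152
}
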
	I0501 03:55:25.692604   13472 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token vo7wpk.fli163sotegcl8d2 --discovery-token-ca-cert-hash sha256:4798dcaffc6298c78af5ad06745de18c1231853c65c9db4cd09ab1be96f5e875 
	I0501 03:55:25.692604   13472 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0": (4.8973744s)
	I0501 03:55:25.692604   13472 start.go:342] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.28.219.162 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0501 03:55:25.693646   13472 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vo7wpk.fli163sotegcl8d2 --discovery-token-ca-cert-hash sha256:4798dcaffc6298c78af5ad06745de18c1231853c65c9db4cd09ab1be96f5e875 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-289800-m02"
	I0501 03:55:25.919650   13472 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0501 03:55:27.288199   13472 command_runner.go:130] > [preflight] Running pre-flight checks
	I0501 03:55:27.288199   13472 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0501 03:55:27.288199   13472 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0501 03:55:27.288199   13472 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0501 03:55:27.288199   13472 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0501 03:55:27.288199   13472 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0501 03:55:27.288199   13472 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0501 03:55:27.288199   13472 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.002363163s
	I0501 03:55:27.288199   13472 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0501 03:55:27.288199   13472 command_runner.go:130] > This node has joined the cluster:
	I0501 03:55:27.288199   13472 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0501 03:55:27.288199   13472 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0501 03:55:27.288199   13472 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0501 03:55:27.288199   13472 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vo7wpk.fli163sotegcl8d2 --discovery-token-ca-cert-hash sha256:4798dcaffc6298c78af5ad06745de18c1231853c65c9db4cd09ab1be96f5e875 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-289800-m02": (1.5945407s)
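
Note: the join above is a two-step handshake. The control plane mints a bootstrap token (`kubeadm token create --print-join-command`; `--ttl=0` makes it non-expiring), and the joining node authenticates the cluster by checking the served CA certificate against --discovery-token-ca-cert-hash. That hash is sha256 over the CA certificate's DER-encoded Subject Public Key Info. A Go sketch recomputing it (the ca.crt path is an assumption based on the cert directory seen earlier in the log):

// Sketch: recompute kubeadm's discovery-token-ca-cert-hash from the CA cert.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Assumed path; matches the /var/lib/minikube/certs directory in the log.
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// kubeadm's hash is sha256 over the DER-encoded Subject Public Key Info.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}
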
	I0501 03:55:27.288199   13472 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0501 03:55:27.533972   13472 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0501 03:55:27.766330   13472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-289800-m02 minikube.k8s.io/updated_at=2024_05_01T03_55_27_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e minikube.k8s.io/name=multinode-289800 minikube.k8s.io/primary=false
	I0501 03:55:27.900810   13472 command_runner.go:130] > node/multinode-289800-m02 labeled
	I0501 03:55:27.900810   13472 start.go:318] duration metric: took 7.1055633s to joinCluster
	I0501 03:55:27.900810   13472 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.28.219.162 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0501 03:55:27.901829   13472 config.go:182] Loaded profile config "multinode-289800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 03:55:27.903814   13472 out.go:177] * Verifying Kubernetes components...

	I0501 03:55:27.921413   13472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 03:55:28.164166   13472 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 03:55:28.190168   13472 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0501 03:55:28.191209   13472 kapi.go:59] client config for multinode-289800: &rest.Config{Host:"https://172.28.209.152:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-289800\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-289800\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1b95ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0501 03:55:28.192177   13472 node_ready.go:35] waiting up to 6m0s for node "multinode-289800-m02" to be "Ready" ...
	I0501 03:55:28.192177   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800-m02
	I0501 03:55:28.192177   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:28.192177   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:28.192177   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:28.211227   13472 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0501 03:55:28.211892   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:28.211892   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:28.211892   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:28.211892   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:28.211892   13472 round_trippers.go:580]     Content-Length: 3921
	I0501 03:55:28.212029   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:28 GMT
	I0501 03:55:28.212029   13472 round_trippers.go:580]     Audit-Id: 55a60245-176d-4f56-a8ac-35c0d7ab39a3
	I0501 03:55:28.212029   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:28.212097   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800-m02","uid":"9e630c9e-9cc6-42af-89de-135fca044670","resourceVersion":"573","creationTimestamp":"2024-05-01T03:55:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_01T03_55_27_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:55:27Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2897 chars]
	I0501 03:55:28.694608   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800-m02
	I0501 03:55:28.694608   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:28.694608   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:28.694608   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:28.701122   13472 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 03:55:28.701122   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:28.701122   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:28 GMT
	I0501 03:55:28.701122   13472 round_trippers.go:580]     Audit-Id: 7a5e866c-6246-4bb1-ac10-a71257fbf90e
	I0501 03:55:28.701122   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:28.701122   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:28.701122   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:28.701122   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:28.701122   13472 round_trippers.go:580]     Content-Length: 3921
	I0501 03:55:28.701122   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800-m02","uid":"9e630c9e-9cc6-42af-89de-135fca044670","resourceVersion":"573","creationTimestamp":"2024-05-01T03:55:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_01T03_55_27_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:55:27Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2897 chars]
	I0501 03:55:29.194586   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800-m02
	I0501 03:55:29.194586   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:29.194586   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:29.194586   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:29.198090   13472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 03:55:29.198090   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:29.198404   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:29.198404   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:29.198404   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:29.198404   13472 round_trippers.go:580]     Content-Length: 4030
	I0501 03:55:29.198404   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:29 GMT
	I0501 03:55:29.198404   13472 round_trippers.go:580]     Audit-Id: 9ced3012-44f6-49fa-a4ea-c60c04e573f4
	I0501 03:55:29.198404   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:29.198566   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800-m02","uid":"9e630c9e-9cc6-42af-89de-135fca044670","resourceVersion":"576","creationTimestamp":"2024-05-01T03:55:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_01T03_55_27_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:55:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0501 03:55:29.697808   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800-m02
	I0501 03:55:29.697869   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:29.697869   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:29.697869   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:29.701310   13472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 03:55:29.701310   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:29.701310   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:29.701310   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:29.701310   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:29.701310   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:29.701310   13472 round_trippers.go:580]     Content-Length: 4030
	I0501 03:55:29.701310   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:29 GMT
	I0501 03:55:29.701310   13472 round_trippers.go:580]     Audit-Id: 353c2f04-74ea-4298-8524-4a633b0f9ecd
	I0501 03:55:29.702396   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800-m02","uid":"9e630c9e-9cc6-42af-89de-135fca044670","resourceVersion":"576","creationTimestamp":"2024-05-01T03:55:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_01T03_55_27_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:55:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0501 03:55:30.195154   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800-m02
	I0501 03:55:30.195282   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:30.195282   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:30.195282   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:30.199665   13472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 03:55:30.200264   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:30.200264   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:30.200264   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:30.200264   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:30.200264   13472 round_trippers.go:580]     Content-Length: 4030
	I0501 03:55:30.200264   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:30 GMT
	I0501 03:55:30.200264   13472 round_trippers.go:580]     Audit-Id: b2e3de75-1d2f-4d50-aeb9-8024ec4653ae
	I0501 03:55:30.200264   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:30.200501   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800-m02","uid":"9e630c9e-9cc6-42af-89de-135fca044670","resourceVersion":"576","creationTimestamp":"2024-05-01T03:55:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_01T03_55_27_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:55:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0501 03:55:30.201097   13472 node_ready.go:53] node "multinode-289800-m02" has status "Ready":"False"
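
Note: each "Response Status: 200 OK" block above is one iteration of the readiness poll: GET the Node object roughly every 500ms and check whether its Ready condition has flipped to True (the `"Ready":"False"` lines show it has not yet). A minimal sketch of that loop with plain net/http rather than minikube's client-go wiring; the client.crt/client.key/ca.crt paths are placeholders standing in for the kubeconfig entries in the kapi.go line earlier:

// Sketch: mutual-TLS client polling a Node's Ready condition.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
	"time"
)

// nodeStatus captures just the fields the poll needs from the Node object.
type nodeStatus struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

// newClient authenticates with the profile's client cert/key and verifies
// the apiserver against the cluster CA, like the rest.Config above.
func newClient(certFile, keyFile, caFile string) (*http.Client, error) {
	cert, err := tls.LoadX509KeyPair(certFile, keyFile)
	if err != nil {
		return nil, err
	}
	caPEM, err := os.ReadFile(caFile)
	if err != nil {
		return nil, err
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		return nil, fmt.Errorf("no CA certs parsed from %s", caFile)
	}
	return &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{
		Certificates: []tls.Certificate{cert},
		RootCAs:      pool,
	}}}, nil
}

func nodeReady(c *http.Client, url string) (bool, error) {
	resp, err := c.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	var n nodeStatus
	if err := json.NewDecoder(resp.Body).Decode(&n); err != nil {
		return false, err
	}
	for _, cond := range n.Status.Conditions {
		if cond.Type == "Ready" {
			return cond.Status == "True", nil
		}
	}
	return false, nil
}

func main() {
	c, err := newClient("client.crt", "client.key", "ca.crt") // placeholder paths
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	url := "https://172.28.209.152:8443/api/v1/nodes/multinode-289800-m02" // endpoint from the log
	for {
		if ok, err := nodeReady(c, url); err == nil && ok {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}
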
	I0501 03:55:30.695771   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800-m02
	I0501 03:55:30.695771   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:30.695771   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:30.695771   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:30.702454   13472 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 03:55:30.702649   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:30.702649   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:30.702649   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:30.702649   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:30.702649   13472 round_trippers.go:580]     Content-Length: 4030
	I0501 03:55:30.702649   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:30 GMT
	I0501 03:55:30.702649   13472 round_trippers.go:580]     Audit-Id: 3491bd76-38d1-43bb-8341-d0284ee08ca7
	I0501 03:55:30.702649   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:30.702947   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800-m02","uid":"9e630c9e-9cc6-42af-89de-135fca044670","resourceVersion":"576","creationTimestamp":"2024-05-01T03:55:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_01T03_55_27_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:55:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0501 03:55:31.196130   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800-m02
	I0501 03:55:31.196203   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:31.196203   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:31.196203   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:31.202847   13472 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 03:55:31.202847   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:31.202847   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:31.202847   13472 round_trippers.go:580]     Content-Length: 4030
	I0501 03:55:31.202847   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:31 GMT
	I0501 03:55:31.202847   13472 round_trippers.go:580]     Audit-Id: 15c11d9e-da79-4b85-9767-33178821ff3c
	I0501 03:55:31.202847   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:31.202847   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:31.202847   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:31.203859   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800-m02","uid":"9e630c9e-9cc6-42af-89de-135fca044670","resourceVersion":"576","creationTimestamp":"2024-05-01T03:55:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_01T03_55_27_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:55:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0501 03:55:31.699651   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800-m02
	I0501 03:55:31.699745   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:31.699745   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:31.699745   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:31.704178   13472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 03:55:31.704178   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:31.704178   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:31.704178   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:31.704336   13472 round_trippers.go:580]     Content-Length: 4030
	I0501 03:55:31.704336   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:31 GMT
	I0501 03:55:31.704336   13472 round_trippers.go:580]     Audit-Id: 96952334-00a6-4421-a987-dd1393b2f620
	I0501 03:55:31.704336   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:31.704336   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:31.704336   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800-m02","uid":"9e630c9e-9cc6-42af-89de-135fca044670","resourceVersion":"576","creationTimestamp":"2024-05-01T03:55:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_01T03_55_27_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:55:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0501 03:55:32.195233   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800-m02
	I0501 03:55:32.195379   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:32.195379   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:32.195379   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:32.201285   13472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 03:55:32.201285   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:32.201285   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:32 GMT
	I0501 03:55:32.201285   13472 round_trippers.go:580]     Audit-Id: ff54fa83-1362-46bb-bec0-8eb57197a37a
	I0501 03:55:32.201285   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:32.201285   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:32.202246   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:32.202246   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:32.202246   13472 round_trippers.go:580]     Content-Length: 4030
	I0501 03:55:32.202376   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800-m02","uid":"9e630c9e-9cc6-42af-89de-135fca044670","resourceVersion":"576","creationTimestamp":"2024-05-01T03:55:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_01T03_55_27_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:55:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0501 03:55:32.202735   13472 node_ready.go:53] node "multinode-289800-m02" has status "Ready":"False"
	I0501 03:55:32.697075   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800-m02
	I0501 03:55:32.697075   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:32.697075   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:32.697213   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:32.700541   13472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 03:55:32.701538   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:32.701590   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:32.701590   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:32.701590   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:32.701590   13472 round_trippers.go:580]     Content-Length: 4030
	I0501 03:55:32.701590   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:32 GMT
	I0501 03:55:32.701590   13472 round_trippers.go:580]     Audit-Id: cef866a1-01e3-4505-9a9c-4d4b7ad87ba0
	I0501 03:55:32.701590   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:32.701666   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800-m02","uid":"9e630c9e-9cc6-42af-89de-135fca044670","resourceVersion":"576","creationTimestamp":"2024-05-01T03:55:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_01T03_55_27_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:55:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0501 03:55:33.201381   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800-m02
	I0501 03:55:33.201472   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:33.201472   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:33.201472   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:33.204896   13472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 03:55:33.204896   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:33.204896   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:33.204896   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:33.204896   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:33.204896   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:33.204896   13472 round_trippers.go:580]     Content-Length: 4030
	I0501 03:55:33.204896   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:33 GMT
	I0501 03:55:33.204896   13472 round_trippers.go:580]     Audit-Id: 29e93d04-e6c2-4f13-be85-bc616fe73fd9
	I0501 03:55:33.205668   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800-m02","uid":"9e630c9e-9cc6-42af-89de-135fca044670","resourceVersion":"576","creationTimestamp":"2024-05-01T03:55:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_01T03_55_27_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:55:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0501 03:55:33.701437   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800-m02
	I0501 03:55:33.701437   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:33.701437   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:33.701437   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:33.705377   13472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 03:55:33.705377   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:33.705377   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:33.705377   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:33.705377   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:33.705377   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:33.705377   13472 round_trippers.go:580]     Content-Length: 4030
	I0501 03:55:33.705459   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:33 GMT
	I0501 03:55:33.705459   13472 round_trippers.go:580]     Audit-Id: b2cf3754-4299-48d8-93d8-84fe9614f786
	I0501 03:55:33.705565   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800-m02","uid":"9e630c9e-9cc6-42af-89de-135fca044670","resourceVersion":"576","creationTimestamp":"2024-05-01T03:55:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_01T03_55_27_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:55:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0501 03:55:34.206848   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800-m02
	I0501 03:55:34.206906   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:34.206922   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:34.206922   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:34.210535   13472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 03:55:34.210535   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:34.211442   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:34 GMT
	I0501 03:55:34.211442   13472 round_trippers.go:580]     Audit-Id: 906448fb-1b29-486a-8fb2-52c3caef61c1
	I0501 03:55:34.211442   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:34.211442   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:34.211442   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:34.211442   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:34.211442   13472 round_trippers.go:580]     Content-Length: 4030
	I0501 03:55:34.211616   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800-m02","uid":"9e630c9e-9cc6-42af-89de-135fca044670","resourceVersion":"576","creationTimestamp":"2024-05-01T03:55:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_01T03_55_27_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:55:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0501 03:55:34.212155   13472 node_ready.go:53] node "multinode-289800-m02" has status "Ready":"False"
	I0501 03:55:34.697471   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800-m02
	I0501 03:55:34.697471   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:34.697471   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:34.697471   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:34.701275   13472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 03:55:34.701942   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:34.701942   13472 round_trippers.go:580]     Audit-Id: db30ab0e-42ff-4300-bb41-8a3fee366755
	I0501 03:55:34.701942   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:34.701942   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:34.701942   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:34.701942   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:34.702023   13472 round_trippers.go:580]     Content-Length: 4030
	I0501 03:55:34.702023   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:34 GMT
	I0501 03:55:34.702219   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800-m02","uid":"9e630c9e-9cc6-42af-89de-135fca044670","resourceVersion":"576","creationTimestamp":"2024-05-01T03:55:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_01T03_55_27_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:55:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0501 03:55:35.206328   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800-m02
	I0501 03:55:35.206328   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:35.206328   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:35.206328   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:35.213911   13472 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 03:55:35.214221   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:35.214221   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:35.214221   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:35.214298   13472 round_trippers.go:580]     Content-Length: 4030
	I0501 03:55:35.214298   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:35 GMT
	I0501 03:55:35.214298   13472 round_trippers.go:580]     Audit-Id: 2ec2c2c7-ef28-4541-a42d-0a6aad370283
	I0501 03:55:35.214298   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:35.214298   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:35.214385   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800-m02","uid":"9e630c9e-9cc6-42af-89de-135fca044670","resourceVersion":"576","creationTimestamp":"2024-05-01T03:55:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_01T03_55_27_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:55:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0501 03:55:35.696460   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800-m02
	I0501 03:55:35.696460   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:35.696460   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:35.696460   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:35.700454   13472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 03:55:35.700454   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:35.700741   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:35.700741   13472 round_trippers.go:580]     Content-Length: 4030
	I0501 03:55:35.700741   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:35 GMT
	I0501 03:55:35.700741   13472 round_trippers.go:580]     Audit-Id: b7d059fd-796b-4d4a-99fb-9ac0e9f5f9b9
	I0501 03:55:35.700741   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:35.700741   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:35.700741   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:35.700938   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800-m02","uid":"9e630c9e-9cc6-42af-89de-135fca044670","resourceVersion":"576","creationTimestamp":"2024-05-01T03:55:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_01T03_55_27_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:55:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0501 03:55:36.193858   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800-m02
	I0501 03:55:36.193858   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:36.193858   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:36.193858   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:36.197748   13472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 03:55:36.198240   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:36.198240   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:36.198240   13472 round_trippers.go:580]     Content-Length: 4030
	I0501 03:55:36.198240   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:36 GMT
	I0501 03:55:36.198240   13472 round_trippers.go:580]     Audit-Id: cf49d0d0-1ff3-46a6-9497-913e69e7aa37
	I0501 03:55:36.198240   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:36.198401   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:36.198401   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:36.198504   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800-m02","uid":"9e630c9e-9cc6-42af-89de-135fca044670","resourceVersion":"576","creationTimestamp":"2024-05-01T03:55:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_01T03_55_27_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:55:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0501 03:55:36.701633   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800-m02
	I0501 03:55:36.701633   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:36.701633   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:36.701633   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:36.706644   13472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 03:55:36.706996   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:36.706996   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:36.706996   13472 round_trippers.go:580]     Content-Length: 4030
	I0501 03:55:36.706996   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:36 GMT
	I0501 03:55:36.706996   13472 round_trippers.go:580]     Audit-Id: 68af8815-122e-435f-bb09-4715854dc4bf
	I0501 03:55:36.706996   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:36.706996   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:36.706996   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:36.707279   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800-m02","uid":"9e630c9e-9cc6-42af-89de-135fca044670","resourceVersion":"576","creationTimestamp":"2024-05-01T03:55:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_01T03_55_27_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:55:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0501 03:55:36.707713   13472 node_ready.go:53] node "multinode-289800-m02" has status "Ready":"False"
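	(Editor's note on the trace above and below: the repeated blocks are one pattern — minikube's node readiness check (node_ready.go) issues GET /api/v1/nodes/multinode-289800-m02 roughly every 500 ms and inspects the returned Node object until its Ready condition turns True or the surrounding wait times out. A minimal sketch of such a polling loop written against client-go follows; it is illustrative only, not minikube's actual node_ready.go, and the names waitForNodeReady and nodeReady are hypothetical.)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the Node's NodeReady condition is True.
	func nodeReady(node *corev1.Node) bool {
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	// waitForNodeReady polls the API server for the named node every 500 ms
	// (the same cadence visible in the timestamps in this log) until the node
	// reports Ready or ctx expires. Each iteration is one GET like those above.
	func waitForNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-ticker.C:
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return err
				}
				if nodeReady(node) {
					return nil
				}
			}
		}
	}

	func main() {
		// Load kubeconfig the same way kubectl would (~/.kube/config).
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		if err := waitForNodeReady(ctx, cs, "multinode-289800-m02"); err != nil {
			fmt.Println("node not ready:", err)
		}
	}

	(From a shell, kubectl wait --for=condition=Ready node/multinode-289800-m02 --timeout=6m performs an equivalent wait. Fixed-interval polling as sketched keeps the trace verbose, since each iteration logs a full request/response cycle; a watch would cut the request volume at the cost of reconnect handling.)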
	I0501 03:55:37.193997   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800-m02
	I0501 03:55:37.194060   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:37.194060   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:37.194060   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:37.198388   13472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 03:55:37.198459   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:37.198459   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:37.198459   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:37.198459   13472 round_trippers.go:580]     Content-Length: 4030
	I0501 03:55:37.198459   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:37 GMT
	I0501 03:55:37.198459   13472 round_trippers.go:580]     Audit-Id: a16b8a91-f672-4f37-a025-92359b906c62
	I0501 03:55:37.198459   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:37.198519   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:37.198665   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800-m02","uid":"9e630c9e-9cc6-42af-89de-135fca044670","resourceVersion":"576","creationTimestamp":"2024-05-01T03:55:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_01T03_55_27_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:55:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0501 03:55:37.701275   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800-m02
	I0501 03:55:37.701275   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:37.701479   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:37.701479   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:37.714018   13472 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0501 03:55:37.714018   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:37.714018   13472 round_trippers.go:580]     Audit-Id: 7b040bc7-c5cd-4de2-9673-c37bc70e8b4c
	I0501 03:55:37.714018   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:37.714018   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:37.714018   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:37.714018   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:37.714018   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:37 GMT
	I0501 03:55:37.714018   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800-m02","uid":"9e630c9e-9cc6-42af-89de-135fca044670","resourceVersion":"590","creationTimestamp":"2024-05-01T03:55:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_01T03_55_27_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:55:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0501 03:55:38.205122   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800-m02
	I0501 03:55:38.205122   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:38.205122   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:38.205122   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:38.300360   13472 round_trippers.go:574] Response Status: 200 OK in 95 milliseconds
	I0501 03:55:38.301188   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:38.301188   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:38.301188   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:38.301188   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:38.301188   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:38.301188   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:38 GMT
	I0501 03:55:38.301188   13472 round_trippers.go:580]     Audit-Id: 94f535d0-f651-41b4-be21-551a028ecf11
	I0501 03:55:38.301466   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800-m02","uid":"9e630c9e-9cc6-42af-89de-135fca044670","resourceVersion":"590","creationTimestamp":"2024-05-01T03:55:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_01T03_55_27_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:55:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0501 03:55:38.702438   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800-m02
	I0501 03:55:38.702438   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:38.702438   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:38.702438   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:38.706064   13472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 03:55:38.706064   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:38.706064   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:38 GMT
	I0501 03:55:38.706064   13472 round_trippers.go:580]     Audit-Id: f24df947-4de8-4eda-9d08-a0873697f3e7
	I0501 03:55:38.706064   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:38.706064   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:38.706064   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:38.706064   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:38.706624   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800-m02","uid":"9e630c9e-9cc6-42af-89de-135fca044670","resourceVersion":"590","creationTimestamp":"2024-05-01T03:55:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_01T03_55_27_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:55:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0501 03:55:39.205710   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800-m02
	I0501 03:55:39.205710   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:39.205710   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:39.205932   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:39.209318   13472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 03:55:39.209318   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:39.209318   13472 round_trippers.go:580]     Audit-Id: 8b3a4173-e040-40aa-b54a-c81b31c60cf8
	I0501 03:55:39.209318   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:39.209318   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:39.209318   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:39.209318   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:39.209318   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:39 GMT
	I0501 03:55:39.210052   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800-m02","uid":"9e630c9e-9cc6-42af-89de-135fca044670","resourceVersion":"590","creationTimestamp":"2024-05-01T03:55:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_01T03_55_27_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:55:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0501 03:55:39.210488   13472 node_ready.go:53] node "multinode-289800-m02" has status "Ready":"False"
	I0501 03:55:39.725665   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800-m02
	I0501 03:55:39.725665   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:39.725665   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:39.726495   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:39.729682   13472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 03:55:39.729682   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:39.729682   13472 round_trippers.go:580]     Audit-Id: 7ac09f15-c94c-4e48-b37c-b92da1bd4074
	I0501 03:55:39.729682   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:39.729682   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:39.729682   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:39.729682   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:39.729682   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:39 GMT
	I0501 03:55:39.729682   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800-m02","uid":"9e630c9e-9cc6-42af-89de-135fca044670","resourceVersion":"590","creationTimestamp":"2024-05-01T03:55:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_01T03_55_27_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:55:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0501 03:55:40.193101   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800-m02
	I0501 03:55:40.193101   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:40.193101   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:40.193244   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:40.196614   13472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 03:55:40.196614   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:40.197200   13472 round_trippers.go:580]     Audit-Id: ce5de4f8-b850-41e9-baea-1ad53e3f99da
	I0501 03:55:40.197200   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:40.197200   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:40.197200   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:40.197200   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:40.197200   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:40 GMT
	I0501 03:55:40.197595   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800-m02","uid":"9e630c9e-9cc6-42af-89de-135fca044670","resourceVersion":"590","creationTimestamp":"2024-05-01T03:55:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_01T03_55_27_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:55:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0501 03:55:40.694666   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800-m02
	I0501 03:55:40.694666   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:40.694666   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:40.694666   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:40.698670   13472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 03:55:40.698965   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:40.698965   13472 round_trippers.go:580]     Audit-Id: 698c0744-0766-4b48-b1e5-d7d7de0193fd
	I0501 03:55:40.698965   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:40.698965   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:40.698965   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:40.698965   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:40.698965   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:40 GMT
	I0501 03:55:40.699268   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800-m02","uid":"9e630c9e-9cc6-42af-89de-135fca044670","resourceVersion":"590","creationTimestamp":"2024-05-01T03:55:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_01T03_55_27_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:55:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0501 03:55:41.205041   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800-m02
	I0501 03:55:41.205349   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:41.205349   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:41.205349   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:41.208808   13472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 03:55:41.208808   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:41.208808   13472 round_trippers.go:580]     Audit-Id: 75679f3b-a593-45d3-9de7-0e11bb274430
	I0501 03:55:41.208808   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:41.209062   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:41.209062   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:41.209062   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:41.209062   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:41 GMT
	I0501 03:55:41.209206   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800-m02","uid":"9e630c9e-9cc6-42af-89de-135fca044670","resourceVersion":"590","creationTimestamp":"2024-05-01T03:55:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_01T03_55_27_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:55:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0501 03:55:41.695640   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800-m02
	I0501 03:55:41.695706   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:41.695771   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:41.695771   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:41.699655   13472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 03:55:41.700475   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:41.700475   13472 round_trippers.go:580]     Audit-Id: d1014ef3-5257-4b05-b7d7-3a3f2eefe6a7
	I0501 03:55:41.700475   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:41.700475   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:41.700475   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:41.700475   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:41.700475   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:41 GMT
	I0501 03:55:41.700692   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800-m02","uid":"9e630c9e-9cc6-42af-89de-135fca044670","resourceVersion":"590","creationTimestamp":"2024-05-01T03:55:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_01T03_55_27_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:55:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0501 03:55:41.700802   13472 node_ready.go:53] node "multinode-289800-m02" has status "Ready":"False"
	I0501 03:55:42.193519   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800-m02
	I0501 03:55:42.193643   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:42.193643   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:42.193643   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:42.198287   13472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 03:55:42.198382   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:42.198382   13472 round_trippers.go:580]     Audit-Id: 7674e25b-ef34-4bcb-b78d-f391248cbddf
	I0501 03:55:42.198382   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:42.198382   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:42.198382   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:42.198460   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:42.198460   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:42 GMT
	I0501 03:55:42.198770   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800-m02","uid":"9e630c9e-9cc6-42af-89de-135fca044670","resourceVersion":"590","creationTimestamp":"2024-05-01T03:55:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_01T03_55_27_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:55:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0501 03:55:42.701751   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800-m02
	I0501 03:55:42.701751   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:42.701751   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:42.701751   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:42.709765   13472 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0501 03:55:42.710573   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:42.710573   13472 round_trippers.go:580]     Audit-Id: a9a6d629-5959-4d2f-87b0-d1d8b5558f54
	I0501 03:55:42.710573   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:42.710573   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:42.710573   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:42.710573   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:42.710573   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:42 GMT
	I0501 03:55:42.710653   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800-m02","uid":"9e630c9e-9cc6-42af-89de-135fca044670","resourceVersion":"590","creationTimestamp":"2024-05-01T03:55:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_01T03_55_27_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:55:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0501 03:55:43.195168   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800-m02
	I0501 03:55:43.195168   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:43.195168   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:43.195477   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:43.198927   13472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 03:55:43.199996   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:43.199996   13472 round_trippers.go:580]     Audit-Id: 992f0b68-656a-48f8-a786-7121a5b04d6c
	I0501 03:55:43.199996   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:43.199996   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:43.199996   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:43.200034   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:43.200034   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:43 GMT
	I0501 03:55:43.200219   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800-m02","uid":"9e630c9e-9cc6-42af-89de-135fca044670","resourceVersion":"590","creationTimestamp":"2024-05-01T03:55:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_01T03_55_27_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:55:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0501 03:55:43.704105   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800-m02
	I0501 03:55:43.704105   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:43.704105   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:43.704105   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:43.708616   13472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 03:55:43.708616   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:43.708616   13472 round_trippers.go:580]     Audit-Id: 145fb75c-e77a-452a-b8d1-2a5985cd1c59
	I0501 03:55:43.708616   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:43.708616   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:43.708616   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:43.708950   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:43.708950   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:43 GMT
	I0501 03:55:43.709254   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800-m02","uid":"9e630c9e-9cc6-42af-89de-135fca044670","resourceVersion":"590","creationTimestamp":"2024-05-01T03:55:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_01T03_55_27_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:55:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0501 03:55:43.709648   13472 node_ready.go:53] node "multinode-289800-m02" has status "Ready":"False"
	I0501 03:55:44.203439   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800-m02
	I0501 03:55:44.203439   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:44.203439   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:44.203439   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:44.210076   13472 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 03:55:44.210076   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:44.210076   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:44.210076   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:44.210076   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:44.210076   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:44 GMT
	I0501 03:55:44.210076   13472 round_trippers.go:580]     Audit-Id: b007106b-1892-43dd-a0de-e4a02a39fb4e
	I0501 03:55:44.210076   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:44.210076   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800-m02","uid":"9e630c9e-9cc6-42af-89de-135fca044670","resourceVersion":"590","creationTimestamp":"2024-05-01T03:55:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_01T03_55_27_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:55:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0501 03:55:44.703373   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800-m02
	I0501 03:55:44.703453   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:44.703453   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:44.703453   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:44.710752   13472 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 03:55:44.710752   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:44.710752   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:44.710752   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:44.710752   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:44 GMT
	I0501 03:55:44.710752   13472 round_trippers.go:580]     Audit-Id: 23d546b4-38b2-4187-8791-09b8cb3acf21
	I0501 03:55:44.710752   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:44.710752   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:44.711580   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800-m02","uid":"9e630c9e-9cc6-42af-89de-135fca044670","resourceVersion":"590","creationTimestamp":"2024-05-01T03:55:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_01T03_55_27_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:55:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0501 03:55:45.206077   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800-m02
	I0501 03:55:45.206077   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:45.206077   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:45.206077   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:45.209295   13472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 03:55:45.209295   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:45.209295   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:45.209295   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:45.209295   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:45.209295   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:45 GMT
	I0501 03:55:45.209295   13472 round_trippers.go:580]     Audit-Id: e90c2633-7862-461e-887a-43f31c5d3c0f
	I0501 03:55:45.209295   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:45.209295   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800-m02","uid":"9e630c9e-9cc6-42af-89de-135fca044670","resourceVersion":"590","creationTimestamp":"2024-05-01T03:55:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_01T03_55_27_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:55:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0501 03:55:45.706745   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800-m02
	I0501 03:55:45.706745   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:45.706813   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:45.706813   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:45.710151   13472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 03:55:45.710151   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:45.710151   13472 round_trippers.go:580]     Audit-Id: deee342f-0b89-4d7a-aa42-1d5c2d9560b7
	I0501 03:55:45.710151   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:45.710151   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:45.710151   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:45.710640   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:45.710640   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:45 GMT
	I0501 03:55:45.710755   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800-m02","uid":"9e630c9e-9cc6-42af-89de-135fca044670","resourceVersion":"590","creationTimestamp":"2024-05-01T03:55:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_01T03_55_27_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:55:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0501 03:55:45.711463   13472 node_ready.go:53] node "multinode-289800-m02" has status "Ready":"False"
	I0501 03:55:46.205660   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800-m02
	I0501 03:55:46.205913   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:46.205913   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:46.205913   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:46.210746   13472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 03:55:46.210746   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:46.210746   13472 round_trippers.go:580]     Audit-Id: 53f2150f-2f69-4521-ab86-e52590c2e0e6
	I0501 03:55:46.210746   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:46.210746   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:46.210746   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:46.210746   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:46.210746   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:46 GMT
	I0501 03:55:46.212462   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800-m02","uid":"9e630c9e-9cc6-42af-89de-135fca044670","resourceVersion":"590","creationTimestamp":"2024-05-01T03:55:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_01T03_55_27_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:55:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0501 03:55:46.701938   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800-m02
	I0501 03:55:46.701938   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:46.701938   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:46.701938   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:46.705926   13472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 03:55:46.706652   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:46.706652   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:46.706652   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:46 GMT
	I0501 03:55:46.706652   13472 round_trippers.go:580]     Audit-Id: f9354099-b74f-492c-b62c-a8ae4aed4a97
	I0501 03:55:46.706742   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:46.706742   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:46.706742   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:46.706742   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800-m02","uid":"9e630c9e-9cc6-42af-89de-135fca044670","resourceVersion":"590","creationTimestamp":"2024-05-01T03:55:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_01T03_55_27_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:55:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0501 03:55:47.203175   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800-m02
	I0501 03:55:47.203175   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:47.203175   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:47.203175   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:47.207793   13472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 03:55:47.208764   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:47.208764   13472 round_trippers.go:580]     Audit-Id: 89440465-60b0-4dbc-a281-e43a1d9bfe3f
	I0501 03:55:47.208764   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:47.208764   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:47.208764   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:47.208764   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:47.208764   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:47 GMT
	I0501 03:55:47.209062   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800-m02","uid":"9e630c9e-9cc6-42af-89de-135fca044670","resourceVersion":"590","creationTimestamp":"2024-05-01T03:55:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_01T03_55_27_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:55:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0501 03:55:47.702741   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800-m02
	I0501 03:55:47.702741   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:47.702741   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:47.702741   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:47.708923   13472 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 03:55:47.708923   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:47.708923   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:47.708923   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:47 GMT
	I0501 03:55:47.708923   13472 round_trippers.go:580]     Audit-Id: 54ebfda8-8a89-436d-9c77-1fd1dc504bc6
	I0501 03:55:47.708923   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:47.708923   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:47.708923   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:47.709549   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800-m02","uid":"9e630c9e-9cc6-42af-89de-135fca044670","resourceVersion":"590","creationTimestamp":"2024-05-01T03:55:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_01T03_55_27_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:55:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0501 03:55:48.204639   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800-m02
	I0501 03:55:48.204639   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:48.204869   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:48.204869   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:48.208379   13472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 03:55:48.208379   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:48.208379   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:48.208379   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:48 GMT
	I0501 03:55:48.208379   13472 round_trippers.go:580]     Audit-Id: 54342679-93e1-4655-a90a-07353c2e5905
	I0501 03:55:48.208379   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:48.208379   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:48.208379   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:48.209378   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800-m02","uid":"9e630c9e-9cc6-42af-89de-135fca044670","resourceVersion":"590","creationTimestamp":"2024-05-01T03:55:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_01T03_55_27_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:55:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0501 03:55:48.209848   13472 node_ready.go:53] node "multinode-289800-m02" has status "Ready":"False"
	I0501 03:55:48.703016   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800-m02
	I0501 03:55:48.703016   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:48.703127   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:48.703127   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:48.709516   13472 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 03:55:48.709516   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:48.710218   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:48.710218   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:48 GMT
	I0501 03:55:48.710218   13472 round_trippers.go:580]     Audit-Id: 7fc3dfa4-4524-48fe-b64c-7f8be29fdabf
	I0501 03:55:48.710218   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:48.710218   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:48.710273   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:48.710273   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800-m02","uid":"9e630c9e-9cc6-42af-89de-135fca044670","resourceVersion":"590","creationTimestamp":"2024-05-01T03:55:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_01T03_55_27_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:55:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0501 03:55:49.203669   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800-m02
	I0501 03:55:49.203741   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:49.203741   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:49.203741   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:49.207060   13472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 03:55:49.207651   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:49.207651   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:49.207651   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:49.207651   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:49 GMT
	I0501 03:55:49.207651   13472 round_trippers.go:580]     Audit-Id: 84d21c12-e7b6-47c6-b9ef-f61248bcc6ce
	I0501 03:55:49.207770   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:49.207770   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:49.208164   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800-m02","uid":"9e630c9e-9cc6-42af-89de-135fca044670","resourceVersion":"590","creationTimestamp":"2024-05-01T03:55:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_01T03_55_27_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:55:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0501 03:55:49.705744   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800-m02
	I0501 03:55:49.705822   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:49.705822   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:49.705822   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:49.710288   13472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 03:55:49.710288   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:49.710288   13472 round_trippers.go:580]     Audit-Id: df707dca-f053-4ba2-bbf7-05d5f21740b0
	I0501 03:55:49.710288   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:49.710359   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:49.710359   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:49.710359   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:49.710359   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:49 GMT
	I0501 03:55:49.711606   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800-m02","uid":"9e630c9e-9cc6-42af-89de-135fca044670","resourceVersion":"590","creationTimestamp":"2024-05-01T03:55:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_01T03_55_27_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:55:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0501 03:55:50.205880   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800-m02
	I0501 03:55:50.205962   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:50.205962   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:50.205962   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:50.210010   13472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 03:55:50.210255   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:50.210255   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:50.210255   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:50.210255   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:50.210255   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:50.210255   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:50 GMT
	I0501 03:55:50.210255   13472 round_trippers.go:580]     Audit-Id: 92119857-8d14-4e88-93da-55607c77b485
	I0501 03:55:50.210697   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800-m02","uid":"9e630c9e-9cc6-42af-89de-135fca044670","resourceVersion":"590","creationTimestamp":"2024-05-01T03:55:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_01T03_55_27_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:55:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0501 03:55:50.211349   13472 node_ready.go:53] node "multinode-289800-m02" has status "Ready":"False"
	I0501 03:55:50.705623   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800-m02
	I0501 03:55:50.705623   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:50.705623   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:50.705623   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:50.709353   13472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 03:55:50.709353   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:50.709353   13472 round_trippers.go:580]     Audit-Id: a7f6e853-2591-4346-a6f6-17d6d3077034
	I0501 03:55:50.709353   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:50.709353   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:50.709353   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:50.709353   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:50.709353   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:50 GMT
	I0501 03:55:50.710496   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800-m02","uid":"9e630c9e-9cc6-42af-89de-135fca044670","resourceVersion":"615","creationTimestamp":"2024-05-01T03:55:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_01T03_55_27_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:55:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3264 chars]
	I0501 03:55:50.710898   13472 node_ready.go:49] node "multinode-289800-m02" has status "Ready":"True"
	I0501 03:55:50.711013   13472 node_ready.go:38] duration metric: took 22.5186674s for node "multinode-289800-m02" to be "Ready" ...
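The entries above are a standard node-readiness poll: GET the Node object roughly every 500ms and inspect its Ready condition until it reports True (node_ready.go:49/:53 log each result). A minimal client-go sketch of that check under those assumptions — waitNodeReady and the 500ms interval are illustrative, not minikube's actual helper:

package main

import (
    "context"
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the named node until its NodeReady condition is True,
// mirroring the GET loop logged above.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
    for {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                return nil // node has status "Ready":"True"
            }
        }
        time.Sleep(500 * time.Millisecond) // illustrative interval
    }
}

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)
    if err := waitNodeReady(context.Background(), cs, "multinode-289800-m02"); err != nil {
        panic(err)
    }
    fmt.Println("node is Ready")
}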
	I0501 03:55:50.711013   13472 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:55:50.711123   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/namespaces/kube-system/pods
	I0501 03:55:50.711217   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:50.711217   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:50.711217   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:50.716519   13472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 03:55:50.716519   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:50.716519   13472 round_trippers.go:580]     Audit-Id: 018e19f6-cf38-404a-94d2-9ef38943dfeb
	I0501 03:55:50.716519   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:50.716519   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:50.716519   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:50.716519   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:50.716519   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:50 GMT
	I0501 03:55:50.719730   13472 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"615"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"407","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 78059 chars]
	I0501 03:55:50.724215   13472 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-8w9hq" in "kube-system" namespace to be "Ready" ...
	I0501 03:55:50.724461   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 03:55:50.724461   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:50.724522   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:50.724542   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:50.727734   13472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 03:55:50.728157   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:50.728157   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:50 GMT
	I0501 03:55:50.728157   13472 round_trippers.go:580]     Audit-Id: d3592365-ad0c-47e1-baf6-8efd90d84ee7
	I0501 03:55:50.728157   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:50.728157   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:50.728157   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:50.728157   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:50.728538   13472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"407","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6580 chars]
	I0501 03:55:50.729488   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800
	I0501 03:55:50.729566   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:50.729566   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:50.729566   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:50.731445   13472 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0501 03:55:50.731445   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:50.731445   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:50.731445   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:50.731445   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:50 GMT
	I0501 03:55:50.731445   13472 round_trippers.go:580]     Audit-Id: 90fc4446-ae47-4845-af40-9519b0152d07
	I0501 03:55:50.731445   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:50.731445   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:50.732499   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"418","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0501 03:55:50.733062   13472 pod_ready.go:92] pod "coredns-7db6d8ff4d-8w9hq" in "kube-system" namespace has status "Ready":"True"
	I0501 03:55:50.733062   13472 pod_ready.go:81] duration metric: took 8.7839ms for pod "coredns-7db6d8ff4d-8w9hq" in "kube-system" namespace to be "Ready" ...
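Each pod_ready entry that follows has the same shape: one LIST of kube-system pods seeds the set of system-critical pods, then each pod is fetched individually and its PodReady condition is read (pod_ready.go:92 logs the result) before the node hosting it is re-checked. A minimal sketch of the per-pod step under those assumptions; podReady is an illustrative name:

package podready

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// podReady reports whether the named kube-system pod has condition
// Ready=True, the check behind the pod_ready.go entries above.
func podReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
    pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
    if err != nil {
        return false, err
    }
    for _, c := range pod.Status.Conditions {
        if c.Type == corev1.PodReady {
            return c.Status == corev1.ConditionTrue, nil
        }
    }
    return false, nil
}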
	I0501 03:55:50.733062   13472 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-x9zrw" in "kube-system" namespace to be "Ready" ...
	I0501 03:55:50.733180   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-x9zrw
	I0501 03:55:50.733180   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:50.733180   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:50.733180   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:50.736476   13472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 03:55:50.736476   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:50.736476   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:50.736476   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:50.736476   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:50.736476   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:50 GMT
	I0501 03:55:50.736476   13472 round_trippers.go:580]     Audit-Id: 0028a09f-ac3f-43a5-8ca9-1bddc0981f2b
	I0501 03:55:50.736476   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:50.736671   13472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-x9zrw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0b91b14d-bed3-4889-b193-db53daccd395","resourceVersion":"403","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6580 chars]
	I0501 03:55:50.737246   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800
	I0501 03:55:50.737325   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:50.737325   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:50.737325   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:50.739694   13472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 03:55:50.740434   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:50.740434   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:50.740434   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:50.740434   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:50 GMT
	I0501 03:55:50.740434   13472 round_trippers.go:580]     Audit-Id: 86b937c9-d8ca-41a8-b0a9-d2bbfab374de
	I0501 03:55:50.740434   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:50.740434   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:50.740736   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"418","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0501 03:55:50.741193   13472 pod_ready.go:92] pod "coredns-7db6d8ff4d-x9zrw" in "kube-system" namespace has status "Ready":"True"
	I0501 03:55:50.741193   13472 pod_ready.go:81] duration metric: took 8.1303ms for pod "coredns-7db6d8ff4d-x9zrw" in "kube-system" namespace to be "Ready" ...
	I0501 03:55:50.741193   13472 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-289800" in "kube-system" namespace to be "Ready" ...
	I0501 03:55:50.741343   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-289800
	I0501 03:55:50.741343   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:50.741393   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:50.741393   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:50.743502   13472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 03:55:50.743502   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:50.743502   13472 round_trippers.go:580]     Audit-Id: 89f58f37-75e0-4959-a307-6af5ad06ecee
	I0501 03:55:50.743502   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:50.743502   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:50.743502   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:50.743502   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:50.743502   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:50 GMT
	I0501 03:55:50.744524   13472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-289800","namespace":"kube-system","uid":"96a8cf0b-45bc-4636-9264-a0da579b5fa8","resourceVersion":"278","creationTimestamp":"2024-05-01T03:52:15Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.209.152:2379","kubernetes.io/config.hash":"c17e9f88f256f5527a6565eb2da75f63","kubernetes.io/config.mirror":"c17e9f88f256f5527a6565eb2da75f63","kubernetes.io/config.seen":"2024-05-01T03:52:15.688756845Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6170 chars]
	I0501 03:55:50.745079   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800
	I0501 03:55:50.745079   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:50.745143   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:50.745143   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:50.747660   13472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 03:55:50.747660   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:50.747660   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:50.747660   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:50.747660   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:50.747660   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:50 GMT
	I0501 03:55:50.747660   13472 round_trippers.go:580]     Audit-Id: cbbc1094-f496-4cd0-8408-ee1dd08112fb
	I0501 03:55:50.747660   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:50.747660   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"418","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0501 03:55:50.747660   13472 pod_ready.go:92] pod "etcd-multinode-289800" in "kube-system" namespace has status "Ready":"True"
	I0501 03:55:50.747660   13472 pod_ready.go:81] duration metric: took 6.4671ms for pod "etcd-multinode-289800" in "kube-system" namespace to be "Ready" ...
	I0501 03:55:50.747660   13472 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-289800" in "kube-system" namespace to be "Ready" ...
	I0501 03:55:50.748693   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-289800
	I0501 03:55:50.748693   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:50.748693   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:50.748693   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:50.751939   13472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 03:55:50.752366   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:50.752366   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:50 GMT
	I0501 03:55:50.752366   13472 round_trippers.go:580]     Audit-Id: 2b68f7db-dd5b-4f13-a24e-025d4254b014
	I0501 03:55:50.752366   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:50.752366   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:50.752366   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:50.752366   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:50.752598   13472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-289800","namespace":"kube-system","uid":"a1b99f2b-8aed-4037-956a-13bde4551a72","resourceVersion":"311","creationTimestamp":"2024-05-01T03:52:15Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.28.209.152:8443","kubernetes.io/config.hash":"fc7b6f2a7c826774b66af910f598e965","kubernetes.io/config.mirror":"fc7b6f2a7c826774b66af910f598e965","kubernetes.io/config.seen":"2024-05-01T03:52:15.688762545Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7704 chars]
	I0501 03:55:50.753729   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800
	I0501 03:55:50.753729   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:50.753729   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:50.753807   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:50.757132   13472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 03:55:50.757132   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:50.757132   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:50.757132   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:50.757592   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:50.757592   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:50 GMT
	I0501 03:55:50.757592   13472 round_trippers.go:580]     Audit-Id: f534656e-284a-4b65-9202-da9799798906
	I0501 03:55:50.757592   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:50.757783   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"418","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0501 03:55:50.758068   13472 pod_ready.go:92] pod "kube-apiserver-multinode-289800" in "kube-system" namespace has status "Ready":"True"
	I0501 03:55:50.758223   13472 pod_ready.go:81] duration metric: took 10.5629ms for pod "kube-apiserver-multinode-289800" in "kube-system" namespace to be "Ready" ...
	I0501 03:55:50.758340   13472 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-289800" in "kube-system" namespace to be "Ready" ...
	I0501 03:55:50.920810   13472 request.go:629] Waited for 162.2608ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.209.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-289800
	I0501 03:55:50.921102   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-289800
	I0501 03:55:50.921195   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:50.921195   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:50.921195   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:50.925612   13472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 03:55:50.925910   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:50.925910   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:50.925910   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:50.925910   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:50 GMT
	I0501 03:55:50.925910   13472 round_trippers.go:580]     Audit-Id: 39e7f79e-d716-4472-ae03-cdfc7a11c197
	I0501 03:55:50.925910   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:50.925910   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:50.926371   13472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-289800","namespace":"kube-system","uid":"fd3e5c6f-55cb-47c8-b0bc-c9b0dbe3b318","resourceVersion":"283","creationTimestamp":"2024-05-01T03:52:15Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a17001fd2508d58fea9b1ae465b65254","kubernetes.io/config.mirror":"a17001fd2508d58fea9b1ae465b65254","kubernetes.io/config.seen":"2024-05-01T03:52:15.688763845Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7269 chars]
	I0501 03:55:51.108220   13472 request.go:629] Waited for 180.4826ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.209.152:8443/api/v1/nodes/multinode-289800
	I0501 03:55:51.108456   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800
	I0501 03:55:51.108456   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:51.108615   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:51.108686   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:51.112112   13472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 03:55:51.112480   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:51.112480   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:51.112480   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:51 GMT
	I0501 03:55:51.112480   13472 round_trippers.go:580]     Audit-Id: e6c8c928-efd4-4b91-804a-30e5cbe36748
	I0501 03:55:51.112480   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:51.112480   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:51.112480   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:51.112606   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"418","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0501 03:55:51.113209   13472 pod_ready.go:92] pod "kube-controller-manager-multinode-289800" in "kube-system" namespace has status "Ready":"True"
	I0501 03:55:51.113303   13472 pod_ready.go:81] duration metric: took 354.8661ms for pod "kube-controller-manager-multinode-289800" in "kube-system" namespace to be "Ready" ...
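The "Waited for ... due to client-side throttling, not priority and fairness" lines (request.go:629) come from client-go's local token-bucket rate limiter, not from the API server's priority-and-fairness machinery; the observed 150-200ms gaps are consistent with client-go's default limit of 5 requests per second with a burst of 10. A sketch of how a caller could raise that limit — the QPS/Burst values here are illustrative, not what minikube configures:

package throttle

import (
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
    "k8s.io/client-go/util/flowcontrol"
)

// fasterClient builds a clientset whose client-side limiter allows more
// requests per second, avoiding the throttling waits logged above.
func fasterClient(cfg *rest.Config) (*kubernetes.Clientset, error) {
    cfg.QPS = 50    // client-go default is 5 requests/second
    cfg.Burst = 100 // client-go default is 10
    // Equivalent explicit form; when RateLimiter is set it takes precedence:
    cfg.RateLimiter = flowcontrol.NewTokenBucketRateLimiter(cfg.QPS, cfg.Burst)
    return kubernetes.NewForConfig(cfg)
}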
	I0501 03:55:51.113303   13472 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bp9zx" in "kube-system" namespace to be "Ready" ...
	I0501 03:55:51.310678   13472 request.go:629] Waited for 197.008ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.209.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bp9zx
	I0501 03:55:51.310873   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bp9zx
	I0501 03:55:51.310873   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:51.310873   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:51.310873   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:51.315626   13472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 03:55:51.315626   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:51.315626   13472 round_trippers.go:580]     Audit-Id: eab5d950-c9bf-4399-8ba3-1e1ef4795c0f
	I0501 03:55:51.315626   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:51.315626   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:51.315626   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:51.315626   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:51.315626   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:51 GMT
	I0501 03:55:51.316214   13472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bp9zx","generateName":"kube-proxy-","namespace":"kube-system","uid":"aba82e50-b8f8-40b4-b08a-6d045314d6b6","resourceVersion":"356","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"342b26dc-6828-4478-b155-fee8821fc15e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"342b26dc-6828-4478-b155-fee8821fc15e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5833 chars]
	I0501 03:55:51.514043   13472 request.go:629] Waited for 197.6152ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.209.152:8443/api/v1/nodes/multinode-289800
	I0501 03:55:51.514587   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800
	I0501 03:55:51.514587   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:51.514587   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:51.514669   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:51.519007   13472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 03:55:51.519007   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:51.519007   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:51 GMT
	I0501 03:55:51.519007   13472 round_trippers.go:580]     Audit-Id: c61f8195-5c4b-4cab-8bc9-ce25ad389852
	I0501 03:55:51.519007   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:51.519007   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:51.519007   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:51.519007   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:51.519444   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"418","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0501 03:55:51.519983   13472 pod_ready.go:92] pod "kube-proxy-bp9zx" in "kube-system" namespace has status "Ready":"True"
	I0501 03:55:51.519983   13472 pod_ready.go:81] duration metric: took 406.6778ms for pod "kube-proxy-bp9zx" in "kube-system" namespace to be "Ready" ...
	I0501 03:55:51.520044   13472 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rlzp8" in "kube-system" namespace to be "Ready" ...
	I0501 03:55:51.716068   13472 request.go:629] Waited for 195.7213ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.209.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rlzp8
	I0501 03:55:51.716068   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rlzp8
	I0501 03:55:51.716068   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:51.716068   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:51.716068   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:51.719887   13472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 03:55:51.720948   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:51.720948   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:51.720948   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:51.720948   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:51.721034   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:51 GMT
	I0501 03:55:51.721034   13472 round_trippers.go:580]     Audit-Id: a22c48d9-52fc-41b4-ab11-df569dd6e3f7
	I0501 03:55:51.721034   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:51.721206   13472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rlzp8","generateName":"kube-proxy-","namespace":"kube-system","uid":"b37d8d5d-a7cb-4848-a8a2-11d9761e08d6","resourceVersion":"596","creationTimestamp":"2024-05-01T03:55:27Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"342b26dc-6828-4478-b155-fee8821fc15e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:55:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"342b26dc-6828-4478-b155-fee8821fc15e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5841 chars]
	I0501 03:55:51.918508   13472 request.go:629] Waited for 196.2124ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.209.152:8443/api/v1/nodes/multinode-289800-m02
	I0501 03:55:51.918761   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800-m02
	I0501 03:55:51.918761   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:51.918761   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:51.918761   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:51.922615   13472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 03:55:51.922615   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:51.922615   13472 round_trippers.go:580]     Audit-Id: a761e9eb-50b0-4ffb-99d1-ae60458583ff
	I0501 03:55:51.922615   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:51.922615   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:51.922615   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:51.922615   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:51.922615   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:51 GMT
	I0501 03:55:51.923550   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800-m02","uid":"9e630c9e-9cc6-42af-89de-135fca044670","resourceVersion":"615","creationTimestamp":"2024-05-01T03:55:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_01T03_55_27_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:55:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3264 chars]
	I0501 03:55:51.923963   13472 pod_ready.go:92] pod "kube-proxy-rlzp8" in "kube-system" namespace has status "Ready":"True"
	I0501 03:55:51.924033   13472 pod_ready.go:81] duration metric: took 403.9863ms for pod "kube-proxy-rlzp8" in "kube-system" namespace to be "Ready" ...
	I0501 03:55:51.924033   13472 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-289800" in "kube-system" namespace to be "Ready" ...
	I0501 03:55:52.105956   13472 request.go:629] Waited for 181.6855ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.209.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-289800
	I0501 03:55:52.106332   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-289800
	I0501 03:55:52.106332   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:52.106332   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:52.106332   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:52.110241   13472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 03:55:52.110628   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:52.110628   13472 round_trippers.go:580]     Audit-Id: be63836b-3301-4a04-8200-8a4b1f3f6fd6
	I0501 03:55:52.110628   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:52.110628   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:52.110628   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:52.110628   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:52.110628   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:52 GMT
	I0501 03:55:52.110701   13472 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-289800","namespace":"kube-system","uid":"c7518f03-993b-432f-b742-8805dd2167a7","resourceVersion":"280","creationTimestamp":"2024-05-01T03:52:15Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"44d7830a7c97b8c7e460c0508d02be4e","kubernetes.io/config.mirror":"44d7830a7c97b8c7e460c0508d02be4e","kubernetes.io/config.seen":"2024-05-01T03:52:15.688771544Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4999 chars]
	I0501 03:55:52.309350   13472 request.go:629] Waited for 197.6531ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.209.152:8443/api/v1/nodes/multinode-289800
	I0501 03:55:52.309662   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes/multinode-289800
	I0501 03:55:52.309662   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:52.309662   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:52.309662   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:52.316270   13472 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 03:55:52.316270   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:52.316270   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:52.316270   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:52.316270   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:52 GMT
	I0501 03:55:52.316270   13472 round_trippers.go:580]     Audit-Id: de8ee2ae-fe27-41a8-8def-665e68df5868
	I0501 03:55:52.316270   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:52.316270   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:52.316820   13472 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"418","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","fi [truncated 4960 chars]
	I0501 03:55:52.316970   13472 pod_ready.go:92] pod "kube-scheduler-multinode-289800" in "kube-system" namespace has status "Ready":"True"
	I0501 03:55:52.316970   13472 pod_ready.go:81] duration metric: took 392.9342ms for pod "kube-scheduler-multinode-289800" in "kube-system" namespace to be "Ready" ...
	I0501 03:55:52.316970   13472 pod_ready.go:38] duration metric: took 1.6059447s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 03:55:52.316970   13472 system_svc.go:44] waiting for kubelet service to be running ....
	I0501 03:55:52.333961   13472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 03:55:52.363198   13472 system_svc.go:56] duration metric: took 46.2272ms WaitForService to wait for kubelet
	I0501 03:55:52.363198   13472 kubeadm.go:576] duration metric: took 24.4622043s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 03:55:52.363198   13472 node_conditions.go:102] verifying NodePressure condition ...
	I0501 03:55:52.513107   13472 request.go:629] Waited for 149.5882ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.209.152:8443/api/v1/nodes
	I0501 03:55:52.513215   13472 round_trippers.go:463] GET https://172.28.209.152:8443/api/v1/nodes
	I0501 03:55:52.513253   13472 round_trippers.go:469] Request Headers:
	I0501 03:55:52.513253   13472 round_trippers.go:473]     Accept: application/json, */*
	I0501 03:55:52.513253   13472 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 03:55:52.517993   13472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 03:55:52.517993   13472 round_trippers.go:577] Response Headers:
	I0501 03:55:52.517993   13472 round_trippers.go:580]     Audit-Id: b785b765-1faf-4390-b9e7-7658ca3caddb
	I0501 03:55:52.517993   13472 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 03:55:52.518525   13472 round_trippers.go:580]     Content-Type: application/json
	I0501 03:55:52.518525   13472 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 03:55:52.518525   13472 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 03:55:52.518525   13472 round_trippers.go:580]     Date: Wed, 01 May 2024 03:55:52 GMT
	I0501 03:55:52.518982   13472 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"616"},"items":[{"metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"418","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 9269 chars]
	I0501 03:55:52.519707   13472 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 03:55:52.519767   13472 node_conditions.go:123] node cpu capacity is 2
	I0501 03:55:52.519767   13472 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 03:55:52.519827   13472 node_conditions.go:123] node cpu capacity is 2
	I0501 03:55:52.519827   13472 node_conditions.go:105] duration metric: took 156.6284ms to run NodePressure ...
	I0501 03:55:52.519827   13472 start.go:240] waiting for startup goroutines ...
	I0501 03:55:52.519888   13472 start.go:254] writing updated cluster config ...
	I0501 03:55:52.534633   13472 ssh_runner.go:195] Run: rm -f paused
	I0501 03:55:52.786976   13472 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0501 03:55:52.793751   13472 out.go:177] * Done! kubectl is now configured to use "multinode-289800" cluster and "default" namespace by default
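	The "Done!" line confirms the kubeconfig context was switched to the new multinode profile. A quick sanity check from the host (a minimal sketch; assumes kubectl is on PATH, as the version probe in the line above implies):
	
	    # kubectl should now point at the multinode-289800 cluster
	    kubectl config current-context
	    # both nodes should report Ready, matching the describe output further down
	    kubectl --context multinode-289800 get nodes -o wide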
	
	
	==> Docker <==
	May 01 03:52:40 multinode-289800 cri-dockerd[1236]: time="2024-05-01T03:52:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9d509d032dc607c6f771d62e39b125d9ec4ef121fdbac0798c929fe3f1662c88/resolv.conf as [nameserver 172.28.208.1]"
	May 01 03:52:40 multinode-289800 cri-dockerd[1236]: time="2024-05-01T03:52:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/baf9e690eb533d1d1d65dee3905f907946c145ab490fd4e62c3d724a0ba12193/resolv.conf as [nameserver 172.28.208.1]"
	May 01 03:52:40 multinode-289800 cri-dockerd[1236]: time="2024-05-01T03:52:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9971ef577f2f8634ce17f0dd1b9640fcf2695833e8dc85607abd2a82571746b8/resolv.conf as [nameserver 172.28.208.1]"
	May 01 03:52:40 multinode-289800 dockerd[1336]: time="2024-05-01T03:52:40.715099617Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 03:52:40 multinode-289800 dockerd[1336]: time="2024-05-01T03:52:40.717283313Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 03:52:40 multinode-289800 dockerd[1336]: time="2024-05-01T03:52:40.717677212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 03:52:40 multinode-289800 dockerd[1336]: time="2024-05-01T03:52:40.718277111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 03:52:40 multinode-289800 dockerd[1336]: time="2024-05-01T03:52:40.778289710Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 03:52:40 multinode-289800 dockerd[1336]: time="2024-05-01T03:52:40.778518910Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 03:52:40 multinode-289800 dockerd[1336]: time="2024-05-01T03:52:40.778613309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 03:52:40 multinode-289800 dockerd[1336]: time="2024-05-01T03:52:40.778775409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 03:52:40 multinode-289800 dockerd[1336]: time="2024-05-01T03:52:40.830587122Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 03:52:40 multinode-289800 dockerd[1336]: time="2024-05-01T03:52:40.830840321Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 03:52:40 multinode-289800 dockerd[1336]: time="2024-05-01T03:52:40.830853921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 03:52:40 multinode-289800 dockerd[1336]: time="2024-05-01T03:52:40.831482620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 03:56:17 multinode-289800 dockerd[1336]: time="2024-05-01T03:56:17.898167510Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 03:56:17 multinode-289800 dockerd[1336]: time="2024-05-01T03:56:17.898413110Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 03:56:17 multinode-289800 dockerd[1336]: time="2024-05-01T03:56:17.898436710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 03:56:17 multinode-289800 dockerd[1336]: time="2024-05-01T03:56:17.898597911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 03:56:18 multinode-289800 cri-dockerd[1236]: time="2024-05-01T03:56:18Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/79bf9ebb58e36ddfba4654e8de212598f75bb256849f4fa384c80d54954f68f5/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	May 01 03:56:19 multinode-289800 cri-dockerd[1236]: time="2024-05-01T03:56:19Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	May 01 03:56:19 multinode-289800 dockerd[1336]: time="2024-05-01T03:56:19.582622503Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 03:56:19 multinode-289800 dockerd[1336]: time="2024-05-01T03:56:19.582703904Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 03:56:19 multinode-289800 dockerd[1336]: time="2024-05-01T03:56:19.582722604Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 03:56:19 multinode-289800 dockerd[1336]: time="2024-05-01T03:56:19.582828505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	237d3dab2c4e1       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   49 seconds ago      Running             busybox                   0                   79bf9ebb58e36       busybox-fc5497c4f-cc6mk
	15c4496e3a9f0       cbb01a7bd410d                                                                                         4 minutes ago       Running             coredns                   0                   baf9e690eb533       coredns-7db6d8ff4d-x9zrw
	ee2238f98e350       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       0                   9971ef577f2f8       storage-provisioner
	3e8d5ff9a9e4a       cbb01a7bd410d                                                                                         4 minutes ago       Running             coredns                   0                   9d509d032dc60       coredns-7db6d8ff4d-8w9hq
	6d5f881ef3987       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              4 minutes ago       Running             kindnet-cni               0                   4df6ba73bcf68       kindnet-vcxkr
	502684407b0cf       a0bf559e280cf                                                                                         4 minutes ago       Running             kube-proxy                0                   79bb6a06ed527       kube-proxy-bp9zx
	3244d1ee5ab42       3861cfcd7c04c                                                                                         5 minutes ago       Running             etcd                      0                   a338ea43bd9b0       etcd-multinode-289800
	4b62556f40bec       c7aad43836fa5                                                                                         5 minutes ago       Running             kube-controller-manager   0                   f72a1c5b5cdd6       kube-controller-manager-multinode-289800
	bbbe9bf276852       c42f13656d0b2                                                                                         5 minutes ago       Running             kube-apiserver            0                   976a9ff433ccb       kube-apiserver-multinode-289800
	06f1f84bfde17       259c8277fcbbc                                                                                         5 minutes ago       Running             kube-scheduler            0                   479b3ec741bef       kube-scheduler-multinode-289800
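	The container status table above is CRI-level state from inside the control-plane VM. It can be reproduced directly with crictl, which ships in the minikube guest (a sketch, assuming the profile name from this run):
	
	    # list all CRI containers on the control-plane node
	    minikube -p multinode-289800 ssh -- sudo crictl ps -a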
	
	
	==> coredns [15c4496e3a9f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 93f4fe825695a41d5751760d66496ca9593f0bf8b9ec4239a6fbc33c30b6f070cfe5a066c5977052ab0ea80abbafa6083758e385108cb741ae48dc4b780ff0f9
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:39552 - 50904 "HINFO IN 5304382971668517624.9064195615153089880. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.182051644s
	[INFO] 10.244.0.4:36718 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000271601s
	[INFO] 10.244.0.4:43708 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.179550625s
	[INFO] 10.244.1.2:58483 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144401s
	[INFO] 10.244.1.2:60628 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.0000807s
	[INFO] 10.244.0.4:37023 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.037009067s
	[INFO] 10.244.0.4:35134 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000257602s
	[INFO] 10.244.0.4:42831 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000330103s
	[INFO] 10.244.0.4:35030 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000223102s
	[INFO] 10.244.1.2:54088 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000207601s
	[INFO] 10.244.1.2:39978 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000230801s
	[INFO] 10.244.1.2:55944 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000162801s
	[INFO] 10.244.1.2:53350 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000088901s
	[INFO] 10.244.0.4:33705 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000251702s
	[INFO] 10.244.0.4:58457 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000202201s
	[INFO] 10.244.1.2:55547 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117201s
	[INFO] 10.244.1.2:52015 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000146501s
	[INFO] 10.244.0.4:59703 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000247901s
	[INFO] 10.244.0.4:43545 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000196701s
	[INFO] 10.244.1.2:36180 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000726s
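	The query log above shows clients in both pod subnets (10.244.0.0/24 on the control plane, 10.244.1.0/24 on m02) resolving cluster-internal and host names. In-cluster DNS can be spot-checked with a throwaway pod (a sketch; busybox:1.28 is chosen because its nslookup is reliable and the image was already pulled during this run):
	
	    kubectl --context multinode-289800 run --rm -it dns-check \
	      --image=gcr.io/k8s-minikube/busybox:1.28 --restart=Never -- \
	      nslookup kubernetes.default.svc.cluster.local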
	
	
	==> coredns [3e8d5ff9a9e4] <==
	[INFO] plugin/reload: Running configuration SHA512 = 93f4fe825695a41d5751760d66496ca9593f0bf8b9ec4239a6fbc33c30b6f070cfe5a066c5977052ab0ea80abbafa6083758e385108cb741ae48dc4b780ff0f9
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:47823 - 12804 "HINFO IN 6026210510891441927.5093937837002421400. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.138242746s
	[INFO] 10.244.0.4:41822 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.208275106s
	[INFO] 10.244.0.4:42126 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.044846324s
	[INFO] 10.244.1.2:55497 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000133701s
	[INFO] 10.244.1.2:47095 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000068901s
	[INFO] 10.244.0.4:34122 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000644805s
	[INFO] 10.244.0.4:46878 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000252202s
	[INFO] 10.244.0.4:40098 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000136701s
	[INFO] 10.244.0.4:35873 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.03321874s
	[INFO] 10.244.1.2:36243 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.016690721s
	[INFO] 10.244.1.2:38582 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000648s
	[INFO] 10.244.1.2:43903 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000106801s
	[INFO] 10.244.1.2:34736 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000102201s
	[INFO] 10.244.0.4:54471 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000213002s
	[INFO] 10.244.0.4:34585 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000266702s
	[INFO] 10.244.1.2:55135 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142801s
	[INFO] 10.244.1.2:53626 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000732s
	[INFO] 10.244.0.4:57975 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000425703s
	[INFO] 10.244.0.4:51644 - 5 "PTR IN 1.208.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000121401s
	[INFO] 10.244.1.2:42930 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000246601s
	[INFO] 10.244.1.2:59495 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000199302s
	[INFO] 10.244.1.2:34672 - 5 "PTR IN 1.208.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000155401s
	
	
	==> describe nodes <==
	Name:               multinode-289800
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-289800
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	                    minikube.k8s.io/name=multinode-289800
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_01T03_52_17_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 May 2024 03:52:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-289800
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 May 2024 03:57:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 May 2024 03:56:52 +0000   Wed, 01 May 2024 03:52:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 May 2024 03:56:52 +0000   Wed, 01 May 2024 03:52:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 May 2024 03:56:52 +0000   Wed, 01 May 2024 03:52:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 May 2024 03:56:52 +0000   Wed, 01 May 2024 03:52:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.209.152
	  Hostname:    multinode-289800
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 c40b9569c9a64effb4842466f948a2b2
	  System UUID:                3951d3b5-ddd4-174a-8cfe-7f86ac2b780b
	  Boot ID:                    b5984120-f3ba-49ce-863e-3c58e68c86ae
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-cc6mk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 coredns-7db6d8ff4d-8w9hq                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m39s
	  kube-system                 coredns-7db6d8ff4d-x9zrw                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m39s
	  kube-system                 etcd-multinode-289800                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m53s
	  kube-system                 kindnet-vcxkr                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m39s
	  kube-system                 kube-apiserver-multinode-289800             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 kube-controller-manager-multinode-289800    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 kube-proxy-bp9zx                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 kube-scheduler-multinode-289800             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m37s  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m1s   kubelet          Node multinode-289800 status is now: NodeHasSufficientMemory
	  Normal  Starting                 4m53s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m53s  kubelet          Node multinode-289800 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m53s  kubelet          Node multinode-289800 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m53s  kubelet          Node multinode-289800 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m53s  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m40s  node-controller  Node multinode-289800 event: Registered Node multinode-289800 in Controller
	  Normal  NodeReady                4m29s  kubelet          Node multinode-289800 status is now: NodeReady
	
	
	Name:               multinode-289800-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-289800-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	                    minikube.k8s.io/name=multinode-289800
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_01T03_55_27_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 May 2024 03:55:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-289800-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 May 2024 03:56:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 May 2024 03:56:28 +0000   Wed, 01 May 2024 03:55:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 May 2024 03:56:28 +0000   Wed, 01 May 2024 03:55:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 May 2024 03:56:28 +0000   Wed, 01 May 2024 03:55:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 May 2024 03:56:28 +0000   Wed, 01 May 2024 03:55:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.219.162
	  Hostname:    multinode-289800-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 076f7b95819747b9b94c7306ec3a1144
	  System UUID:                a38b9d92-b32b-ca41-91ed-de4d374d0e70
	  Boot ID:                    c2ea27f4-2800-46b2-ab1f-c82bf0989c34
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-tbxxx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 kindnet-gzz7p              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      101s
	  kube-system                 kube-proxy-rlzp8           0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 89s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  101s (x2 over 101s)  kubelet          Node multinode-289800-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    101s (x2 over 101s)  kubelet          Node multinode-289800-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     101s (x2 over 101s)  kubelet          Node multinode-289800-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  101s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           100s                 node-controller  Node multinode-289800-m02 event: Registered Node multinode-289800-m02 in Controller
	  Normal  NodeReady                78s                  kubelet          Node multinode-289800-m02 status is now: NodeReady
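	Both node descriptions report the same capacity (cpu 2, ephemeral-storage 17734596Ki), which is exactly what the node_conditions lines in the minikube log above verified. The same figures can be pulled without the full describe dump (a sketch using standard jsonpath):
	
	    kubectl --context multinode-289800 get nodes -o jsonpath='{range .items[*]}{.metadata.name} {.status.capacity.cpu} {.status.capacity.ephemeral-storage}{"\n"}{end}'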
	
	
	==> dmesg <==
	[  +7.467833] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[May 1 03:51] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +0.204084] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[ +31.247013] systemd-fstab-generator[955]: Ignoring "noauto" option for root device
	[  +0.112196] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.624167] systemd-fstab-generator[993]: Ignoring "noauto" option for root device
	[  +0.237861] systemd-fstab-generator[1005]: Ignoring "noauto" option for root device
	[  +0.266160] systemd-fstab-generator[1019]: Ignoring "noauto" option for root device
	[  +2.884479] systemd-fstab-generator[1189]: Ignoring "noauto" option for root device
	[  +0.220001] systemd-fstab-generator[1201]: Ignoring "noauto" option for root device
	[  +0.202738] systemd-fstab-generator[1213]: Ignoring "noauto" option for root device
	[  +0.297082] systemd-fstab-generator[1228]: Ignoring "noauto" option for root device
	[ +11.631994] systemd-fstab-generator[1322]: Ignoring "noauto" option for root device
	[  +0.123596] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.826012] systemd-fstab-generator[1518]: Ignoring "noauto" option for root device
	[May 1 03:52] systemd-fstab-generator[1723]: Ignoring "noauto" option for root device
	[  +0.120554] kauditd_printk_skb: 73 callbacks suppressed
	[  +8.576386] systemd-fstab-generator[2125]: Ignoring "noauto" option for root device
	[  +0.179510] kauditd_printk_skb: 62 callbacks suppressed
	[ +13.598982] systemd-fstab-generator[2307]: Ignoring "noauto" option for root device
	[  +0.198877] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.223311] kauditd_printk_skb: 51 callbacks suppressed
	[  +5.206321] kauditd_printk_skb: 33 callbacks suppressed
	
	
	==> etcd [3244d1ee5ab4] <==
	{"level":"info","ts":"2024-05-01T03:52:09.627302Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.28.209.152:2380"}
	{"level":"info","ts":"2024-05-01T03:52:09.650156Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.28.209.152:2380"}
	{"level":"info","ts":"2024-05-01T03:52:10.114063Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 is starting a new election at term 1"}
	{"level":"info","ts":"2024-05-01T03:52:10.11425Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-05-01T03:52:10.114372Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 received MsgPreVoteResp from fe483b81e7b7d166 at term 1"}
	{"level":"info","ts":"2024-05-01T03:52:10.114509Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 became candidate at term 2"}
	{"level":"info","ts":"2024-05-01T03:52:10.114594Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 received MsgVoteResp from fe483b81e7b7d166 at term 2"}
	{"level":"info","ts":"2024-05-01T03:52:10.114651Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 became leader at term 2"}
	{"level":"info","ts":"2024-05-01T03:52:10.114751Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fe483b81e7b7d166 elected leader fe483b81e7b7d166 at term 2"}
	{"level":"info","ts":"2024-05-01T03:52:10.125667Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-01T03:52:10.132537Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"fe483b81e7b7d166","local-member-attributes":"{Name:multinode-289800 ClientURLs:[https://172.28.209.152:2379]}","request-path":"/0/members/fe483b81e7b7d166/attributes","cluster-id":"d720844a1e03b483","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-01T03:52:10.132658Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-01T03:52:10.136151Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-01T03:52:10.136265Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-01T03:52:10.136369Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d720844a1e03b483","local-member-id":"fe483b81e7b7d166","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-01T03:52:10.136491Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-01T03:52:10.136594Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-01T03:52:10.132765Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-01T03:52:10.152493Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.28.209.152:2379"}
	{"level":"info","ts":"2024-05-01T03:52:10.168256Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-01T03:52:37.128109Z","caller":"traceutil/trace.go:171","msg":"trace[441468184] transaction","detail":"{read_only:false; response_revision:363; number_of_response:1; }","duration":"300.760555ms","start":"2024-05-01T03:52:36.827325Z","end":"2024-05-01T03:52:37.128085Z","steps":["trace[441468184] 'process raft request'  (duration: 202.533136ms)","trace[441468184] 'compare'  (duration: 97.821821ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-01T03:52:37.129609Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-01T03:52:36.827309Z","time spent":"300.881455ms","remote":"127.0.0.1:34108","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":706,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" value_size:641 >> failure:<>"}
	{"level":"info","ts":"2024-05-01T03:52:59.479699Z","caller":"traceutil/trace.go:171","msg":"trace[610102705] transaction","detail":"{read_only:false; response_revision:428; number_of_response:1; }","duration":"100.567645ms","start":"2024-05-01T03:52:59.379114Z","end":"2024-05-01T03:52:59.479682Z","steps":["trace[610102705] 'process raft request'  (duration: 100.136147ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-01T03:55:19.981144Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"141.524477ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15088905047670271853 > lease_revoke:<id:51668f3247fc2f27>","response":"size:29"}
	{"level":"info","ts":"2024-05-01T03:55:20.972645Z","caller":"traceutil/trace.go:171","msg":"trace[835866321] transaction","detail":"{read_only:false; response_revision:540; number_of_response:1; }","duration":"104.957859ms","start":"2024-05-01T03:55:20.867655Z","end":"2024-05-01T03:55:20.972613Z","steps":["trace[835866321] 'process raft request'  (duration: 104.737259ms)"],"step_count":1}
	
	
	==> kernel <==
	 03:57:08 up 7 min,  0 users,  load average: 0.22, 0.46, 0.27
	Linux multinode-289800 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [6d5f881ef398] <==
	I0501 03:56:08.604686       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 03:56:18.613894       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 03:56:18.614039       1 main.go:227] handling current node
	I0501 03:56:18.614056       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 03:56:18.614065       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 03:56:28.620669       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 03:56:28.620870       1 main.go:227] handling current node
	I0501 03:56:28.620914       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 03:56:28.621071       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 03:56:38.627120       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 03:56:38.627227       1 main.go:227] handling current node
	I0501 03:56:38.627254       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 03:56:38.627263       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 03:56:48.635237       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 03:56:48.635338       1 main.go:227] handling current node
	I0501 03:56:48.635355       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 03:56:48.635380       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 03:56:58.649480       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 03:56:58.649509       1 main.go:227] handling current node
	I0501 03:56:58.650159       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 03:56:58.650201       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 03:57:08.668523       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 03:57:08.669121       1 main.go:227] handling current node
	I0501 03:57:08.669297       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 03:57:08.669334       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
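	kindnet's steady-state loop above only confirms it sees both nodes and m02's pod CIDR (10.244.1.0/24). The route it programs can be verified from the control-plane node (a sketch; 10.244.1.0/24 should route via m02's InternalIP, 172.28.219.162):
	
	    minikube -p multinode-289800 ssh -- ip route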
	
	
	==> kube-apiserver [bbbe9bf27685] <==
	I0501 03:52:13.084840       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0501 03:52:13.093152       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0501 03:52:13.093369       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0501 03:52:14.348727       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0501 03:52:14.442933       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0501 03:52:14.607110       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0501 03:52:14.624497       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.28.209.152]
	I0501 03:52:14.625608       1 controller.go:615] quota admission added evaluator for: endpoints
	I0501 03:52:14.634669       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0501 03:52:15.210763       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0501 03:52:15.603288       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0501 03:52:15.627503       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0501 03:52:15.659929       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0501 03:52:29.197551       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0501 03:52:29.516194       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0501 03:56:22.788194       1 conn.go:339] Error on socket receive: read tcp 172.28.209.152:8443->172.28.208.1:62255: use of closed network connection
	E0501 03:56:23.336311       1 conn.go:339] Error on socket receive: read tcp 172.28.209.152:8443->172.28.208.1:62257: use of closed network connection
	E0501 03:56:23.978639       1 conn.go:339] Error on socket receive: read tcp 172.28.209.152:8443->172.28.208.1:62259: use of closed network connection
	E0501 03:56:24.550283       1 conn.go:339] Error on socket receive: read tcp 172.28.209.152:8443->172.28.208.1:62261: use of closed network connection
	E0501 03:56:25.068967       1 conn.go:339] Error on socket receive: read tcp 172.28.209.152:8443->172.28.208.1:62263: use of closed network connection
	E0501 03:56:25.619551       1 conn.go:339] Error on socket receive: read tcp 172.28.209.152:8443->172.28.208.1:62265: use of closed network connection
	E0501 03:56:26.598687       1 conn.go:339] Error on socket receive: read tcp 172.28.209.152:8443->172.28.208.1:62268: use of closed network connection
	E0501 03:56:37.130685       1 conn.go:339] Error on socket receive: read tcp 172.28.209.152:8443->172.28.208.1:62270: use of closed network connection
	E0501 03:56:37.653734       1 conn.go:339] Error on socket receive: read tcp 172.28.209.152:8443->172.28.208.1:62273: use of closed network connection
	E0501 03:56:48.199739       1 conn.go:339] Error on socket receive: read tcp 172.28.209.152:8443->172.28.208.1:62275: use of closed network connection
	
	
	==> kube-controller-manager [4b62556f40be] <==
	I0501 03:52:29.739066       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="528.452632ms"
	I0501 03:52:29.796611       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="57.235573ms"
	I0501 03:52:29.797135       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="429.196µs"
	I0501 03:52:29.797745       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="61.4µs"
	I0501 03:52:39.341653       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="93.1µs"
	I0501 03:52:39.358462       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="77.3µs"
	I0501 03:52:39.377150       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="79.9µs"
	I0501 03:52:39.403208       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="64.2µs"
	I0501 03:52:41.593793       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="63.7µs"
	I0501 03:52:41.686793       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="54.969221ms"
	I0501 03:52:41.713891       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="26.932914ms"
	I0501 03:52:41.714840       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="58.4µs"
	I0501 03:52:43.686562       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0501 03:55:27.159233       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-289800-m02\" does not exist"
	I0501 03:55:27.216693       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-289800-m02" podCIDRs=["10.244.1.0/24"]
	I0501 03:55:28.718620       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-289800-m02"
	I0501 03:55:50.611680       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 03:56:17.356814       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="84.46504ms"
	I0501 03:56:17.371366       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.143719ms"
	I0501 03:56:17.372124       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="142.3µs"
	I0501 03:56:17.379164       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.7µs"
	I0501 03:56:19.725403       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.097702ms"
	I0501 03:56:19.728196       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="2.611719ms"
	I0501 03:56:19.839218       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.233167ms"
	I0501 03:56:19.839355       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.1µs"
	
	
	==> kube-proxy [502684407b0c] <==
	I0501 03:52:31.254714       1 server_linux.go:69] "Using iptables proxy"
	I0501 03:52:31.309383       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.28.209.152"]
	I0501 03:52:31.368810       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0501 03:52:31.368955       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0501 03:52:31.368982       1 server_linux.go:165] "Using iptables Proxier"
	I0501 03:52:31.375383       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0501 03:52:31.376367       1 server.go:872] "Version info" version="v1.30.0"
	I0501 03:52:31.376406       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 03:52:31.379637       1 config.go:192] "Starting service config controller"
	I0501 03:52:31.380342       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0501 03:52:31.380587       1 config.go:101] "Starting endpoint slice config controller"
	I0501 03:52:31.380650       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0501 03:52:31.383140       1 config.go:319] "Starting node config controller"
	I0501 03:52:31.383173       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0501 03:52:31.480698       1 shared_informer.go:320] Caches are synced for service config
	I0501 03:52:31.481316       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0501 03:52:31.483428       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [06f1f84bfde1] <==
	W0501 03:52:13.194299       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0501 03:52:13.194526       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0501 03:52:13.234721       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0501 03:52:13.235310       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0501 03:52:13.292208       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0501 03:52:13.292830       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0501 03:52:13.389881       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0501 03:52:13.390057       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0501 03:52:13.433548       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0501 03:52:13.433622       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0501 03:52:13.511617       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0501 03:52:13.511761       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0501 03:52:13.522760       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0501 03:52:13.522812       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0501 03:52:13.723200       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0501 03:52:13.723365       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0501 03:52:13.767195       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0501 03:52:13.767262       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0501 03:52:13.799936       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0501 03:52:13.799967       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0501 03:52:13.840187       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0501 03:52:13.840304       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0501 03:52:13.853401       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0501 03:52:13.853454       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0501 03:52:16.553388       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 01 03:52:41 multinode-289800 kubelet[2132]: I0501 03:52:41.568627    2132 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=4.568607519 podStartE2EDuration="4.568607519s" podCreationTimestamp="2024-05-01 03:52:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-01 03:52:41.568219021 +0000 UTC m=+26.050149519" watchObservedRunningTime="2024-05-01 03:52:41.568607519 +0000 UTC m=+26.050538017"
	May 01 03:52:41 multinode-289800 kubelet[2132]: I0501 03:52:41.630049    2132 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podStartSLOduration=12.629978795 podStartE2EDuration="12.629978795s" podCreationTimestamp="2024-05-01 03:52:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-01 03:52:41.593722245 +0000 UTC m=+26.075652743" watchObservedRunningTime="2024-05-01 03:52:41.629978795 +0000 UTC m=+26.111909293"
	May 01 03:53:15 multinode-289800 kubelet[2132]: E0501 03:53:15.809445    2132 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 03:53:15 multinode-289800 kubelet[2132]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 03:53:15 multinode-289800 kubelet[2132]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 03:53:15 multinode-289800 kubelet[2132]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 03:53:15 multinode-289800 kubelet[2132]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 03:54:15 multinode-289800 kubelet[2132]: E0501 03:54:15.809816    2132 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 03:54:15 multinode-289800 kubelet[2132]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 03:54:15 multinode-289800 kubelet[2132]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 03:54:15 multinode-289800 kubelet[2132]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 03:54:15 multinode-289800 kubelet[2132]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 03:55:15 multinode-289800 kubelet[2132]: E0501 03:55:15.810106    2132 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 03:55:15 multinode-289800 kubelet[2132]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 03:55:15 multinode-289800 kubelet[2132]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 03:55:15 multinode-289800 kubelet[2132]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 03:55:15 multinode-289800 kubelet[2132]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 03:56:15 multinode-289800 kubelet[2132]: E0501 03:56:15.808539    2132 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 03:56:15 multinode-289800 kubelet[2132]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 03:56:15 multinode-289800 kubelet[2132]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 03:56:15 multinode-289800 kubelet[2132]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 03:56:15 multinode-289800 kubelet[2132]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 03:56:17 multinode-289800 kubelet[2132]: I0501 03:56:17.347673    2132 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podStartSLOduration=228.3476535 podStartE2EDuration="3m48.3476535s" podCreationTimestamp="2024-05-01 03:52:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-01 03:52:41.681270241 +0000 UTC m=+26.163200739" watchObservedRunningTime="2024-05-01 03:56:17.3476535 +0000 UTC m=+241.829583998"
	May 01 03:56:17 multinode-289800 kubelet[2132]: I0501 03:56:17.348717    2132 topology_manager.go:215] "Topology Admit Handler" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f" podNamespace="default" podName="busybox-fc5497c4f-cc6mk"
	May 01 03:56:17 multinode-289800 kubelet[2132]: I0501 03:56:17.470069    2132 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r64v\" (UniqueName: \"kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v\") pod \"busybox-fc5497c4f-cc6mk\" (UID: \"7f61e6ee-cf9a-4903-ba51-2a3b6804717f\") " pod="default/busybox-fc5497c4f-cc6mk"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0501 03:57:00.472686    9692 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
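The repeated kubelet lines in the dump above are the hourly iptables-canary failure: ip6tables cannot initialize the `nat` table because the guest kernel lacks ip6tables NAT support, so creating the KUBE-KUBELET-CANARY chain exits with status 3. This is background noise rather than the ping failure itself. A minimal sketch of roughly the same probe (assuming ip6tables is installed in the guest, as the log shows):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Listing the nat table fails non-zero when the kernel cannot
		// provide it, matching the "can't initialize ip6tables table
		// `nat'" error the kubelet logs above.
		err := exec.Command("ip6tables", "-t", "nat", "-nL").Run()
		if ee, ok := err.(*exec.ExitError); ok {
			fmt.Println("ip6tables nat unavailable, exit code:", ee.ExitCode())
		}
	}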
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-289800 -n multinode-289800
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-289800 -n multinode-289800: (12.121004s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-289800 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (57.38s)
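The stderr warning about `...\contexts\meta\37a8eec1...\meta.json` recurs throughout this run and explains the non-empty-stderr assertions: the Docker CLI names each context's metadata directory after the SHA-256 of the context name, and 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f is the digest of the string "default", whose metadata was never materialized on this host. A quick sketch to confirm the digest:

	package main

	import (
		"crypto/sha256"
		"fmt"
	)

	func main() {
		// Prints the SHA-256 of "default" — the directory name the CLI
		// is looking for in the warning above.
		fmt.Printf("%x\n", sha256.Sum256([]byte("default")))
	}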

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (488.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-289800
multinode_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-289800
multinode_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-289800: (1m40.7087008s)
multinode_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-289800 --wait=true -v=8 --alsologtostderr
E0501 04:13:38.027656   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\client.crt: The system cannot find the path specified.
E0501 04:16:34.994160   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt: The system cannot find the path specified.
E0501 04:17:58.254043   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt: The system cannot find the path specified.
E0501 04:18:38.028542   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\client.crt: The system cannot find the path specified.
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-289800 --wait=true -v=8 --alsologtostderr: exit status 1 (5m36.6621992s)

                                                
                                                
-- stdout --
	* [multinode-289800] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18779
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "multinode-289800" primary control-plane node in "multinode-289800" cluster
	* Restarting existing hyperv VM for "multinode-289800" ...
	* Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	
	* Starting "multinode-289800-m02" worker node in "multinode-289800" cluster
	* Restarting existing hyperv VM for "multinode-289800-m02" ...
	* Found network options:
	  - NO_PROXY=172.28.209.199
	  - NO_PROXY=172.28.209.199

                                                
                                                
-- /stdout --
** stderr ** 
	W0501 04:13:31.203447    4352 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0501 04:13:31.288320    4352 out.go:291] Setting OutFile to fd 940 ...
	I0501 04:13:31.288947    4352 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 04:13:31.289022    4352 out.go:304] Setting ErrFile to fd 872...
	I0501 04:13:31.289022    4352 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 04:13:31.317764    4352 out.go:298] Setting JSON to false
	I0501 04:13:31.321501    4352 start.go:129] hostinfo: {"hostname":"minikube6","uptime":109865,"bootTime":1714426945,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0501 04:13:31.321501    4352 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0501 04:13:31.486610    4352 out.go:177] * [multinode-289800] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0501 04:13:31.500668    4352 notify.go:220] Checking for updates...
	I0501 04:13:31.647903    4352 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0501 04:13:31.864863    4352 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0501 04:13:32.043046    4352 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0501 04:13:32.130520    4352 out.go:177]   - MINIKUBE_LOCATION=18779
	I0501 04:13:32.227582    4352 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0501 04:13:32.391630    4352 config.go:182] Loaded profile config "multinode-289800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 04:13:32.391885    4352 driver.go:392] Setting default libvirt URI to qemu:///system
	I0501 04:13:37.854108    4352 out.go:177] * Using the hyperv driver based on existing profile
	I0501 04:13:37.857331    4352 start.go:297] selected driver: hyperv
	I0501 04:13:37.857446    4352 start.go:901] validating driver "hyperv" against &{Name:multinode-289800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-289800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.209.152 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.219.162 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.28.223.145 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 04:13:37.857707    4352 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0501 04:13:37.924974    4352 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 04:13:37.925065    4352 cni.go:84] Creating CNI manager for ""
	I0501 04:13:37.925065    4352 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0501 04:13:37.925303    4352 start.go:340] cluster config:
	{Name:multinode-289800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-289800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.209.152 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.219.162 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.28.223.145 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 04:13:37.925717    4352 iso.go:125] acquiring lock: {Name:mkc5178610d1c169635b8b232f2713c359020679 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 04:13:37.937898    4352 out.go:177] * Starting "multinode-289800" primary control-plane node in "multinode-289800" cluster
	I0501 04:13:37.942400    4352 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0501 04:13:37.943382    4352 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0501 04:13:37.943482    4352 cache.go:56] Caching tarball of preloaded images
	I0501 04:13:37.943655    4352 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0501 04:13:37.944011    4352 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0501 04:13:37.944211    4352 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\config.json ...
	I0501 04:13:37.947189    4352 start.go:360] acquireMachinesLock for multinode-289800: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 04:13:37.947418    4352 start.go:364] duration metric: took 229.5µs to acquireMachinesLock for "multinode-289800"
	I0501 04:13:37.947418    4352 start.go:96] Skipping create...Using existing machine configuration
	I0501 04:13:37.947418    4352 fix.go:54] fixHost starting: 
	I0501 04:13:37.948120    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 04:13:40.670202    4352 main.go:141] libmachine: [stdout =====>] : Off
	
	I0501 04:13:40.670771    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:13:40.670771    4352 fix.go:112] recreateIfNeeded on multinode-289800: state=Stopped err=<nil>
	W0501 04:13:40.670942    4352 fix.go:138] unexpected machine state, will restart: <nil>
	I0501 04:13:40.678157    4352 out.go:177] * Restarting existing hyperv VM for "multinode-289800" ...
	I0501 04:13:40.681664    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-289800
	I0501 04:13:43.752436    4352 main.go:141] libmachine: [stdout =====>] : 
	I0501 04:13:43.752436    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:13:43.752436    4352 main.go:141] libmachine: Waiting for host to start...
	I0501 04:13:43.752538    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 04:13:45.940331    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:13:45.940331    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:13:45.940433    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 04:13:48.396560    4352 main.go:141] libmachine: [stdout =====>] : 
	I0501 04:13:48.396560    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:13:49.407903    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 04:13:51.581304    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:13:51.581480    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:13:51.581575    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 04:13:54.138280    4352 main.go:141] libmachine: [stdout =====>] : 
	I0501 04:13:54.138280    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:13:55.145649    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 04:13:57.281580    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:13:57.282165    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:13:57.282290    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 04:13:59.773215    4352 main.go:141] libmachine: [stdout =====>] : 
	I0501 04:13:59.773215    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:14:00.787459    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 04:14:02.974363    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:14:02.974363    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:14:02.974363    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 04:14:05.527451    4352 main.go:141] libmachine: [stdout =====>] : 
	I0501 04:14:05.527451    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:14:06.536170    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 04:14:08.686994    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:14:08.687999    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:14:08.688119    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 04:14:11.254131    4352 main.go:141] libmachine: [stdout =====>] : 172.28.209.199
	
	I0501 04:14:11.254131    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:14:11.257032    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 04:14:13.353414    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:14:13.354024    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:14:13.354024    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 04:14:15.869222    4352 main.go:141] libmachine: [stdout =====>] : 172.28.209.199
	
	I0501 04:14:15.869222    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:14:15.869705    4352 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\config.json ...
	I0501 04:14:15.872177    4352 machine.go:94] provisionDockerMachine start ...
	I0501 04:14:15.872390    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 04:14:17.976735    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:14:17.976838    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:14:17.976838    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 04:14:20.550671    4352 main.go:141] libmachine: [stdout =====>] : 172.28.209.199
	
	I0501 04:14:20.550671    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:14:20.557921    4352 main.go:141] libmachine: Using SSH client type: native
	I0501 04:14:20.558543    4352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.209.199 22 <nil> <nil>}
	I0501 04:14:20.558708    4352 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 04:14:20.688461    4352 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0501 04:14:20.688525    4352 buildroot.go:166] provisioning hostname "multinode-289800"
	I0501 04:14:20.688588    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 04:14:22.841376    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:14:22.841376    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:14:22.841376    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 04:14:25.366118    4352 main.go:141] libmachine: [stdout =====>] : 172.28.209.199
	
	I0501 04:14:25.366118    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:14:25.372321    4352 main.go:141] libmachine: Using SSH client type: native
	I0501 04:14:25.372682    4352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.209.199 22 <nil> <nil>}
	I0501 04:14:25.372819    4352 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-289800 && echo "multinode-289800" | sudo tee /etc/hostname
	I0501 04:14:25.534851    4352 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-289800
	
	I0501 04:14:25.535124    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 04:14:27.621237    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:14:27.621410    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:14:27.621495    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 04:14:30.206576    4352 main.go:141] libmachine: [stdout =====>] : 172.28.209.199
	
	I0501 04:14:30.206576    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:14:30.214870    4352 main.go:141] libmachine: Using SSH client type: native
	I0501 04:14:30.215449    4352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.209.199 22 <nil> <nil>}
	I0501 04:14:30.215449    4352 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-289800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-289800/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-289800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 04:14:30.374292    4352 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 04:14:30.374292    4352 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0501 04:14:30.374292    4352 buildroot.go:174] setting up certificates
	I0501 04:14:30.374292    4352 provision.go:84] configureAuth start
	I0501 04:14:30.374292    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 04:14:32.472085    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:14:32.472385    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:14:32.472385    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 04:14:34.988029    4352 main.go:141] libmachine: [stdout =====>] : 172.28.209.199
	
	I0501 04:14:34.988029    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:14:34.988541    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 04:14:37.075640    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:14:37.075640    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:14:37.075810    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 04:14:39.576995    4352 main.go:141] libmachine: [stdout =====>] : 172.28.209.199
	
	I0501 04:14:39.577255    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:14:39.577255    4352 provision.go:143] copyHostCerts
	I0501 04:14:39.577255    4352 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0501 04:14:39.577255    4352 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0501 04:14:39.577255    4352 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0501 04:14:39.577853    4352 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0501 04:14:39.579132    4352 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0501 04:14:39.579491    4352 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0501 04:14:39.579491    4352 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0501 04:14:39.579491    4352 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0501 04:14:39.580823    4352 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0501 04:14:39.580823    4352 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0501 04:14:39.580823    4352 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0501 04:14:39.581410    4352 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0501 04:14:39.582360    4352 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-289800 san=[127.0.0.1 172.28.209.199 localhost minikube multinode-289800]
	I0501 04:14:39.718225    4352 provision.go:177] copyRemoteCerts
	I0501 04:14:39.731115    4352 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 04:14:39.731115    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 04:14:41.855991    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:14:41.856471    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:14:41.856471    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 04:14:44.416880    4352 main.go:141] libmachine: [stdout =====>] : 172.28.209.199
	
	I0501 04:14:44.416880    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:14:44.418136    4352 sshutil.go:53] new ssh client: &{IP:172.28.209.199 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-289800\id_rsa Username:docker}
	I0501 04:14:44.535525    4352 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8043742s)
	I0501 04:14:44.535525    4352 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0501 04:14:44.536479    4352 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 04:14:44.588410    4352 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0501 04:14:44.588497    4352 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0501 04:14:44.640732    4352 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0501 04:14:44.641009    4352 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0501 04:14:44.692089    4352 provision.go:87] duration metric: took 14.3176884s to configureAuth
	I0501 04:14:44.692089    4352 buildroot.go:189] setting minikube options for container-runtime
	I0501 04:14:44.692366    4352 config.go:182] Loaded profile config "multinode-289800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 04:14:44.692366    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 04:14:46.768804    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:14:46.768804    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:14:46.768907    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 04:14:49.299376    4352 main.go:141] libmachine: [stdout =====>] : 172.28.209.199
	
	I0501 04:14:49.299992    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:14:49.306589    4352 main.go:141] libmachine: Using SSH client type: native
	I0501 04:14:49.306745    4352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.209.199 22 <nil> <nil>}
	I0501 04:14:49.306745    4352 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0501 04:14:49.450631    4352 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0501 04:14:49.450934    4352 buildroot.go:70] root file system type: tmpfs
	I0501 04:14:49.451237    4352 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0501 04:14:49.451237    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 04:14:51.572015    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:14:51.572132    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:14:51.572455    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 04:14:54.196490    4352 main.go:141] libmachine: [stdout =====>] : 172.28.209.199
	
	I0501 04:14:54.196490    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:14:54.202599    4352 main.go:141] libmachine: Using SSH client type: native
	I0501 04:14:54.203382    4352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.209.199 22 <nil> <nil>}
	I0501 04:14:54.203382    4352 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0501 04:14:54.381919    4352 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0501 04:14:54.382458    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 04:14:56.475679    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:14:56.475679    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:14:56.475679    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 04:14:59.008395    4352 main.go:141] libmachine: [stdout =====>] : 172.28.209.199
	
	I0501 04:14:59.008395    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:14:59.014390    4352 main.go:141] libmachine: Using SSH client type: native
	I0501 04:14:59.014390    4352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.209.199 22 <nil> <nil>}
	I0501 04:14:59.014390    4352 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0501 04:15:01.616721    4352 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0501 04:15:01.616721    4352 machine.go:97] duration metric: took 45.744108s to provisionDockerMachine
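The docker.service step above uses a write-if-changed idiom: the rendered unit goes to docker.service.new, and only when `diff -u` reports a difference is it moved into place followed by daemon-reload/enable/restart (here the diff fails because no unit existed yet, so Docker is freshly enabled, as the symlink message shows). A hedged Go sketch of the same idiom — the function name, signature, and paths are illustrative, not minikube's actual API:

	package provision

	import (
		"bytes"
		"os"
	)

	// installIfChanged writes the rendered unit beside the target and
	// swaps it in only when the contents differ, so the caller knows
	// whether a daemon-reload and service restart are needed.
	func installIfChanged(path string, rendered []byte) (bool, error) {
		old, err := os.ReadFile(path)
		if err == nil && bytes.Equal(old, rendered) {
			return false, nil // unchanged: skip daemon-reload and restart
		}
		if err := os.WriteFile(path+".new", rendered, 0o644); err != nil {
			return false, err
		}
		return true, os.Rename(path+".new", path)
	}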
	I0501 04:15:01.616721    4352 start.go:293] postStartSetup for "multinode-289800" (driver="hyperv")
	I0501 04:15:01.616721    4352 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 04:15:01.631485    4352 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 04:15:01.631485    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 04:15:03.734156    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:15:03.734250    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:15:03.734250    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 04:15:06.289808    4352 main.go:141] libmachine: [stdout =====>] : 172.28.209.199
	
	I0501 04:15:06.296300    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:15:06.297326    4352 sshutil.go:53] new ssh client: &{IP:172.28.209.199 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-289800\id_rsa Username:docker}
	I0501 04:15:06.408676    4352 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7771539s)
	I0501 04:15:06.426553    4352 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 04:15:06.436535    4352 command_runner.go:130] > NAME=Buildroot
	I0501 04:15:06.436535    4352 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0501 04:15:06.436535    4352 command_runner.go:130] > ID=buildroot
	I0501 04:15:06.436535    4352 command_runner.go:130] > VERSION_ID=2023.02.9
	I0501 04:15:06.436535    4352 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0501 04:15:06.436688    4352 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 04:15:06.436688    4352 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0501 04:15:06.437006    4352 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0501 04:15:06.437786    4352 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> 142882.pem in /etc/ssl/certs
	I0501 04:15:06.437786    4352 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /etc/ssl/certs/142882.pem
	I0501 04:15:06.453838    4352 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 04:15:06.476226    4352 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /etc/ssl/certs/142882.pem (1708 bytes)
	I0501 04:15:06.526513    4352 start.go:296] duration metric: took 4.9097546s for postStartSetup
	I0501 04:15:06.526734    4352 fix.go:56] duration metric: took 1m28.5786431s for fixHost
	I0501 04:15:06.526734    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 04:15:08.628233    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:15:08.628233    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:15:08.628233    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 04:15:11.200675    4352 main.go:141] libmachine: [stdout =====>] : 172.28.209.199
	
	I0501 04:15:11.200675    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:15:11.207510    4352 main.go:141] libmachine: Using SSH client type: native
	I0501 04:15:11.207814    4352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.209.199 22 <nil> <nil>}
	I0501 04:15:11.207814    4352 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0501 04:15:11.350053    4352 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714536911.337265550
	
	I0501 04:15:11.350053    4352 fix.go:216] guest clock: 1714536911.337265550
	I0501 04:15:11.350053    4352 fix.go:229] Guest: 2024-05-01 04:15:11.33726555 +0000 UTC Remote: 2024-05-01 04:15:06.5267349 +0000 UTC m=+95.430511901 (delta=4.81053065s)
	I0501 04:15:11.350168    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 04:15:13.448320    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:15:13.448320    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:15:13.448626    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 04:15:15.947081    4352 main.go:141] libmachine: [stdout =====>] : 172.28.209.199
	
	I0501 04:15:15.947829    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:15:15.955347    4352 main.go:141] libmachine: Using SSH client type: native
	I0501 04:15:15.956092    4352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.209.199 22 <nil> <nil>}
	I0501 04:15:15.956092    4352 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714536911
	I0501 04:15:16.107631    4352 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed May  1 04:15:11 UTC 2024
	
	I0501 04:15:16.107631    4352 fix.go:236] clock set: Wed May  1 04:15:11 UTC 2024
	 (err=<nil>)
	I0501 04:15:16.107631    4352 start.go:83] releasing machines lock for "multinode-289800", held for 1m38.1594665s
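fixHost then reconciles the guest clock: it reads `date +%s.%N` over SSH, finds the freshly restarted VM about 4.8s behind the host, and snaps it forward with `sudo date -s @1714536911`. A small sketch of how that seconds.nanoseconds reading maps to the fix command (the literal value is taken from the log above):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	func main() {
		// Parse the guest's `date +%s.%N` output and emit the command
		// used above to set the guest clock to that instant.
		fields := strings.SplitN("1714536911.337265550", ".", 2)
		sec, _ := strconv.ParseInt(fields[0], 10, 64)
		nsec, _ := strconv.ParseInt(fields[1], 10, 64)
		guest := time.Unix(sec, nsec)
		fmt.Printf("guest=%s fix: sudo date -s @%d\n", guest.UTC(), guest.Unix())
	}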
	I0501 04:15:16.108173    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 04:15:18.200936    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:15:18.201521    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:15:18.201521    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 04:15:20.731957    4352 main.go:141] libmachine: [stdout =====>] : 172.28.209.199
	
	I0501 04:15:20.731957    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:15:20.736394    4352 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 04:15:20.736928    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 04:15:20.748881    4352 ssh_runner.go:195] Run: cat /version.json
	I0501 04:15:20.748881    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 04:15:22.934696    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:15:22.934696    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:15:22.935403    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 04:15:22.963657    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:15:22.963657    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:15:22.964039    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 04:15:25.608268    4352 main.go:141] libmachine: [stdout =====>] : 172.28.209.199
	
	I0501 04:15:25.608268    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:15:25.609188    4352 sshutil.go:53] new ssh client: &{IP:172.28.209.199 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-289800\id_rsa Username:docker}
	I0501 04:15:25.636508    4352 main.go:141] libmachine: [stdout =====>] : 172.28.209.199
	
	I0501 04:15:25.636508    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:15:25.636508    4352 sshutil.go:53] new ssh client: &{IP:172.28.209.199 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-289800\id_rsa Username:docker}
	I0501 04:15:25.714513    4352 command_runner.go:130] > {"iso_version": "v1.33.0-1714498396-18779", "kicbase_version": "v0.0.43-1714386659-18769", "minikube_version": "v1.33.0", "commit": "0c7995ab2d4914d5c74027eee5f5d102e19316f2"}
	I0501 04:15:25.714726    4352 ssh_runner.go:235] Completed: cat /version.json: (4.9657508s)
	I0501 04:15:25.730428    4352 ssh_runner.go:195] Run: systemctl --version
	I0501 04:15:25.793949    4352 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0501 04:15:25.794001    4352 command_runner.go:130] > systemd 252 (252)
	I0501 04:15:25.794001    4352 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0501 04:15:25.794001    4352 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0575698s)
	I0501 04:15:25.808805    4352 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0501 04:15:25.817742    4352 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0501 04:15:25.818374    4352 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 04:15:25.832513    4352 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 04:15:25.863279    4352 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0501 04:15:25.863947    4352 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
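Because kindnet was recommended for this multi-node profile, competing CNI configs are parked rather than deleted: the find command above renames anything matching *bridge* or *podman* under /etc/cni/net.d to a .mk_disabled suffix, which is why 87-podman-bridge.conflist shows up as disabled. A minimal Go sketch of the same rename pass, assuming that directory layout:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		// Park competing CNI configs the way the log shows: rename with
		// a suffix so they can be restored later by stripping it.
		for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
			matches, _ := filepath.Glob(pat)
			for _, m := range matches {
				if filepath.Ext(m) == ".mk_disabled" {
					continue // already disabled on a previous pass
				}
				fmt.Println("disabling", m)
				_ = os.Rename(m, m+".mk_disabled")
			}
		}
	}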
	I0501 04:15:25.863947    4352 start.go:494] detecting cgroup driver to use...
	I0501 04:15:25.863947    4352 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 04:15:25.902209    4352 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0501 04:15:25.915429    4352 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0501 04:15:25.950406    4352 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0501 04:15:25.971423    4352 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0501 04:15:25.985607    4352 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0501 04:15:26.021090    4352 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 04:15:26.056538    4352 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0501 04:15:26.091668    4352 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 04:15:26.126978    4352 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 04:15:26.160769    4352 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0501 04:15:26.196167    4352 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0501 04:15:26.231301    4352 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
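
The sed pipeline above (sandbox_image, restrict_oom_score_adj, SystemdCgroup, the runc runtime renames, conf_dir, enable_unprivileged_ports) is plain in-place regex editing of /etc/containerd/config.toml. As a sketch, the key edit, forcing SystemdCgroup = false so containerd uses the cgroupfs driver, looks like this in Go (path and regex taken from the logged command; error handling trimmed):

    // Rewrite SystemdCgroup = ... to false, preserving indentation,
    // mirroring the sed expression logged above.
    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/containerd/config.toml"
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        re := regexp.MustCompile(`(?m)^([ \t]*)SystemdCgroup = .*$`)
        out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
        if err := os.WriteFile(path, out, 0o644); err != nil {
            panic(err)
        }
    }
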
	I0501 04:15:26.268795    4352 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 04:15:26.288239    4352 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0501 04:15:26.302228    4352 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 04:15:26.335892    4352 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 04:15:26.546990    4352 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0501 04:15:26.581553    4352 start.go:494] detecting cgroup driver to use...
	I0501 04:15:26.595536    4352 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0501 04:15:26.622168    4352 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0501 04:15:26.622317    4352 command_runner.go:130] > [Unit]
	I0501 04:15:26.622317    4352 command_runner.go:130] > Description=Docker Application Container Engine
	I0501 04:15:26.622317    4352 command_runner.go:130] > Documentation=https://docs.docker.com
	I0501 04:15:26.622317    4352 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0501 04:15:26.622317    4352 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0501 04:15:26.622389    4352 command_runner.go:130] > StartLimitBurst=3
	I0501 04:15:26.622389    4352 command_runner.go:130] > StartLimitIntervalSec=60
	I0501 04:15:26.622389    4352 command_runner.go:130] > [Service]
	I0501 04:15:26.622389    4352 command_runner.go:130] > Type=notify
	I0501 04:15:26.622389    4352 command_runner.go:130] > Restart=on-failure
	I0501 04:15:26.622444    4352 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0501 04:15:26.622444    4352 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0501 04:15:26.622444    4352 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0501 04:15:26.622490    4352 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0501 04:15:26.622490    4352 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0501 04:15:26.622490    4352 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0501 04:15:26.622490    4352 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0501 04:15:26.622553    4352 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0501 04:15:26.622553    4352 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0501 04:15:26.622553    4352 command_runner.go:130] > ExecStart=
	I0501 04:15:26.622651    4352 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0501 04:15:26.622651    4352 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0501 04:15:26.622651    4352 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0501 04:15:26.622721    4352 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0501 04:15:26.622721    4352 command_runner.go:130] > LimitNOFILE=infinity
	I0501 04:15:26.622721    4352 command_runner.go:130] > LimitNPROC=infinity
	I0501 04:15:26.622721    4352 command_runner.go:130] > LimitCORE=infinity
	I0501 04:15:26.622721    4352 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0501 04:15:26.622721    4352 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0501 04:15:26.622781    4352 command_runner.go:130] > TasksMax=infinity
	I0501 04:15:26.622781    4352 command_runner.go:130] > TimeoutStartSec=0
	I0501 04:15:26.622781    4352 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0501 04:15:26.622781    4352 command_runner.go:130] > Delegate=yes
	I0501 04:15:26.622781    4352 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0501 04:15:26.622833    4352 command_runner.go:130] > KillMode=process
	I0501 04:15:26.622833    4352 command_runner.go:130] > [Install]
	I0501 04:15:26.622833    4352 command_runner.go:130] > WantedBy=multi-user.target
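
The bare ExecStart= line followed by a populated one in the unit above is the standard systemd drop-in idiom the file's own comments describe: an empty assignment clears the command list inherited from the base unit, so the override replaces rather than appends to it. A sketch of writing such a drop-in from Go (the drop-in filename and dockerd flags here are illustrative, not the exact file minikube ships):

    // Write a systemd drop-in that clears and then replaces ExecStart.
    // A `systemctl daemon-reload` is required afterwards, as in the log.
    package main

    import "os"

    const dropIn = `[Service]
    ExecStart=
    ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
    `

    func main() {
        dir := "/etc/systemd/system/docker.service.d"
        if err := os.MkdirAll(dir, 0o755); err != nil {
            panic(err)
        }
        if err := os.WriteFile(dir+"/10-override.conf", []byte(dropIn), 0o644); err != nil {
            panic(err)
        }
    }
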
	I0501 04:15:26.637102    4352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 04:15:26.672868    4352 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 04:15:26.719884    4352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 04:15:26.761043    4352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 04:15:26.801622    4352 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0501 04:15:26.865354    4352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 04:15:26.892052    4352 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 04:15:26.928130    4352 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0501 04:15:26.943045    4352 ssh_runner.go:195] Run: which cri-dockerd
	I0501 04:15:26.949649    4352 command_runner.go:130] > /usr/bin/cri-dockerd
	I0501 04:15:26.964818    4352 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0501 04:15:26.985039    4352 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0501 04:15:27.034241    4352 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0501 04:15:27.252882    4352 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0501 04:15:27.457917    4352 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0501 04:15:27.458072    4352 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0501 04:15:27.511496    4352 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 04:15:27.734212    4352 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0501 04:15:30.421940    4352 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6877079s)
	I0501 04:15:30.435945    4352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0501 04:15:30.476284    4352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0501 04:15:30.521712    4352 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0501 04:15:30.745880    4352 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0501 04:15:30.955633    4352 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 04:15:31.163514    4352 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0501 04:15:31.208353    4352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0501 04:15:31.247906    4352 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 04:15:31.465061    4352 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0501 04:15:31.581899    4352 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0501 04:15:31.594899    4352 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0501 04:15:31.604023    4352 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0501 04:15:31.604023    4352 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0501 04:15:31.604161    4352 command_runner.go:130] > Device: 0,22	Inode: 850         Links: 1
	I0501 04:15:31.604161    4352 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0501 04:15:31.604161    4352 command_runner.go:130] > Access: 2024-05-01 04:15:31.494988090 +0000
	I0501 04:15:31.604161    4352 command_runner.go:130] > Modify: 2024-05-01 04:15:31.494988090 +0000
	I0501 04:15:31.604161    4352 command_runner.go:130] > Change: 2024-05-01 04:15:31.498988343 +0000
	I0501 04:15:31.604161    4352 command_runner.go:130] >  Birth: -
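
The "Will wait 60s for socket path" step above amounts to polling stat on /var/run/cri-dockerd.sock until it exists or the deadline passes (minikube runs the stat over SSH inside the VM; the sketch below polls locally):

    // Poll for a socket path with a timeout, mirroring start.go:541 above.
    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
            panic(err)
        }
    }
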
	I0501 04:15:31.604225    4352 start.go:562] Will wait 60s for crictl version
	I0501 04:15:31.618391    4352 ssh_runner.go:195] Run: which crictl
	I0501 04:15:31.623995    4352 command_runner.go:130] > /usr/bin/crictl
	I0501 04:15:31.637625    4352 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 04:15:31.693291    4352 command_runner.go:130] > Version:  0.1.0
	I0501 04:15:31.693331    4352 command_runner.go:130] > RuntimeName:  docker
	I0501 04:15:31.693331    4352 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0501 04:15:31.693331    4352 command_runner.go:130] > RuntimeApiVersion:  v1
	I0501 04:15:31.693409    4352 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0501 04:15:31.704186    4352 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0501 04:15:31.736665    4352 command_runner.go:130] > 26.0.2
	I0501 04:15:31.748202    4352 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0501 04:15:31.778482    4352 command_runner.go:130] > 26.0.2
	I0501 04:15:31.782570    4352 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0501 04:15:31.782791    4352 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0501 04:15:31.787351    4352 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0501 04:15:31.787399    4352 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0501 04:15:31.787399    4352 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0501 04:15:31.787399    4352 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:88:d7:f1 Flags:up|broadcast|multicast|running}
	I0501 04:15:31.790168    4352 ip.go:210] interface addr: fe80::916c:67e8:6e10:a19b/64
	I0501 04:15:31.790168    4352 ip.go:210] interface addr: 172.28.208.1/20
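
The ip.go lines above show how the host-side gateway address is found: enumerate interfaces, keep the first whose name matches the "vEthernet (Default Switch)" prefix, and take its IPv4 address (the fe80:: entry is skipped as non-IPv4). A stdlib-only Go sketch of the same search:

    // Find the first IPv4 address on an interface whose name matches a prefix.
    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    func ipForInterface(prefix string) (net.IP, error) {
        ifaces, err := net.Interfaces()
        if err != nil {
            return nil, err
        }
        for _, iface := range ifaces {
            if !strings.HasPrefix(iface.Name, prefix) {
                continue // e.g. "Ethernet 2" above does not match
            }
            addrs, err := iface.Addrs()
            if err != nil {
                return nil, err
            }
            for _, a := range addrs {
                if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() != nil {
                    return ipnet.IP, nil
                }
            }
        }
        return nil, fmt.Errorf("no interface matching %q", prefix)
    }

    func main() {
        ip, err := ipForInterface("vEthernet (Default Switch)")
        if err != nil {
            panic(err)
        }
        fmt.Println(ip) // 172.28.208.1 in the run above
    }
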
	I0501 04:15:31.802274    4352 ssh_runner.go:195] Run: grep 172.28.208.1	host.minikube.internal$ /etc/hosts
	I0501 04:15:31.809415    4352 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 04:15:31.833544    4352 kubeadm.go:877] updating cluster {Name:multinode-289800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-289800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.209.199 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.219.162 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.28.223.145 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0501 04:15:31.833837    4352 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0501 04:15:31.845059    4352 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0501 04:15:31.882700    4352 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0501 04:15:31.882700    4352 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0501 04:15:31.882700    4352 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0501 04:15:31.882700    4352 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0501 04:15:31.882700    4352 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0501 04:15:31.882700    4352 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0501 04:15:31.882700    4352 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0501 04:15:31.882700    4352 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0501 04:15:31.882700    4352 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 04:15:31.882700    4352 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0501 04:15:31.882700    4352 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0501 04:15:31.882700    4352 docker.go:615] Images already preloaded, skipping extraction
	I0501 04:15:31.893426    4352 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0501 04:15:31.918492    4352 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0501 04:15:31.918492    4352 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0501 04:15:31.918492    4352 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0501 04:15:31.918492    4352 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0501 04:15:31.918580    4352 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0501 04:15:31.918580    4352 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0501 04:15:31.918580    4352 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0501 04:15:31.918618    4352 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0501 04:15:31.918618    4352 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 04:15:31.918618    4352 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0501 04:15:31.918661    4352 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0501 04:15:31.918744    4352 cache_images.go:84] Images are preloaded, skipping loading
	I0501 04:15:31.918744    4352 kubeadm.go:928] updating node { 172.28.209.199 8443 v1.30.0 docker true true} ...
	I0501 04:15:31.919004    4352 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-289800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.209.199
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-289800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 04:15:31.930473    4352 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0501 04:15:31.963619    4352 command_runner.go:130] > cgroupfs
	I0501 04:15:31.963619    4352 cni.go:84] Creating CNI manager for ""
	I0501 04:15:31.963619    4352 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0501 04:15:31.963619    4352 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0501 04:15:31.963619    4352 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.28.209.199 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-289800 NodeName:multinode-289800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.28.209.199"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.28.209.199 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0501 04:15:31.963619    4352 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.28.209.199
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-289800"
	  kubeletExtraArgs:
	    node-ip: 172.28.209.199
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.28.209.199"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
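
The generated kubeadm.yaml above is a single file holding four YAML documents, InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, separated by "---" lines (minikube renders it from templates before copying it to /var/tmp/minikube). A small stdlib-only Go sketch that splits such a file and lists the document kinds, for illustration:

    // List the `kind:` of each YAML document in the multi-doc kubeadm config.
    package main

    import (
        "fmt"
        "os"
        "regexp"
        "strings"
    )

    func main() {
        data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
        if err != nil {
            panic(err)
        }
        kindRe := regexp.MustCompile(`(?m)^kind:\s*(\S+)`)
        for i, doc := range strings.Split(string(data), "\n---\n") {
            if m := kindRe.FindStringSubmatch(doc); m != nil {
                fmt.Printf("document %d: %s\n", i, m[1])
            }
        }
    }
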
	I0501 04:15:31.976533    4352 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 04:15:31.996468    4352 command_runner.go:130] > kubeadm
	I0501 04:15:31.996468    4352 command_runner.go:130] > kubectl
	I0501 04:15:31.996468    4352 command_runner.go:130] > kubelet
	I0501 04:15:31.996468    4352 binaries.go:44] Found k8s binaries, skipping transfer
	I0501 04:15:32.009112    4352 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0501 04:15:32.026737    4352 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0501 04:15:32.064689    4352 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 04:15:32.098828    4352 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
	I0501 04:15:32.145922    4352 ssh_runner.go:195] Run: grep 172.28.209.199	control-plane.minikube.internal$ /etc/hosts
	I0501 04:15:32.153373    4352 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.209.199	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 04:15:32.189011    4352 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 04:15:32.395009    4352 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 04:15:32.425286    4352 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800 for IP: 172.28.209.199
	I0501 04:15:32.425360    4352 certs.go:194] generating shared ca certs ...
	I0501 04:15:32.425433    4352 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 04:15:32.425976    4352 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0501 04:15:32.426507    4352 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0501 04:15:32.426791    4352 certs.go:256] generating profile certs ...
	I0501 04:15:32.427525    4352 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\client.key
	I0501 04:15:32.427573    4352 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\apiserver.key.98885272
	I0501 04:15:32.427767    4352 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\apiserver.crt.98885272 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.209.199]
	I0501 04:15:32.890331    4352 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\apiserver.crt.98885272 ...
	I0501 04:15:32.890331    4352 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\apiserver.crt.98885272: {Name:mk21d7382a5c76e493cdcfee0142e55c7ff2d410 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 04:15:32.892500    4352 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\apiserver.key.98885272 ...
	I0501 04:15:32.892500    4352 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\apiserver.key.98885272: {Name:mk918e27e5b7cad139e8fb039a59b6bb3e7d585f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 04:15:32.893061    4352 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\apiserver.crt.98885272 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\apiserver.crt
	I0501 04:15:32.906738    4352 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\apiserver.key.98885272 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\apiserver.key
	I0501 04:15:32.908375    4352 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\proxy-client.key
	I0501 04:15:32.908375    4352 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0501 04:15:32.909015    4352 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0501 04:15:32.909069    4352 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0501 04:15:32.909069    4352 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0501 04:15:32.909069    4352 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0501 04:15:32.909874    4352 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0501 04:15:32.910225    4352 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0501 04:15:32.910225    4352 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0501 04:15:32.910824    4352 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem (1338 bytes)
	W0501 04:15:32.911448    4352 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288_empty.pem, impossibly tiny 0 bytes
	I0501 04:15:32.911448    4352 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0501 04:15:32.911448    4352 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0501 04:15:32.912055    4352 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0501 04:15:32.912055    4352 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0501 04:15:32.912659    4352 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem (1708 bytes)
	I0501 04:15:32.913395    4352 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0501 04:15:32.913613    4352 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem -> /usr/share/ca-certificates/14288.pem
	I0501 04:15:32.913745    4352 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /usr/share/ca-certificates/142882.pem
	I0501 04:15:32.915274    4352 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 04:15:32.966578    4352 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 04:15:33.016986    4352 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 04:15:33.070060    4352 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0501 04:15:33.120107    4352 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0501 04:15:33.169536    4352 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0501 04:15:33.218972    4352 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 04:15:33.272477    4352 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0501 04:15:33.322743    4352 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 04:15:33.370278    4352 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem --> /usr/share/ca-certificates/14288.pem (1338 bytes)
	I0501 04:15:33.430157    4352 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /usr/share/ca-certificates/142882.pem (1708 bytes)
	I0501 04:15:33.485494    4352 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0501 04:15:33.549991    4352 ssh_runner.go:195] Run: openssl version
	I0501 04:15:33.558329    4352 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0501 04:15:33.571737    4352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14288.pem && ln -fs /usr/share/ca-certificates/14288.pem /etc/ssl/certs/14288.pem"
	I0501 04:15:33.608330    4352 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14288.pem
	I0501 04:15:33.615480    4352 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May  1 02:27 /usr/share/ca-certificates/14288.pem
	I0501 04:15:33.615480    4352 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:27 /usr/share/ca-certificates/14288.pem
	I0501 04:15:33.631646    4352 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14288.pem
	I0501 04:15:33.644492    4352 command_runner.go:130] > 51391683
	I0501 04:15:33.658986    4352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14288.pem /etc/ssl/certs/51391683.0"
	I0501 04:15:33.695998    4352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142882.pem && ln -fs /usr/share/ca-certificates/142882.pem /etc/ssl/certs/142882.pem"
	I0501 04:15:33.733500    4352 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142882.pem
	I0501 04:15:33.741187    4352 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May  1 02:27 /usr/share/ca-certificates/142882.pem
	I0501 04:15:33.741187    4352 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:27 /usr/share/ca-certificates/142882.pem
	I0501 04:15:33.754725    4352 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142882.pem
	I0501 04:15:33.765376    4352 command_runner.go:130] > 3ec20f2e
	I0501 04:15:33.778281    4352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142882.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 04:15:33.818201    4352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 04:15:33.854991    4352 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 04:15:33.865184    4352 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May  1 02:12 /usr/share/ca-certificates/minikubeCA.pem
	I0501 04:15:33.865184    4352 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:12 /usr/share/ca-certificates/minikubeCA.pem
	I0501 04:15:33.879144    4352 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 04:15:33.888708    4352 command_runner.go:130] > b5213941
	I0501 04:15:33.901582    4352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
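
The openssl x509 -hash / ln -fs pairs above implement the OpenSSL c_rehash convention: a certificate is trusted system-wide once /etc/ssl/certs contains a <subject-hash>.0 symlink pointing at it. Computing the subject hash natively would require OpenSSL's canonical subject encoding, so this Go sketch reuses the CLI, mirroring the logged commands (run as root):

    // Install a CA cert the c_rehash way: ask openssl for the subject hash,
    // then link /etc/ssl/certs/<hash>.0 at the PEM file.
    package main

    import (
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        pem := "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out)) // b5213941 in the run above
        link := "/etc/ssl/certs/" + hash + ".0"
        os.Remove(link) // ignore error; gives `ln -fs` semantics
        if err := os.Symlink(pem, link); err != nil {
            panic(err)
        }
    }
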
	I0501 04:15:33.939426    4352 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 04:15:33.949707    4352 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 04:15:33.949707    4352 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0501 04:15:33.949707    4352 command_runner.go:130] > Device: 8,1	Inode: 6290258     Links: 1
	I0501 04:15:33.949707    4352 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0501 04:15:33.949707    4352 command_runner.go:130] > Access: 2024-05-01 03:52:03.205304599 +0000
	I0501 04:15:33.949904    4352 command_runner.go:130] > Modify: 2024-05-01 03:52:03.205304599 +0000
	I0501 04:15:33.949904    4352 command_runner.go:130] > Change: 2024-05-01 03:52:03.205304599 +0000
	I0501 04:15:33.949904    4352 command_runner.go:130] >  Birth: 2024-05-01 03:52:03.205304599 +0000
	I0501 04:15:33.962727    4352 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0501 04:15:33.974038    4352 command_runner.go:130] > Certificate will not expire
	I0501 04:15:33.988289    4352 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0501 04:15:33.998318    4352 command_runner.go:130] > Certificate will not expire
	I0501 04:15:34.012568    4352 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0501 04:15:34.023671    4352 command_runner.go:130] > Certificate will not expire
	I0501 04:15:34.035938    4352 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0501 04:15:34.046394    4352 command_runner.go:130] > Certificate will not expire
	I0501 04:15:34.059796    4352 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0501 04:15:34.069370    4352 command_runner.go:130] > Certificate will not expire
	I0501 04:15:34.083300    4352 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0501 04:15:34.094154    4352 command_runner.go:130] > Certificate will not expire
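
Each "Certificate will not expire" line above is the result of openssl x509 -checkend 86400, i.e. "does this cert expire within the next 24 hours?". The same check in native Go using crypto/x509 (the path below is one of the certs probed above):

    // Report whether a PEM certificate expires within the given duration.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            panic(err)
        }
        if soon {
            fmt.Println("Certificate will expire")
        } else {
            fmt.Println("Certificate will not expire")
        }
    }
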
	I0501 04:15:34.094636    4352 kubeadm.go:391] StartCluster: {Name:multinode-289800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-289800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.209.199 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.219.162 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.28.223.145 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 04:15:34.108882    4352 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0501 04:15:34.149085    4352 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0501 04:15:34.171011    4352 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0501 04:15:34.171087    4352 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0501 04:15:34.171087    4352 command_runner.go:130] > /var/lib/minikube/etcd:
	I0501 04:15:34.171087    4352 command_runner.go:130] > member
	W0501 04:15:34.171498    4352 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0501 04:15:34.171498    4352 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0501 04:15:34.171620    4352 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0501 04:15:34.185953    4352 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0501 04:15:34.209623    4352 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0501 04:15:34.210556    4352 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-289800" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0501 04:15:34.211057    4352 kubeconfig.go:62] C:\Users\jenkins.minikube6\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-289800" cluster setting kubeconfig missing "multinode-289800" context setting]
	I0501 04:15:34.211674    4352 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 04:15:34.226243    4352 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0501 04:15:34.226770    4352 kapi.go:59] client config for multinode-289800: &rest.Config{Host:"https://172.28.209.199:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-289800/client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-289800/client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1b95ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0501 04:15:34.228218    4352 cert_rotation.go:137] Starting client certificate rotation controller
	I0501 04:15:34.240955    4352 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0501 04:15:34.260952    4352 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0501 04:15:34.260952    4352 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0501 04:15:34.260952    4352 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0501 04:15:34.260952    4352 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0501 04:15:34.260952    4352 command_runner.go:130] >  kind: InitConfiguration
	I0501 04:15:34.260952    4352 command_runner.go:130] >  localAPIEndpoint:
	I0501 04:15:34.260952    4352 command_runner.go:130] > -  advertiseAddress: 172.28.209.152
	I0501 04:15:34.260952    4352 command_runner.go:130] > +  advertiseAddress: 172.28.209.199
	I0501 04:15:34.260952    4352 command_runner.go:130] >    bindPort: 8443
	I0501 04:15:34.260952    4352 command_runner.go:130] >  bootstrapTokens:
	I0501 04:15:34.260952    4352 command_runner.go:130] >    - groups:
	I0501 04:15:34.260952    4352 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0501 04:15:34.260952    4352 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0501 04:15:34.260952    4352 command_runner.go:130] >    name: "multinode-289800"
	I0501 04:15:34.260952    4352 command_runner.go:130] >    kubeletExtraArgs:
	I0501 04:15:34.260952    4352 command_runner.go:130] > -    node-ip: 172.28.209.152
	I0501 04:15:34.260952    4352 command_runner.go:130] > +    node-ip: 172.28.209.199
	I0501 04:15:34.260952    4352 command_runner.go:130] >    taints: []
	I0501 04:15:34.260952    4352 command_runner.go:130] >  ---
	I0501 04:15:34.260952    4352 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0501 04:15:34.260952    4352 command_runner.go:130] >  kind: ClusterConfiguration
	I0501 04:15:34.260952    4352 command_runner.go:130] >  apiServer:
	I0501 04:15:34.260952    4352 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.28.209.152"]
	I0501 04:15:34.260952    4352 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.28.209.199"]
	I0501 04:15:34.260952    4352 command_runner.go:130] >    extraArgs:
	I0501 04:15:34.260952    4352 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0501 04:15:34.260952    4352 command_runner.go:130] >  controllerManager:
	I0501 04:15:34.260952    4352 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.28.209.152
	+  advertiseAddress: 172.28.209.199
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-289800"
	   kubeletExtraArgs:
	-    node-ip: 172.28.209.152
	+    node-ip: 172.28.209.199
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.28.209.152"]
	+  certSANs: ["127.0.0.1", "localhost", "172.28.209.199"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
	I0501 04:15:34.260952    4352 kubeadm.go:1154] stopping kube-system containers ...
	I0501 04:15:34.270992    4352 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0501 04:15:34.300955    4352 command_runner.go:130] > 15c4496e3a9f
	I0501 04:15:34.300955    4352 command_runner.go:130] > ee2238f98e35
	I0501 04:15:34.301961    4352 command_runner.go:130] > 3e8d5ff9a9e4
	I0501 04:15:34.301961    4352 command_runner.go:130] > baf9e690eb53
	I0501 04:15:34.301961    4352 command_runner.go:130] > 9971ef577f2f
	I0501 04:15:34.301961    4352 command_runner.go:130] > 9d509d032dc6
	I0501 04:15:34.301961    4352 command_runner.go:130] > 6d5f881ef398
	I0501 04:15:34.301961    4352 command_runner.go:130] > 502684407b0c
	I0501 04:15:34.301961    4352 command_runner.go:130] > 79bb6a06ed52
	I0501 04:15:34.301961    4352 command_runner.go:130] > 4df6ba73bcf6
	I0501 04:15:34.301961    4352 command_runner.go:130] > 3244d1ee5ab4
	I0501 04:15:34.301961    4352 command_runner.go:130] > 4b62556f40be
	I0501 04:15:34.301961    4352 command_runner.go:130] > bbbe9bf27685
	I0501 04:15:34.301961    4352 command_runner.go:130] > 06f1f84bfde1
	I0501 04:15:34.301961    4352 command_runner.go:130] > f72a1c5b5cdd
	I0501 04:15:34.301961    4352 command_runner.go:130] > 479b3ec741be
	I0501 04:15:34.301961    4352 command_runner.go:130] > 976a9ff433cc
	I0501 04:15:34.301961    4352 command_runner.go:130] > a338ea43bd9b
	I0501 04:15:34.306243    4352 docker.go:483] Stopping containers: [15c4496e3a9f ee2238f98e35 3e8d5ff9a9e4 baf9e690eb53 9971ef577f2f 9d509d032dc6 6d5f881ef398 502684407b0c 79bb6a06ed52 4df6ba73bcf6 3244d1ee5ab4 4b62556f40be bbbe9bf27685 06f1f84bfde1 f72a1c5b5cdd 479b3ec741be 976a9ff433cc a338ea43bd9b]
	I0501 04:15:34.318171    4352 ssh_runner.go:195] Run: docker stop 15c4496e3a9f ee2238f98e35 3e8d5ff9a9e4 baf9e690eb53 9971ef577f2f 9d509d032dc6 6d5f881ef398 502684407b0c 79bb6a06ed52 4df6ba73bcf6 3244d1ee5ab4 4b62556f40be bbbe9bf27685 06f1f84bfde1 f72a1c5b5cdd 479b3ec741be 976a9ff433cc a338ea43bd9b
	I0501 04:15:34.352777    4352 command_runner.go:130] > 15c4496e3a9f
	I0501 04:15:34.352854    4352 command_runner.go:130] > ee2238f98e35
	I0501 04:15:34.352911    4352 command_runner.go:130] > 3e8d5ff9a9e4
	I0501 04:15:34.352911    4352 command_runner.go:130] > baf9e690eb53
	I0501 04:15:34.352911    4352 command_runner.go:130] > 9971ef577f2f
	I0501 04:15:34.352911    4352 command_runner.go:130] > 9d509d032dc6
	I0501 04:15:34.352911    4352 command_runner.go:130] > 6d5f881ef398
	I0501 04:15:34.352911    4352 command_runner.go:130] > 502684407b0c
	I0501 04:15:34.352911    4352 command_runner.go:130] > 79bb6a06ed52
	I0501 04:15:34.353018    4352 command_runner.go:130] > 4df6ba73bcf6
	I0501 04:15:34.353018    4352 command_runner.go:130] > 3244d1ee5ab4
	I0501 04:15:34.353168    4352 command_runner.go:130] > 4b62556f40be
	I0501 04:15:34.353168    4352 command_runner.go:130] > bbbe9bf27685
	I0501 04:15:34.353168    4352 command_runner.go:130] > 06f1f84bfde1
	I0501 04:15:34.353168    4352 command_runner.go:130] > f72a1c5b5cdd
	I0501 04:15:34.353168    4352 command_runner.go:130] > 479b3ec741be
	I0501 04:15:34.353168    4352 command_runner.go:130] > 976a9ff433cc
	I0501 04:15:34.353168    4352 command_runner.go:130] > a338ea43bd9b
	I0501 04:15:34.366922    4352 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0501 04:15:34.411972    4352 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 04:15:34.432098    4352 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0501 04:15:34.432098    4352 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0501 04:15:34.432098    4352 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0501 04:15:34.432098    4352 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 04:15:34.432098    4352 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 04:15:34.432098    4352 kubeadm.go:156] found existing configuration files:
	
	I0501 04:15:34.447151    4352 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 04:15:34.466643    4352 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 04:15:34.467481    4352 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 04:15:34.481495    4352 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 04:15:34.514013    4352 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 04:15:34.530843    4352 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 04:15:34.530843    4352 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 04:15:34.543860    4352 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 04:15:34.578503    4352 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 04:15:34.597585    4352 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 04:15:34.598091    4352 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 04:15:34.613522    4352 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 04:15:34.647336    4352 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 04:15:34.670140    4352 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 04:15:34.670817    4352 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 04:15:34.687503    4352 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
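Note: the block above is minikube's stale-kubeconfig pass — each file under /etc/kubernetes is grepped for the expected control-plane endpoint, and any file that fails the check (here, because none of the files exist yet) is removed so kubeadm can regenerate it. A minimal sketch of that check-then-remove loop, assuming local command execution in place of minikube's SSH runner:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	configs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, conf := range configs {
		// grep exits non-zero when the file is missing or the endpoint is
		// absent; either way the file is unusable and gets removed so
		// `kubeadm init phase kubeconfig` can regenerate it.
		if err := exec.Command("sudo", "grep", endpoint, conf).Run(); err != nil {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, conf)
			if err := exec.Command("sudo", "rm", "-f", conf).Run(); err != nil {
				fmt.Println("remove failed:", err)
			}
		}
	}
}

Treating "missing" and "wrong endpoint" identically keeps the logic to a single grep exit-code test, which matches the exit-status-2 handling in the log above.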
	I0501 04:15:34.723592    4352 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 04:15:34.746967    4352 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 04:15:35.072697    4352 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0501 04:15:35.072765    4352 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0501 04:15:35.072765    4352 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0501 04:15:35.072765    4352 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0501 04:15:35.072765    4352 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0501 04:15:35.072817    4352 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0501 04:15:35.072817    4352 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0501 04:15:35.072817    4352 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0501 04:15:35.072817    4352 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0501 04:15:35.072817    4352 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0501 04:15:35.072817    4352 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0501 04:15:35.072885    4352 command_runner.go:130] > [certs] Using the existing "sa" key
	I0501 04:15:35.072885    4352 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 04:15:36.392186    4352 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0501 04:15:36.392186    4352 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0501 04:15:36.392305    4352 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0501 04:15:36.392305    4352 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0501 04:15:36.392305    4352 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0501 04:15:36.392305    4352 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0501 04:15:36.392305    4352 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.3193546s)
	I0501 04:15:36.392413    4352 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0501 04:15:36.709077    4352 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0501 04:15:36.709077    4352 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0501 04:15:36.709077    4352 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0501 04:15:36.709077    4352 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 04:15:36.808642    4352 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0501 04:15:36.808874    4352 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0501 04:15:36.818113    4352 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0501 04:15:36.819722    4352 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0501 04:15:36.831140    4352 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0501 04:15:36.942295    4352 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
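The restart path above does not rerun a full `kubeadm init`; it replays the individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated config. A sketch of that sequencing, with the paths and version taken from this log and commands run through a local bash rather than minikube's ssh_runner:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const kubeadmYAML = "/var/tmp/minikube/kubeadm.yaml"
	const binDir = "/var/lib/minikube/binaries/v1.30.0"
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, phase := range phases {
		// Same invocation shape as the ssh_runner lines above.
		cmd := fmt.Sprintf(`sudo env PATH="%s:$PATH" kubeadm init phase %s --config %s`,
			binDir, phase, kubeadmYAML)
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Printf("%s\n%s", cmd, out)
		if err != nil {
			fmt.Println("phase failed:", err)
			return
		}
	}
}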
	I0501 04:15:36.942295    4352 api_server.go:52] waiting for apiserver process to appear ...
	I0501 04:15:36.958675    4352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 04:15:37.460033    4352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 04:15:37.961693    4352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 04:15:38.470310    4352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 04:15:38.958889    4352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 04:15:38.983430    4352 command_runner.go:130] > 1873
	I0501 04:15:38.983546    4352 api_server.go:72] duration metric: took 2.041236s to wait for apiserver process to appear ...
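With the static pod manifests written, minikube polls roughly every 500ms for the kube-apiserver process until pgrep returns a PID (1873 here, after ~2s). A sketch of that poll, assuming a plain sleep-and-retry loop (minikube bounds the wait; the bound is omitted here):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	start := time.Now()
	for {
		// Same pattern pgrep uses in the log above.
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			fmt.Printf("apiserver pid %s appeared after %s\n",
				strings.TrimSpace(string(out)), time.Since(start))
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}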
	I0501 04:15:38.983615    4352 api_server.go:88] waiting for apiserver healthz status ...
	I0501 04:15:38.983669    4352 api_server.go:253] Checking apiserver healthz at https://172.28.209.199:8443/healthz ...
	I0501 04:15:42.390528    4352 api_server.go:279] https://172.28.209.199:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0501 04:15:42.390528    4352 api_server.go:103] status: https://172.28.209.199:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0501 04:15:42.390722    4352 api_server.go:253] Checking apiserver healthz at https://172.28.209.199:8443/healthz ...
	I0501 04:15:42.537044    4352 api_server.go:279] https://172.28.209.199:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 04:15:42.537399    4352 api_server.go:103] status: https://172.28.209.199:8443/healthz returned error 500:
	[response body omitted: identical to the 500 body logged immediately above]
	I0501 04:15:42.537399    4352 api_server.go:253] Checking apiserver healthz at https://172.28.209.199:8443/healthz ...
	I0501 04:15:42.546792    4352 api_server.go:279] https://172.28.209.199:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 04:15:42.546792    4352 api_server.go:103] status: https://172.28.209.199:8443/healthz returned error 500:
	[response body omitted: identical to the 500 body logged immediately above]
	I0501 04:15:42.993584    4352 api_server.go:253] Checking apiserver healthz at https://172.28.209.199:8443/healthz ...
	I0501 04:15:43.000750    4352 api_server.go:279] https://172.28.209.199:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 04:15:43.001812    4352 api_server.go:103] status: https://172.28.209.199:8443/healthz returned error 500:
	[response body omitted: identical to the 500 body logged immediately above]
	I0501 04:15:43.485992    4352 api_server.go:253] Checking apiserver healthz at https://172.28.209.199:8443/healthz ...
	I0501 04:15:43.510084    4352 api_server.go:279] https://172.28.209.199:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 04:15:43.510605    4352 api_server.go:103] status: https://172.28.209.199:8443/healthz returned error 500:
	[response body omitted: identical to the 500 body logged immediately above]
	I0501 04:15:43.995080    4352 api_server.go:253] Checking apiserver healthz at https://172.28.209.199:8443/healthz ...
	I0501 04:15:44.017664    4352 api_server.go:279] https://172.28.209.199:8443/healthz returned 200:
	ok
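The healthz probe above walks through the expected startup sequence: 403 while anonymous access is still forbidden (RBAC not yet bootstrapped), then 500 while poststarthooks such as rbac/bootstrap-roles and bootstrap-controller are still failing, and finally 200 once every hook reports ok. A sketch of a poller that tolerates those intermediate states, assuming an anonymous client with TLS verification disabled:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	url := "https://172.28.209.199:8443/healthz"
	for {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body)) // "ok"
				return
			}
			// 403 before RBAC bootstrap, 500 while poststarthooks run.
			fmt.Printf("healthz %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
}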
	I0501 04:15:44.018164    4352 round_trippers.go:463] GET https://172.28.209.199:8443/version
	I0501 04:15:44.018164    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:44.018164    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:44.018164    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:44.047730    4352 round_trippers.go:574] Response Status: 200 OK in 29 milliseconds
	I0501 04:15:44.048196    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:44.048256    4352 round_trippers.go:580]     Audit-Id: 65811502-b9b2-4c06-a707-b36953dc64a0
	I0501 04:15:44.048256    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:44.048256    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:44.048319    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:44.048319    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:44.048319    4352 round_trippers.go:580]     Content-Length: 263
	I0501 04:15:44.048381    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:44 GMT
	I0501 04:15:44.048443    4352 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0501 04:15:44.048720    4352 api_server.go:141] control plane version: v1.30.0
	I0501 04:15:44.048770    4352 api_server.go:131] duration metric: took 5.0651163s to wait for apiserver health ...
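After healthz passes, the control-plane version is read from /version and the gitVersion field is what gets reported. A sketch of decoding that payload, assuming a struct covering only the keys used here:

package main

import (
	"encoding/json"
	"fmt"
)

type versionInfo struct {
	Major      string `json:"major"`
	Minor      string `json:"minor"`
	GitVersion string `json:"gitVersion"`
	Platform   string `json:"platform"`
}

func main() {
	// Abbreviated copy of the response body logged above.
	payload := []byte(`{"major":"1","minor":"30","gitVersion":"v1.30.0","platform":"linux/amd64"}`)
	var v versionInfo
	if err := json.Unmarshal(payload, &v); err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion)
}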
	I0501 04:15:44.048829    4352 cni.go:84] Creating CNI manager for ""
	I0501 04:15:44.048901    4352 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0501 04:15:44.052194    4352 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0501 04:15:44.070532    4352 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0501 04:15:44.080313    4352 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0501 04:15:44.080313    4352 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0501 04:15:44.081300    4352 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0501 04:15:44.081369    4352 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0501 04:15:44.081413    4352 command_runner.go:130] > Access: 2024-05-01 04:14:09.889750900 +0000
	I0501 04:15:44.081478    4352 command_runner.go:130] > Modify: 2024-04-30 23:29:30.000000000 +0000
	I0501 04:15:44.081478    4352 command_runner.go:130] > Change: 2024-05-01 04:13:59.112000000 +0000
	I0501 04:15:44.081539    4352 command_runner.go:130] >  Birth: -
	I0501 04:15:44.081643    4352 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0501 04:15:44.081643    4352 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0501 04:15:44.161304    4352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0501 04:15:45.347093    4352 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0501 04:15:45.347093    4352 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0501 04:15:45.347093    4352 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0501 04:15:45.347213    4352 command_runner.go:130] > daemonset.apps/kindnet configured
	I0501 04:15:45.347213    4352 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.1859005s)
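Because three nodes were detected, kindnet is chosen as the CNI and its manifest is applied with the bundled kubectl after confirming the portmap plugin is present. A sketch of those two steps, with paths copied from the log and commands run locally rather than over SSH:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Equivalent of the `stat /opt/cni/bin/portmap` check above.
	if _, err := os.Stat("/opt/cni/bin/portmap"); err != nil {
		fmt.Println("portmap plugin missing:", err)
		return
	}
	out, err := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.30.0/kubectl", "apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}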
	I0501 04:15:45.347213    4352 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 04:15:45.347213    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods
	I0501 04:15:45.347213    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:45.347213    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:45.347213    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:45.355046    4352 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 04:15:45.355046    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:45.355141    4352 round_trippers.go:580]     Audit-Id: 6940c17b-b650-411a-a60f-d8e97978e311
	I0501 04:15:45.355141    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:45.355141    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:45.355141    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:45.355141    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:45.355141    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:45 GMT
	I0501 04:15:45.356769    4352 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1832"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 95624 chars]
	I0501 04:15:45.365322    4352 system_pods.go:59] 13 kube-system pods found
	I0501 04:15:45.365384    4352 system_pods.go:61] "coredns-7db6d8ff4d-8w9hq" [e3a349e9-97d8-4bba-8eac-deff1948600a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0501 04:15:45.365384    4352 system_pods.go:61] "coredns-7db6d8ff4d-x9zrw" [0b91b14d-bed3-4889-b193-db53daccd395] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0501 04:15:45.365384    4352 system_pods.go:61] "etcd-multinode-289800" [aaf534b6-9f4c-445d-afb9-bd225e1a77fd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0501 04:15:45.365384    4352 system_pods.go:61] "kindnet-4m5vg" [4d06e665-b4c1-40b9-bbb8-c35bfe35385e] Running
	I0501 04:15:45.365384    4352 system_pods.go:61] "kindnet-gzz7p" [576f33f3-f244-48f0-ae69-30c8f38ed871] Running
	I0501 04:15:45.365384    4352 system_pods.go:61] "kindnet-vcxkr" [72ef61d4-4437-40da-86e7-4d7eb386b6de] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0501 04:15:45.365384    4352 system_pods.go:61] "kube-apiserver-multinode-289800" [0ee77673-e4b3-4fba-a855-ef6876337257] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0501 04:15:45.365384    4352 system_pods.go:61] "kube-controller-manager-multinode-289800" [fd3e5c6f-55cb-47c8-b0bc-c9b0dbe3b318] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0501 04:15:45.365384    4352 system_pods.go:61] "kube-proxy-bp9zx" [aba82e50-b8f8-40b4-b08a-6d045314d6b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0501 04:15:45.365384    4352 system_pods.go:61] "kube-proxy-g8mbm" [ef0e1817-6682-4b8f-affa-c10021247006] Running
	I0501 04:15:45.365384    4352 system_pods.go:61] "kube-proxy-rlzp8" [b37d8d5d-a7cb-4848-a8a2-11d9761e08d6] Running
	I0501 04:15:45.365384    4352 system_pods.go:61] "kube-scheduler-multinode-289800" [c7518f03-993b-432f-b742-8805dd2167a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0501 04:15:45.365384    4352 system_pods.go:61] "storage-provisioner" [b8d2a827-d9a6-419a-a076-c7695a16a2b5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0501 04:15:45.365384    4352 system_pods.go:74] duration metric: took 18.1702ms to wait for pod list to return data ...
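The pod inventory above is gathered with a raw authenticated GET against /api/v1/namespaces/kube-system/pods. The equivalent listing via client-go, assuming k8s.io/client-go is on the module path and ~/.kube/config points at the cluster:

package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	// Mirrors the `13 kube-system pods found` summary above.
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
	}
}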
	I0501 04:15:45.365384    4352 node_conditions.go:102] verifying NodePressure condition ...
	I0501 04:15:45.365384    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes
	I0501 04:15:45.365384    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:45.365384    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:45.365384    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:45.371080    4352 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 04:15:45.371346    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:45.371437    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:45.371512    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:45 GMT
	I0501 04:15:45.371512    4352 round_trippers.go:580]     Audit-Id: a8607618-1cb7-49a0-9625-a62dfc1110fe
	I0501 04:15:45.371512    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:45.371512    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:45.371512    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:45.371512    4352 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1832"},"items":[{"metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1752","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15631 chars]
	I0501 04:15:45.373004    4352 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 04:15:45.373004    4352 node_conditions.go:123] node cpu capacity is 2
	I0501 04:15:45.373004    4352 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 04:15:45.373004    4352 node_conditions.go:123] node cpu capacity is 2
	I0501 04:15:45.373004    4352 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 04:15:45.373004    4352 node_conditions.go:123] node cpu capacity is 2
	I0501 04:15:45.373004    4352 node_conditions.go:105] duration metric: took 7.6204ms to run NodePressure ...
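The NodePressure pass reads each node's ephemeral-storage and CPU capacity and would fail on any node reporting memory or disk pressure. A sketch of that check over an already-fetched node list, assuming the k8s.io/api types (fetching is shown in the client-go sketch earlier):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func verifyNodePressure(nodes []v1.Node) error {
	for _, n := range nodes {
		// Same two capacity figures the log prints per node.
		fmt.Printf("node storage ephemeral capacity is %s\n", n.Status.Capacity.StorageEphemeral())
		fmt.Printf("node cpu capacity is %s\n", n.Status.Capacity.Cpu())
		for _, c := range n.Status.Conditions {
			if (c.Type == v1.NodeMemoryPressure || c.Type == v1.NodeDiskPressure) && c.Status == v1.ConditionTrue {
				return fmt.Errorf("node %s reports %s", n.Name, c.Type)
			}
		}
	}
	return nil
}

func main() { fmt.Println(verifyNodePressure(nil)) } // empty list: trivially ok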
	I0501 04:15:45.373004    4352 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 04:15:45.706557    4352 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0501 04:15:45.852447    4352 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0501 04:15:45.855768    4352 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0501 04:15:45.855768    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0501 04:15:45.855768    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:45.855768    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:45.855768    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:45.866929    4352 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0501 04:15:45.867693    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:45.867753    4352 round_trippers.go:580]     Audit-Id: 9592acb9-5669-42e4-84dc-1773eaf73c9f
	I0501 04:15:45.867753    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:45.867753    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:45.867753    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:45.867753    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:45.867753    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:45 GMT
	I0501 04:15:45.869064    4352 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1838"},"items":[{"metadata":{"name":"etcd-multinode-289800","namespace":"kube-system","uid":"aaf534b6-9f4c-445d-afb9-bd225e1a77fd","resourceVersion":"1787","creationTimestamp":"2024-05-01T04:15:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.209.199:2379","kubernetes.io/config.hash":"b12e9024402f49cfac7440d6a2eaf42d","kubernetes.io/config.mirror":"b12e9024402f49cfac7440d6a2eaf42d","kubernetes.io/config.seen":"2024-05-01T04:15:36.949387188Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T04:15:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f [truncated 30563 chars]
	I0501 04:15:45.871088    4352 kubeadm.go:733] kubelet initialised
	I0501 04:15:45.871088    4352 kubeadm.go:734] duration metric: took 15.3202ms waiting for restarted kubelet to initialise ...
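The "restarted kubelet" probe lists pods with the tier=control-plane label selector (URL-encoded as labelSelector=tier%3Dcontrol-plane above) and treats a non-empty result as initialised. A sketch of building that request, with credentials omitted and TLS verification disabled for brevity:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"net/url"
)

func main() {
	// Encodes to labelSelector=tier%3Dcontrol-plane, as in the log.
	q := url.Values{"labelSelector": {"tier=control-plane"}}
	u := "https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods?" + q.Encode()
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get(u) // real calls also need client certs or a bearer token
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, len(body), "bytes")
}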
	I0501 04:15:45.871222    4352 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 04:15:45.871442    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods
	I0501 04:15:45.871487    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:45.871511    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:45.871547    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:45.885917    4352 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0501 04:15:45.885917    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:45.885917    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:45 GMT
	I0501 04:15:45.885917    4352 round_trippers.go:580]     Audit-Id: 61cf7153-b41b-452a-9ba2-2f7e0629d0ad
	I0501 04:15:45.885917    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:45.885917    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:45.885917    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:45.885917    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:45.890094    4352 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1838"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 95031 chars]
	I0501 04:15:45.895872    4352 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-8w9hq" in "kube-system" namespace to be "Ready" ...
	I0501 04:15:45.895872    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:15:45.895872    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:45.895872    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:45.895872    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:45.901983    4352 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 04:15:45.902972    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:45.903044    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:45.903044    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:45 GMT
	I0501 04:15:45.903044    4352 round_trippers.go:580]     Audit-Id: bf95ab8e-2bae-4c71-a0e4-1f376042c3c0
	I0501 04:15:45.903106    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:45.903106    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:45.903106    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:45.903438    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:15:45.904479    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:45.904593    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:45.904593    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:45.904593    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:45.917439    4352 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0501 04:15:45.917439    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:45.917439    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:45.917439    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:45.917439    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:45 GMT
	I0501 04:15:45.917439    4352 round_trippers.go:580]     Audit-Id: cf6df549-6b21-4a3d-bf53-420b34db8dab
	I0501 04:15:45.917439    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:45.917439    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:45.917439    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1752","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0501 04:15:45.919235    4352 pod_ready.go:97] node "multinode-289800" hosting pod "coredns-7db6d8ff4d-8w9hq" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-289800" has status "Ready":"False"
	I0501 04:15:45.919293    4352 pod_ready.go:81] duration metric: took 23.4207ms for pod "coredns-7db6d8ff4d-8w9hq" in "kube-system" namespace to be "Ready" ...
	E0501 04:15:45.919354    4352 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-289800" hosting pod "coredns-7db6d8ff4d-8w9hq" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-289800" has status "Ready":"False"
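The pod_ready block above establishes the pattern that repeats for every system-critical pod below: fetch the pod, fetch its hosting node, and if the node's Ready condition is False, record the wait as skipped instead of burning the 4m0s budget. A sketch of that guard, assuming client-go core/v1 types and omitting the surrounding retry loop:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(node *v1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == v1.NodeReady {
			return c.Status == v1.ConditionTrue
		}
	}
	return false
}

func waitPodCondition(pod *v1.Pod, node *v1.Node) error {
	if !nodeReady(node) {
		// Matches the pod_ready.go:97 / :66 skip messages in the log.
		return fmt.Errorf("node %q hosting pod %q is currently not %q (skipping!)",
			node.Name, pod.Name, "Ready")
	}
	// ...otherwise poll the pod's own Ready condition up to the 4m0s budget.
	return nil
}

func main() {
	fmt.Println(waitPodCondition(&v1.Pod{}, &v1.Node{})) // empty objects: not ready
}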
	I0501 04:15:45.919354    4352 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-x9zrw" in "kube-system" namespace to be "Ready" ...
	I0501 04:15:45.919517    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-x9zrw
	I0501 04:15:45.919517    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:45.919564    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:45.919564    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:45.924414    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:15:45.924876    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:45.924876    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:45.924955    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:45.925020    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:45 GMT
	I0501 04:15:45.925020    4352 round_trippers.go:580]     Audit-Id: 107f938a-1d74-461a-a7aa-f097a0122ac4
	I0501 04:15:45.925020    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:45.925020    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:45.925383    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-x9zrw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0b91b14d-bed3-4889-b193-db53daccd395","resourceVersion":"1804","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:15:45.926520    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:45.926520    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:45.926520    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:45.926520    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:45.929121    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:15:45.929472    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:45.929610    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:45.929610    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:45.929610    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:45 GMT
	I0501 04:15:45.929610    4352 round_trippers.go:580]     Audit-Id: cea7c015-3170-4a45-bd7a-803a8a130a3a
	I0501 04:15:45.929610    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:45.929610    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:45.929610    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1752","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0501 04:15:45.930356    4352 pod_ready.go:97] node "multinode-289800" hosting pod "coredns-7db6d8ff4d-x9zrw" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-289800" has status "Ready":"False"
	I0501 04:15:45.930356    4352 pod_ready.go:81] duration metric: took 11.0021ms for pod "coredns-7db6d8ff4d-x9zrw" in "kube-system" namespace to be "Ready" ...
	E0501 04:15:45.930356    4352 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-289800" hosting pod "coredns-7db6d8ff4d-x9zrw" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-289800" has status "Ready":"False"
	I0501 04:15:45.930356    4352 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-289800" in "kube-system" namespace to be "Ready" ...
	I0501 04:15:45.930356    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-289800
	I0501 04:15:45.930356    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:45.930356    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:45.930356    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:45.933612    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:15:45.933794    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:45.933872    4352 round_trippers.go:580]     Audit-Id: 413f6909-b2ac-4fc4-b424-a3ac3e45a552
	I0501 04:15:45.933872    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:45.933872    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:45.933872    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:45.933872    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:45.933872    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:45 GMT
	I0501 04:15:45.933872    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-289800","namespace":"kube-system","uid":"aaf534b6-9f4c-445d-afb9-bd225e1a77fd","resourceVersion":"1787","creationTimestamp":"2024-05-01T04:15:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.209.199:2379","kubernetes.io/config.hash":"b12e9024402f49cfac7440d6a2eaf42d","kubernetes.io/config.mirror":"b12e9024402f49cfac7440d6a2eaf42d","kubernetes.io/config.seen":"2024-05-01T04:15:36.949387188Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T04:15:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/conf [truncated 6395 chars]
	I0501 04:15:45.934516    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:45.934516    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:45.934516    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:45.934516    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:45.939340    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:15:45.939626    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:45.939626    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:45 GMT
	I0501 04:15:45.939626    4352 round_trippers.go:580]     Audit-Id: 7dac56e8-effd-4264-9d01-23d90070529b
	I0501 04:15:45.939626    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:45.939626    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:45.939626    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:45.939626    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:45.939626    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1752","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0501 04:15:45.940214    4352 pod_ready.go:97] node "multinode-289800" hosting pod "etcd-multinode-289800" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-289800" has status "Ready":"False"
	I0501 04:15:45.940214    4352 pod_ready.go:81] duration metric: took 9.858ms for pod "etcd-multinode-289800" in "kube-system" namespace to be "Ready" ...
	E0501 04:15:45.940214    4352 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-289800" hosting pod "etcd-multinode-289800" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-289800" has status "Ready":"False"
	I0501 04:15:45.940214    4352 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-289800" in "kube-system" namespace to be "Ready" ...
	I0501 04:15:45.940214    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-289800
	I0501 04:15:45.940214    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:45.940214    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:45.940214    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:45.945813    4352 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 04:15:45.945968    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:45.945968    4352 round_trippers.go:580]     Audit-Id: 39b8e89d-8bc2-4446-a5ce-9d373ce72c55
	I0501 04:15:45.946046    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:45.946118    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:45.946164    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:45.946164    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:45.946164    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:45 GMT
	I0501 04:15:45.946164    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-289800","namespace":"kube-system","uid":"0ee77673-e4b3-4fba-a855-ef6876337257","resourceVersion":"1791","creationTimestamp":"2024-05-01T04:15:42Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.28.209.199:8443","kubernetes.io/config.hash":"8b70cd8d31103a1cfca45e9856766786","kubernetes.io/config.mirror":"8b70cd8d31103a1cfca45e9856766786","kubernetes.io/config.seen":"2024-05-01T04:15:36.865099961Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T04:15:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7949 chars]
	I0501 04:15:45.946866    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:45.946866    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:45.946866    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:45.946866    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:45.958997    4352 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0501 04:15:45.958997    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:45.958997    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:45.958997    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:45.958997    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:45.958997    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:45 GMT
	I0501 04:15:45.958997    4352 round_trippers.go:580]     Audit-Id: dbc478e9-cf5a-4ec2-907b-30470a458bf0
	I0501 04:15:45.958997    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:45.959603    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1752","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0501 04:15:45.959775    4352 pod_ready.go:97] node "multinode-289800" hosting pod "kube-apiserver-multinode-289800" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-289800" has status "Ready":"False"
	I0501 04:15:45.959775    4352 pod_ready.go:81] duration metric: took 19.5604ms for pod "kube-apiserver-multinode-289800" in "kube-system" namespace to be "Ready" ...
	E0501 04:15:45.959775    4352 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-289800" hosting pod "kube-apiserver-multinode-289800" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-289800" has status "Ready":"False"
	I0501 04:15:45.959775    4352 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-289800" in "kube-system" namespace to be "Ready" ...
	I0501 04:15:46.064844    4352 request.go:629] Waited for 104.8895ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-289800
	I0501 04:15:46.065149    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-289800
	I0501 04:15:46.065212    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:46.065212    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:46.065212    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:46.069365    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:15:46.069365    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:46.069365    4352 round_trippers.go:580]     Audit-Id: ae5979d5-574f-45ee-a467-bdef19e12134
	I0501 04:15:46.069365    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:46.069365    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:46.069365    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:46.069508    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:46.069508    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:46 GMT
	I0501 04:15:46.069849    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-289800","namespace":"kube-system","uid":"fd3e5c6f-55cb-47c8-b0bc-c9b0dbe3b318","resourceVersion":"1784","creationTimestamp":"2024-05-01T03:52:15Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a17001fd2508d58fea9b1ae465b65254","kubernetes.io/config.mirror":"a17001fd2508d58fea9b1ae465b65254","kubernetes.io/config.seen":"2024-05-01T03:52:15.688763845Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7737 chars]
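
	The "Waited for ... due to client-side throttling, not priority and fairness" messages in this log come from client-go's token-bucket rate limiter (rest.Config defaults of QPS=5, Burst=10), not from the API server's Priority and Fairness feature; once the burst is spent, each request is delayed and logged as above. A minimal sketch of raising those client-side limits, assuming a reachable kubeconfig; the kubeconfig path, node name, and QPS/Burst values here are illustrative:

	    // Sketch: raising client-go's client-side rate limits (illustrative values).
	    package main

	    import (
	        "context"
	        "fmt"

	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        // Load a kubeconfig such as the one minikube writes; path is illustrative.
	        config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	        if err != nil {
	            panic(err)
	        }
	        // client-go defaults to QPS=5, Burst=10; a tight poll loop exhausts the
	        // burst quickly and then emits the "client-side throttling" waits above.
	        config.QPS = 50
	        config.Burst = 100

	        clientset, err := kubernetes.NewForConfig(config)
	        if err != nil {
	            panic(err)
	        }
	        node, err := clientset.CoreV1().Nodes().Get(context.Background(), "multinode-289800", metav1.GetOptions{})
	        if err != nil {
	            panic(err)
	        }
	        fmt.Println(node.Name, "observed")
	    }
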
	I0501 04:15:46.267422    4352 request.go:629] Waited for 196.4167ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:46.267660    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:46.267660    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:46.267660    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:46.267660    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:46.272484    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:15:46.273074    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:46.273074    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:46.273074    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:46.273074    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:46 GMT
	I0501 04:15:46.273074    4352 round_trippers.go:580]     Audit-Id: b8b332d9-d669-4b52-bbd6-3af87c591e23
	I0501 04:15:46.273074    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:46.273074    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:46.273515    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1752","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0501 04:15:46.274223    4352 pod_ready.go:97] node "multinode-289800" hosting pod "kube-controller-manager-multinode-289800" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-289800" has status "Ready":"False"
	I0501 04:15:46.274223    4352 pod_ready.go:81] duration metric: took 314.4458ms for pod "kube-controller-manager-multinode-289800" in "kube-system" namespace to be "Ready" ...
	E0501 04:15:46.274223    4352 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-289800" hosting pod "kube-controller-manager-multinode-289800" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-289800" has status "Ready":"False"
	I0501 04:15:46.274299    4352 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bp9zx" in "kube-system" namespace to be "Ready" ...
	I0501 04:15:46.471449    4352 request.go:629] Waited for 196.8989ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bp9zx
	I0501 04:15:46.471449    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bp9zx
	I0501 04:15:46.471449    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:46.471449    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:46.471449    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:46.479072    4352 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 04:15:46.479272    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:46.479272    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:46.479272    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:46 GMT
	I0501 04:15:46.479272    4352 round_trippers.go:580]     Audit-Id: ed5a791f-e3a7-4eb0-a8ee-9e7a80d296ce
	I0501 04:15:46.479272    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:46.479272    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:46.479340    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:46.479340    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bp9zx","generateName":"kube-proxy-","namespace":"kube-system","uid":"aba82e50-b8f8-40b4-b08a-6d045314d6b6","resourceVersion":"1834","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"342b26dc-6828-4478-b155-fee8821fc15e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"342b26dc-6828-4478-b155-fee8821fc15e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6039 chars]
	I0501 04:15:46.660216    4352 request.go:629] Waited for 180.1ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:46.660484    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:46.660484    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:46.660484    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:46.660484    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:46.663207    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:15:46.663207    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:46.663207    4352 round_trippers.go:580]     Audit-Id: a74b4348-a089-479d-bf0b-155a827ff806
	I0501 04:15:46.663207    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:46.663207    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:46.664235    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:46.664235    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:46.664268    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:46 GMT
	I0501 04:15:46.664594    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1752","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0501 04:15:46.664791    4352 pod_ready.go:97] node "multinode-289800" hosting pod "kube-proxy-bp9zx" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-289800" has status "Ready":"False"
	I0501 04:15:46.664791    4352 pod_ready.go:81] duration metric: took 390.4882ms for pod "kube-proxy-bp9zx" in "kube-system" namespace to be "Ready" ...
	E0501 04:15:46.664791    4352 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-289800" hosting pod "kube-proxy-bp9zx" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-289800" has status "Ready":"False"
	I0501 04:15:46.664791    4352 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-g8mbm" in "kube-system" namespace to be "Ready" ...
	I0501 04:15:46.863953    4352 request.go:629] Waited for 198.4563ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g8mbm
	I0501 04:15:46.864151    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g8mbm
	I0501 04:15:46.864151    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:46.864151    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:46.864151    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:46.868962    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:15:46.868962    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:46.868962    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:46 GMT
	I0501 04:15:46.869032    4352 round_trippers.go:580]     Audit-Id: 52e24c43-762c-483f-a27a-e54728c63ec2
	I0501 04:15:46.869032    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:46.869032    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:46.869032    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:46.869032    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:46.869262    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g8mbm","generateName":"kube-proxy-","namespace":"kube-system","uid":"ef0e1817-6682-4b8f-affa-c10021247006","resourceVersion":"1723","creationTimestamp":"2024-05-01T04:00:13Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"342b26dc-6828-4478-b155-fee8821fc15e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T04:00:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"342b26dc-6828-4478-b155-fee8821fc15e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6067 chars]
	I0501 04:15:47.066662    4352 request.go:629] Waited for 196.6777ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.209.199:8443/api/v1/nodes/multinode-289800-m03
	I0501 04:15:47.066662    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800-m03
	I0501 04:15:47.066662    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:47.066662    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:47.066662    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:47.070662    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:15:47.070662    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:47.070662    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:47 GMT
	I0501 04:15:47.070662    4352 round_trippers.go:580]     Audit-Id: 0b0340f7-6804-4106-bcb4-dcd8920a9124
	I0501 04:15:47.070662    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:47.070662    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:47.070662    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:47.070662    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:47.070662    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800-m03","uid":"851df850-b222-4fa2-aca7-3694c4d89ab5","resourceVersion":"1732","creationTimestamp":"2024-05-01T04:11:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_01T04_11_04_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-01T04:11:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4400 chars]
	I0501 04:15:47.071685    4352 pod_ready.go:97] node "multinode-289800-m03" hosting pod "kube-proxy-g8mbm" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-289800-m03" has status "Ready":"Unknown"
	I0501 04:15:47.071685    4352 pod_ready.go:81] duration metric: took 406.8916ms for pod "kube-proxy-g8mbm" in "kube-system" namespace to be "Ready" ...
	E0501 04:15:47.071685    4352 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-289800-m03" hosting pod "kube-proxy-g8mbm" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-289800-m03" has status "Ready":"Unknown"
	I0501 04:15:47.071685    4352 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-rlzp8" in "kube-system" namespace to be "Ready" ...
	I0501 04:15:47.257375    4352 request.go:629] Waited for 185.5968ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rlzp8
	I0501 04:15:47.257530    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rlzp8
	I0501 04:15:47.257530    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:47.257530    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:47.257530    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:47.261290    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:15:47.261290    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:47.261290    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:47.262276    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:47.262276    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:47 GMT
	I0501 04:15:47.262342    4352 round_trippers.go:580]     Audit-Id: b1d2e368-5383-4d88-ab26-16d4131cc9b2
	I0501 04:15:47.262342    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:47.262342    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:47.262342    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rlzp8","generateName":"kube-proxy-","namespace":"kube-system","uid":"b37d8d5d-a7cb-4848-a8a2-11d9761e08d6","resourceVersion":"596","creationTimestamp":"2024-05-01T03:55:27Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"342b26dc-6828-4478-b155-fee8821fc15e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:55:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"342b26dc-6828-4478-b155-fee8821fc15e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5841 chars]
	I0501 04:15:47.459189    4352 request.go:629] Waited for 195.5087ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.209.199:8443/api/v1/nodes/multinode-289800-m02
	I0501 04:15:47.459189    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800-m02
	I0501 04:15:47.459189    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:47.459189    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:47.459189    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:47.462997    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:15:47.462997    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:47.462997    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:47 GMT
	I0501 04:15:47.462997    4352 round_trippers.go:580]     Audit-Id: 10a094ce-fbc2-4fad-b9c0-f0a070ebd31b
	I0501 04:15:47.462997    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:47.463641    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:47.463641    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:47.463641    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:47.463753    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800-m02","uid":"9e630c9e-9cc6-42af-89de-135fca044670","resourceVersion":"1663","creationTimestamp":"2024-05-01T03:55:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_01T03_55_27_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:55:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3827 chars]
	I0501 04:15:47.464467    4352 pod_ready.go:92] pod "kube-proxy-rlzp8" in "kube-system" namespace has status "Ready":"True"
	I0501 04:15:47.464467    4352 pod_ready.go:81] duration metric: took 392.7793ms for pod "kube-proxy-rlzp8" in "kube-system" namespace to be "Ready" ...
	I0501 04:15:47.464467    4352 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-289800" in "kube-system" namespace to be "Ready" ...
	I0501 04:15:47.661121    4352 request.go:629] Waited for 196.2048ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-289800
	I0501 04:15:47.661121    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-289800
	I0501 04:15:47.661400    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:47.661400    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:47.661400    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:47.666880    4352 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 04:15:47.666880    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:47.666880    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:47.666880    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:47.666880    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:47 GMT
	I0501 04:15:47.666880    4352 round_trippers.go:580]     Audit-Id: 7788b5d8-b650-426b-8caa-c24dc9823280
	I0501 04:15:47.666880    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:47.666880    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:47.666880    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-289800","namespace":"kube-system","uid":"c7518f03-993b-432f-b742-8805dd2167a7","resourceVersion":"1772","creationTimestamp":"2024-05-01T03:52:15Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"44d7830a7c97b8c7e460c0508d02be4e","kubernetes.io/config.mirror":"44d7830a7c97b8c7e460c0508d02be4e","kubernetes.io/config.seen":"2024-05-01T03:52:15.688771544Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5449 chars]
	I0501 04:15:47.862396    4352 request.go:629] Waited for 194.4449ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:47.862506    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:47.862506    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:47.862643    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:47.862643    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:47.868378    4352 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 04:15:47.868378    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:47.869283    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:47.869283    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:47.869283    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:47.869283    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:47.869283    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:47 GMT
	I0501 04:15:47.869283    4352 round_trippers.go:580]     Audit-Id: 30ec4f63-1c56-493b-a8d4-6e266d70a896
	I0501 04:15:47.870428    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1752","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0501 04:15:47.870428    4352 pod_ready.go:97] node "multinode-289800" hosting pod "kube-scheduler-multinode-289800" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-289800" has status "Ready":"False"
	I0501 04:15:47.870953    4352 pod_ready.go:81] duration metric: took 406.4822ms for pod "kube-scheduler-multinode-289800" in "kube-system" namespace to be "Ready" ...
	E0501 04:15:47.871166    4352 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-289800" hosting pod "kube-scheduler-multinode-289800" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-289800" has status "Ready":"False"
	I0501 04:15:47.871166    4352 pod_ready.go:38] duration metric: took 1.999929s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
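
	Each "WaitExtra: waitPodCondition ... (skipping!)" error above reflects the same gate: the pod is fetched, then the node named in its spec, and the pod is treated as not yet "Ready" while that node's Ready condition is False or Unknown. A minimal client-go sketch of that check, on the assumption that it mirrors the pod_ready.go behavior in this log; the package and function names are mine, not minikube's:

	    // Sketch of the node gate behind the pod_ready.go "(skipping!)" lines.
	    // Package and identifiers are illustrative, not minikube's own.
	    package podready

	    import (
	        "context"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	    )

	    // hostingNodeReady reports whether the node running the named pod has
	    // condition Ready=True; "False" and "Unknown" both trigger the skip above.
	    func hostingNodeReady(ctx context.Context, c kubernetes.Interface, ns, pod string) (bool, error) {
	        p, err := c.CoreV1().Pods(ns).Get(ctx, pod, metav1.GetOptions{})
	        if err != nil {
	            return false, err
	        }
	        n, err := c.CoreV1().Nodes().Get(ctx, p.Spec.NodeName, metav1.GetOptions{})
	        if err != nil {
	            return false, err
	        }
	        for _, cond := range n.Status.Conditions {
	            if cond.Type == corev1.NodeReady {
	                return cond.Status == corev1.ConditionTrue, nil
	            }
	        }
	        return false, nil
	    }
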
	I0501 04:15:47.871166    4352 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0501 04:15:47.893189    4352 command_runner.go:130] > -16
	I0501 04:15:47.893300    4352 ops.go:34] apiserver oom_adj: -16
	I0501 04:15:47.893369    4352 kubeadm.go:591] duration metric: took 13.7215764s to restartPrimaryControlPlane
	I0501 04:15:47.893369    4352 kubeadm.go:393] duration metric: took 13.7986296s to StartCluster
	I0501 04:15:47.893452    4352 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 04:15:47.893677    4352 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0501 04:15:47.896371    4352 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
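
	The two lock lines above show the kubeconfig update serialized behind a named lock acquired with Delay:500ms and Timeout:1m0s. A generic sketch of that acquire-with-retry pattern using an O_EXCL lock file follows; this is illustrative only, not minikube's actual lock implementation:

	    // Sketch of acquire-with-retry file locking, mirroring the
	    // Delay:500ms Timeout:1m0s parameters in the log; illustrative only.
	    package lockfile

	    import (
	        "fmt"
	        "os"
	        "time"
	    )

	    // Acquire retries creating path with O_EXCL every delay until timeout.
	    // The returned release func removes the lock file.
	    func Acquire(path string, delay, timeout time.Duration) (release func(), err error) {
	        deadline := time.Now().Add(timeout)
	        for {
	            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
	            if err == nil {
	                f.Close()
	                return func() { os.Remove(path) }, nil
	            }
	            if time.Now().After(deadline) {
	                return nil, fmt.Errorf("acquiring lock %s: %w", path, err)
	            }
	            time.Sleep(delay)
	        }
	    }
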
	I0501 04:15:47.897811    4352 start.go:234] Will wait 6m0s for node &{Name: IP:172.28.209.199 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0501 04:15:47.901638    4352 out.go:177] * Verifying Kubernetes components...
	I0501 04:15:47.897811    4352 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0501 04:15:47.898391    4352 config.go:182] Loaded profile config "multinode-289800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 04:15:47.907154    4352 out.go:177] * Enabled addons: 
	I0501 04:15:47.911297    4352 addons.go:505] duration metric: took 13.4858ms for enable addons: enabled=[]
	I0501 04:15:47.918412    4352 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 04:15:48.243326    4352 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 04:15:48.272616    4352 node_ready.go:35] waiting up to 6m0s for node "multinode-289800" to be "Ready" ...
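
	From here the log is one GET of /api/v1/nodes/multinode-289800 roughly every 500ms until the node reports "Ready":"True" or the 6m0s budget expires; each "has status \"Ready\":\"False\"" line is one failed iteration. A sketch of such a loop using apimachinery's polling helper; the 500ms interval and the identifiers are assumptions, and the clientset would be built as in the rate-limit sketch earlier:

	    // Sketch of the node_ready.go wait loop; identifiers are illustrative.
	    package nodewait

	    import (
	        "context"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/apimachinery/pkg/util/wait"
	        "k8s.io/client-go/kubernetes"
	    )

	    // WaitReady polls the node every 500ms for up to 6 minutes, succeeding
	    // once its Ready condition is True; "Ready":"False" keeps the loop going.
	    func WaitReady(ctx context.Context, c kubernetes.Interface, name string) error {
	        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
	            func(ctx context.Context) (bool, error) {
	                n, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	                if err != nil {
	                    return false, nil // tolerate transient API errors; keep polling
	                }
	                for _, cond := range n.Status.Conditions {
	                    if cond.Type == corev1.NodeReady {
	                        return cond.Status == corev1.ConditionTrue, nil
	                    }
	                }
	                return false, nil
	            })
	    }
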
	I0501 04:15:48.272973    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:48.272973    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:48.272973    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:48.272973    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:48.282282    4352 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0501 04:15:48.282480    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:48.282642    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:48 GMT
	I0501 04:15:48.282642    4352 round_trippers.go:580]     Audit-Id: 4a57aa77-8ec5-419d-84f3-816369f06b0e
	I0501 04:15:48.282642    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:48.282642    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:48.282642    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:48.282642    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:48.282642    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1752","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0501 04:15:48.787265    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:48.787358    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:48.787358    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:48.787358    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:48.791855    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:15:48.791855    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:48.791855    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:48.791855    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:48.791855    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:48.791855    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:48 GMT
	I0501 04:15:48.791855    4352 round_trippers.go:580]     Audit-Id: 47f87a71-63f7-4b6f-9f6b-13471684910e
	I0501 04:15:48.791855    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:48.792573    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1752","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0501 04:15:49.274535    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:49.274629    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:49.274629    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:49.274629    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:49.280040    4352 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 04:15:49.280151    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:49.280151    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:49.280151    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:49.280151    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:49 GMT
	I0501 04:15:49.280151    4352 round_trippers.go:580]     Audit-Id: 1768b44c-f3a8-41df-8a61-60d9217fe7c4
	I0501 04:15:49.280151    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:49.280151    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:49.280423    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1752","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0501 04:15:49.788682    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:49.788866    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:49.788866    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:49.788866    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:49.793227    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:15:49.794076    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:49.794076    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:49 GMT
	I0501 04:15:49.794076    4352 round_trippers.go:580]     Audit-Id: d96f125d-25ab-4f24-8139-8af631ccb4a9
	I0501 04:15:49.794076    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:49.794076    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:49.794076    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:49.794076    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:49.794558    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1752","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0501 04:15:50.287503    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:50.287503    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:50.287503    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:50.287503    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:50.288034    4352 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0501 04:15:50.288034    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:50.288034    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:50.288034    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:50.288034    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:50.288034    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:50 GMT
	I0501 04:15:50.288034    4352 round_trippers.go:580]     Audit-Id: 846b0fee-03aa-47b1-ad08-4de236adaa1e
	I0501 04:15:50.288034    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:50.288034    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1752","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0501 04:15:50.288034    4352 node_ready.go:53] node "multinode-289800" has status "Ready":"False"
	I0501 04:15:50.778566    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:50.778566    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:50.778566    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:50.778566    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:50.783842    4352 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 04:15:50.783842    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:50.783842    4352 round_trippers.go:580]     Audit-Id: 2f2a8f4d-8f94-4178-89e2-6d10a7e0adb7
	I0501 04:15:50.783842    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:50.783842    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:50.783842    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:50.783842    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:50.783842    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:50 GMT
	I0501 04:15:50.783842    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1752","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0501 04:15:51.281622    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:51.281703    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:51.281703    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:51.281703    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:51.286450    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:15:51.286978    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:51.286978    4352 round_trippers.go:580]     Audit-Id: 8c1d2f49-ea8b-496b-939c-772f4e5a9a02
	I0501 04:15:51.286978    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:51.286978    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:51.286978    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:51.286978    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:51.287053    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:51 GMT
	I0501 04:15:51.287194    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1752","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0501 04:15:51.782621    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:51.782621    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:51.782621    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:51.782621    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:51.786249    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:15:51.786249    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:51.786340    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:51.786340    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:51 GMT
	I0501 04:15:51.786340    4352 round_trippers.go:580]     Audit-Id: 8bbc6e7d-7882-4eea-9cef-3023dcab188b
	I0501 04:15:51.786340    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:51.786340    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:51.786340    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:51.786732    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1752","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0501 04:15:52.283159    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:52.283159    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:52.283159    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:52.283159    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:52.286738    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:15:52.286738    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:52.286738    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:52.286738    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:52.286738    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:52.286738    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:52 GMT
	I0501 04:15:52.286738    4352 round_trippers.go:580]     Audit-Id: 25fe7eae-b1d2-4a23-8d9b-f3d66437714f
	I0501 04:15:52.286738    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:52.287601    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1752","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0501 04:15:52.288076    4352 node_ready.go:53] node "multinode-289800" has status "Ready":"False"
	I0501 04:15:52.785448    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:52.785763    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:52.785763    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:52.785763    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:52.790167    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:15:52.790404    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:52.790404    4352 round_trippers.go:580]     Audit-Id: b19c18ca-d186-45a6-a886-00c5b08576f9
	I0501 04:15:52.790404    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:52.790404    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:52.790404    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:52.790404    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:52.790404    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:52 GMT
	I0501 04:15:52.790737    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1752","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0501 04:15:53.286896    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:53.287010    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:53.287010    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:53.287010    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:53.291397    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:15:53.291622    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:53.291622    4352 round_trippers.go:580]     Audit-Id: c904bec9-9029-4fb8-a21a-181d712dfc3d
	I0501 04:15:53.291622    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:53.291622    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:53.291622    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:53.291622    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:53.291622    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:53 GMT
	I0501 04:15:53.291750    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1752","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0501 04:15:53.786591    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:53.786591    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:53.786591    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:53.786591    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:53.793750    4352 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 04:15:53.793837    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:53.793863    4352 round_trippers.go:580]     Audit-Id: 5b2d1749-2b55-4728-9791-cbdb50184746
	I0501 04:15:53.793863    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:53.793863    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:53.793863    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:53.793863    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:53.793863    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:53 GMT
	I0501 04:15:53.793863    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1752","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0501 04:15:54.286294    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:54.286294    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:54.286294    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:54.286294    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:54.292917    4352 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 04:15:54.293755    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:54.293755    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:54 GMT
	I0501 04:15:54.293755    4352 round_trippers.go:580]     Audit-Id: 2a2643e0-123b-49eb-bf48-a849730c99af
	I0501 04:15:54.293810    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:54.293810    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:54.293810    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:54.293810    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:54.295596    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1752","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0501 04:15:54.297738    4352 node_ready.go:53] node "multinode-289800" has status "Ready":"False"
	I0501 04:15:54.784587    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:54.784587    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:54.784587    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:54.784587    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:54.788391    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:15:54.789473    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:54.789528    4352 round_trippers.go:580]     Audit-Id: 31896e28-ad42-44d7-bab5-1eea08d5c50f
	I0501 04:15:54.789528    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:54.789528    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:54.789528    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:54.789528    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:54.789528    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:54 GMT
	I0501 04:15:54.789772    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1752","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0501 04:15:55.286356    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:55.286356    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:55.286356    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:55.286356    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:55.290854    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:15:55.290854    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:55.290854    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:55.290854    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:55.291225    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:55.291225    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:55.291225    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:55 GMT
	I0501 04:15:55.291225    4352 round_trippers.go:580]     Audit-Id: a8a942bb-061f-4162-b25e-cd3d146f4a1f
	I0501 04:15:55.291397    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1752","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0501 04:15:55.778111    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:55.778111    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:55.778111    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:55.778111    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:55.784090    4352 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 04:15:55.785102    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:55.785102    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:55.785102    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:55.785102    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:55.785102    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:55 GMT
	I0501 04:15:55.785102    4352 round_trippers.go:580]     Audit-Id: dcc40508-7d25-4230-b7ad-5c8ff24cec7e
	I0501 04:15:55.785166    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:55.785590    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:15:56.280270    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:56.280270    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:56.280270    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:56.280270    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:56.284475    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:15:56.284475    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:56.284475    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:56 GMT
	I0501 04:15:56.284475    4352 round_trippers.go:580]     Audit-Id: ef544fa9-9550-4e83-9281-5c270f5af74e
	I0501 04:15:56.284566    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:56.284566    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:56.284566    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:56.284566    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:56.284636    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:15:56.781859    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:56.781859    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:56.781859    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:56.781859    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:56.786560    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:15:56.786560    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:56.786560    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:56 GMT
	I0501 04:15:56.786560    4352 round_trippers.go:580]     Audit-Id: 80c38c85-3fad-4ae4-a83f-93fd20becab9
	I0501 04:15:56.786560    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:56.786560    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:56.786693    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:56.786693    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:56.787378    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:15:56.788095    4352 node_ready.go:53] node "multinode-289800" has status "Ready":"False"
	I0501 04:15:57.284544    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:57.284626    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:57.284626    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:57.284626    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:57.288535    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:15:57.289557    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:57.289557    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:57.289635    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:57.289635    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:57.289635    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:57 GMT
	I0501 04:15:57.289635    4352 round_trippers.go:580]     Audit-Id: 152017fc-7a4d-4e6b-b96a-ae3816222518
	I0501 04:15:57.289635    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:57.290083    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:15:57.785247    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:57.785247    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:57.785247    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:57.785247    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:57.788826    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:15:57.789259    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:57.789259    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:57.789259    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:57.789259    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:57 GMT
	I0501 04:15:57.789259    4352 round_trippers.go:580]     Audit-Id: 8ef1c498-eaf2-407d-b468-651d8f957ced
	I0501 04:15:57.789259    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:57.789259    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:57.789448    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:15:58.284301    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:58.284432    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:58.284432    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:58.284432    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:58.287917    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:15:58.287917    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:58.288947    4352 round_trippers.go:580]     Audit-Id: 9ee0f450-23ad-416c-b0cb-12b23ea707af
	I0501 04:15:58.288947    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:58.289014    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:58.289014    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:58.289014    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:58.289014    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:58 GMT
	I0501 04:15:58.289365    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:15:58.782461    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:58.782461    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:58.782461    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:58.782461    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:58.785850    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:15:58.786943    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:58.786978    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:58.786978    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:58 GMT
	I0501 04:15:58.786978    4352 round_trippers.go:580]     Audit-Id: 2361e6cd-3bbf-4c3a-bcac-644db6710a62
	I0501 04:15:58.786978    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:58.786978    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:58.786978    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:58.787370    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:15:59.280238    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:59.280298    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:59.280298    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:59.280298    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:59.283833    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:15:59.283833    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:59.283833    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:59 GMT
	I0501 04:15:59.283833    4352 round_trippers.go:580]     Audit-Id: 409797c8-6ed1-4a53-b286-69eaa7982225
	I0501 04:15:59.283833    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:59.283833    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:59.283833    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:59.283833    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:59.284879    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:15:59.285630    4352 node_ready.go:53] node "multinode-289800" has status "Ready":"False"
	I0501 04:15:59.781394    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:59.781394    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:59.781394    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:59.781394    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:59.787017    4352 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 04:15:59.787017    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:59.787208    4352 round_trippers.go:580]     Audit-Id: 58129097-ccd1-4628-829d-c2723e9a96ef
	I0501 04:15:59.787208    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:59.787208    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:59.787208    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:59.787208    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:59.787208    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:59 GMT
	I0501 04:15:59.787512    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:00.282421    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:00.282421    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:00.282421    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:00.282421    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:00.286686    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:00.286686    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:00.286686    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:00.286686    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:00 GMT
	I0501 04:16:00.286686    4352 round_trippers.go:580]     Audit-Id: bc24d01d-4737-4e59-99f1-811d1d5a37b0
	I0501 04:16:00.286686    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:00.286686    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:00.286686    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:00.286686    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:00.781873    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:00.781873    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:00.782100    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:00.782100    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:00.789095    4352 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 04:16:00.789095    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:00.789095    4352 round_trippers.go:580]     Audit-Id: 580acb65-7c60-4bb9-b08a-2e2c0a282e83
	I0501 04:16:00.789095    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:00.789095    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:00.789095    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:00.789095    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:00.789095    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:00 GMT
	I0501 04:16:00.789095    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:01.281789    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:01.281893    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:01.281893    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:01.281893    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:01.286265    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:01.286265    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:01.286265    4352 round_trippers.go:580]     Audit-Id: 127a026d-aed3-4c32-8fe1-82ffe2f6142f
	I0501 04:16:01.286265    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:01.286265    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:01.286650    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:01.286650    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:01.286650    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:01 GMT
	I0501 04:16:01.286754    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:01.287299    4352 node_ready.go:53] node "multinode-289800" has status "Ready":"False"
	I0501 04:16:01.777714    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:01.777798    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:01.777798    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:01.777798    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:01.781555    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:01.781854    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:01.781854    4352 round_trippers.go:580]     Audit-Id: f987c4c1-7de3-441c-858d-f0e0cd58f371
	I0501 04:16:01.781854    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:01.781854    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:01.781854    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:01.781854    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:01.781854    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:01 GMT
	I0501 04:16:01.782654    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:02.276440    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:02.276440    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:02.276440    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:02.276440    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:02.281296    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:02.281296    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:02.281296    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:02.281296    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:02.281777    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:02.281777    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:02 GMT
	I0501 04:16:02.281777    4352 round_trippers.go:580]     Audit-Id: d8112143-73ac-4f37-bdda-98e47db0572c
	I0501 04:16:02.281777    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:02.282345    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:02.774933    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:02.774933    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:02.774933    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:02.774933    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:02.778515    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:02.779403    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:02.779403    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:02.779403    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:02.779403    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:02.779403    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:02.779403    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:02 GMT
	I0501 04:16:02.779403    4352 round_trippers.go:580]     Audit-Id: b706864e-0b0f-45b3-b488-504163fe46bc
	I0501 04:16:02.779501    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:03.274107    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:03.274107    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:03.274341    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:03.274341    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:03.277851    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:03.277851    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:03.277851    4352 round_trippers.go:580]     Audit-Id: 18bf9435-3181-4b45-b60c-2432de6e8bbe
	I0501 04:16:03.277851    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:03.277851    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:03.277851    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:03.277851    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:03.277851    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:03 GMT
	I0501 04:16:03.278454    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:03.786776    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:03.786776    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:03.786776    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:03.786776    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:03.793984    4352 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 04:16:03.793984    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:03.793984    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:03 GMT
	I0501 04:16:03.794266    4352 round_trippers.go:580]     Audit-Id: f1eddf3a-ad39-41d1-b323-72439e875600
	I0501 04:16:03.794266    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:03.794266    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:03.794266    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:03.794266    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:03.794629    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:03.795267    4352 node_ready.go:53] node "multinode-289800" has status "Ready":"False"
	I0501 04:16:04.276874    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:04.276961    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:04.276961    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:04.276961    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:04.280505    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:04.280505    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:04.281430    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:04.281430    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:04 GMT
	I0501 04:16:04.281430    4352 round_trippers.go:580]     Audit-Id: 412a54b9-cf90-4b22-bc99-dbeeace3b317
	I0501 04:16:04.281430    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:04.281430    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:04.281430    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:04.281660    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:04.774293    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:04.774293    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:04.774293    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:04.774293    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:04.778996    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:04.778996    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:04.778996    4352 round_trippers.go:580]     Audit-Id: 1cd2da53-f2e5-4851-9a6a-f918b798f49d
	I0501 04:16:04.778996    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:04.778996    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:04.779394    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:04.779394    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:04.779394    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:04 GMT
	I0501 04:16:04.779477    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:05.273712    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:05.273712    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:05.273712    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:05.273712    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:05.278375    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:05.279224    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:05.279224    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:05.279224    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:05 GMT
	I0501 04:16:05.279224    4352 round_trippers.go:580]     Audit-Id: 3b7ff919-c9c0-4f68-acf5-c0d8f117a3a7
	I0501 04:16:05.279224    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:05.279224    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:05.279224    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:05.279479    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:05.787596    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:05.787596    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:05.787596    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:05.787596    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:05.791175    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:05.791876    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:05.791876    4352 round_trippers.go:580]     Audit-Id: 55509513-7a21-4389-99fb-4db955af6859
	I0501 04:16:05.791876    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:05.791876    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:05.791876    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:05.791876    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:05.791876    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:05 GMT
	I0501 04:16:05.792286    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:06.276021    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:06.276196    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:06.276196    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:06.276196    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:06.280606    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:06.281228    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:06.281228    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:06.281228    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:06.281228    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:06.281295    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:06 GMT
	I0501 04:16:06.281295    4352 round_trippers.go:580]     Audit-Id: 675ee840-2ea2-44fe-8a03-b42045e1f0e7
	I0501 04:16:06.281295    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:06.281586    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:06.282124    4352 node_ready.go:53] node "multinode-289800" has status "Ready":"False"
	I0501 04:16:06.779197    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:06.779197    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:06.779197    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:06.779197    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:06.784019    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:06.784019    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:06.784185    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:06 GMT
	I0501 04:16:06.784185    4352 round_trippers.go:580]     Audit-Id: 81c075a2-638c-4696-b9ba-156b8f6b071f
	I0501 04:16:06.784185    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:06.784185    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:06.784185    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:06.784185    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:06.784625    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:07.282365    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:07.282365    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:07.282365    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:07.282365    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:07.286287    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:07.286287    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:07.286287    4352 round_trippers.go:580]     Audit-Id: 527346f1-1644-4f27-a695-c38f3c37a301
	I0501 04:16:07.286287    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:07.286287    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:07.286287    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:07.286287    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:07.286287    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:07 GMT
	I0501 04:16:07.286287    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:07.781898    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:07.781898    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:07.782019    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:07.782019    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:07.786849    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:07.786849    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:07.786988    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:07.786988    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:07.786988    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:07 GMT
	I0501 04:16:07.786988    4352 round_trippers.go:580]     Audit-Id: 053c4546-2924-4540-ace3-7c91d714b209
	I0501 04:16:07.786988    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:07.786988    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:07.787251    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:08.279999    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:08.280226    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:08.280226    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:08.280226    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:08.284744    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:08.284970    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:08.284970    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:08.284970    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:08.284970    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:08.284970    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:08.284970    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:08 GMT
	I0501 04:16:08.284970    4352 round_trippers.go:580]     Audit-Id: 9c101801-8335-4058-ae95-40cff99cbd5d
	I0501 04:16:08.285228    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:08.285826    4352 node_ready.go:53] node "multinode-289800" has status "Ready":"False"
	I0501 04:16:08.780721    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:08.780721    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:08.780721    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:08.780721    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:08.784959    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:08.785185    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:08.785185    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:08.785185    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:08.785185    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:08 GMT
	I0501 04:16:08.785185    4352 round_trippers.go:580]     Audit-Id: 41b136d0-ba63-48f6-9150-3594e00186eb
	I0501 04:16:08.785185    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:08.785185    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:08.785391    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:09.282934    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:09.282999    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:09.283056    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:09.283056    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:09.291637    4352 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0501 04:16:09.291637    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:09.291637    4352 round_trippers.go:580]     Audit-Id: 735d921c-c3c5-48c3-b57f-cec1a64b7da1
	I0501 04:16:09.291637    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:09.291637    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:09.291637    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:09.291637    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:09.291637    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:09 GMT
	I0501 04:16:09.292402    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:09.784415    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:09.784482    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:09.784482    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:09.784543    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:09.790985    4352 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 04:16:09.791443    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:09.791443    4352 round_trippers.go:580]     Audit-Id: a1986b54-83fe-4122-bcf5-ed313aee165f
	I0501 04:16:09.791443    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:09.791443    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:09.791443    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:09.791443    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:09.791443    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:09 GMT
	I0501 04:16:09.791443    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:10.282583    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:10.282657    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:10.282657    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:10.282657    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:10.286598    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:10.286876    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:10.286876    4352 round_trippers.go:580]     Audit-Id: ecb65f41-7089-4d7b-bd9b-adfdde338412
	I0501 04:16:10.286876    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:10.286876    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:10.286876    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:10.286876    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:10.286876    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:10 GMT
	I0501 04:16:10.287189    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:10.287741    4352 node_ready.go:53] node "multinode-289800" has status "Ready":"False"
	I0501 04:16:10.779992    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:10.779992    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:10.779992    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:10.779992    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:10.784609    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:10.784609    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:10.784609    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:10.784876    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:10.784876    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:10.784876    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:10 GMT
	I0501 04:16:10.784876    4352 round_trippers.go:580]     Audit-Id: 5e3b983d-4933-4159-a8ba-8c2f248e4d84
	I0501 04:16:10.784876    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:10.785314    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:11.282786    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:11.282786    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:11.282786    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:11.282786    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:11.286414    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:11.287118    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:11.287118    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:11.287118    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:11.287118    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:11 GMT
	I0501 04:16:11.287118    4352 round_trippers.go:580]     Audit-Id: 0ab35d6a-8bdd-4a51-b2cc-da1f70758e59
	I0501 04:16:11.287118    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:11.287118    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:11.287331    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:11.781562    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:11.781562    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:11.781562    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:11.781562    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:11.785310    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:11.785310    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:11.785310    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:11.785310    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:11.785310    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:11.785310    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:11 GMT
	I0501 04:16:11.785310    4352 round_trippers.go:580]     Audit-Id: b680cdcf-00e1-4931-bc8f-fd19ece8fce2
	I0501 04:16:11.785310    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:11.786869    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:12.279100    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:12.279100    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:12.279100    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:12.279100    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:12.284605    4352 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 04:16:12.284605    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:12.284605    4352 round_trippers.go:580]     Audit-Id: 0bc560a5-5e5b-4137-899f-f2a011034f8f
	I0501 04:16:12.284605    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:12.284605    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:12.284605    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:12.285568    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:12.285633    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:12 GMT
	I0501 04:16:12.285808    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:12.777628    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:12.777628    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:12.777628    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:12.777628    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:12.781334    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:12.781334    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:12.781334    4352 round_trippers.go:580]     Audit-Id: 10ce18f9-4d86-4b6c-a244-e491bf165a3b
	I0501 04:16:12.781334    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:12.781896    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:12.781896    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:12.781896    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:12.781896    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:12 GMT
	I0501 04:16:12.782180    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:12.783217    4352 node_ready.go:53] node "multinode-289800" has status "Ready":"False"
	I0501 04:16:13.277312    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:13.277312    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:13.277312    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:13.277312    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:13.281031    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:13.281723    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:13.281723    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:13.281723    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:13.281723    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:13 GMT
	I0501 04:16:13.281723    4352 round_trippers.go:580]     Audit-Id: 6ed090f9-f02b-46b1-8ced-7eb682fa8f03
	I0501 04:16:13.281723    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:13.281723    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:13.281991    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:13.778150    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:13.778150    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:13.778150    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:13.778150    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:13.781820    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:13.781820    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:13.781820    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:13 GMT
	I0501 04:16:13.781820    4352 round_trippers.go:580]     Audit-Id: 226d9499-ed9a-49f4-95d1-c4264b7da82b
	I0501 04:16:13.781820    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:13.781820    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:13.782629    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:13.782629    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:13.782669    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:14.275454    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:14.275454    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:14.275454    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:14.275454    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:14.280017    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:14.280017    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:14.280017    4352 round_trippers.go:580]     Audit-Id: f10613b2-0dff-4f93-8436-c0ffdd5ab9f2
	I0501 04:16:14.280017    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:14.280017    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:14.280017    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:14.280017    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:14.280017    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:14 GMT
	I0501 04:16:14.281082    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:14.779083    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:14.779083    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:14.779083    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:14.779083    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:14.782926    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:14.783782    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:14.783782    4352 round_trippers.go:580]     Audit-Id: a5bfde4f-088d-409f-89dc-4199d535b4ee
	I0501 04:16:14.783782    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:14.783782    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:14.783782    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:14.783782    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:14.783782    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:14 GMT
	I0501 04:16:14.784653    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:14.784798    4352 node_ready.go:53] node "multinode-289800" has status "Ready":"False"
	I0501 04:16:15.280754    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:15.280994    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:15.281067    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:15.281067    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:15.284922    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:15.285331    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:15.285331    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:15 GMT
	I0501 04:16:15.285331    4352 round_trippers.go:580]     Audit-Id: 514cdae5-f5a2-4a39-80a6-e9c01a302d0c
	I0501 04:16:15.285331    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:15.285331    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:15.285331    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:15.285331    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:15.285563    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:15.779647    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:15.779647    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:15.779647    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:15.779647    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:15.782044    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:15.782883    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:15.782883    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:15.782883    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:15 GMT
	I0501 04:16:15.782883    4352 round_trippers.go:580]     Audit-Id: 293525a7-880e-449d-b825-0626fc8e39ac
	I0501 04:16:15.782883    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:15.782883    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:15.782883    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:15.783202    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:16.283269    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:16.283348    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:16.283348    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:16.283348    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:16.287675    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:16.288157    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:16.288157    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:16.288157    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:16.288157    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:16 GMT
	I0501 04:16:16.288157    4352 round_trippers.go:580]     Audit-Id: 81a7a62c-92a3-490d-823a-f088fd1db0ae
	I0501 04:16:16.288157    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:16.288157    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:16.288332    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:16.787615    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:16.787844    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:16.787844    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:16.787844    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:16.792509    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:16.792509    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:16.792581    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:16.792581    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:16.792581    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:16 GMT
	I0501 04:16:16.792581    4352 round_trippers.go:580]     Audit-Id: a8df7ba9-9350-4087-9759-fb183f00b90d
	I0501 04:16:16.792581    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:16.792581    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:16.793361    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1931","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5359 chars]
	I0501 04:16:16.793939    4352 node_ready.go:49] node "multinode-289800" has status "Ready":"True"
	I0501 04:16:16.794032    4352 node_ready.go:38] duration metric: took 28.5209417s for node "multinode-289800" to be "Ready" ...
	I0501 04:16:16.794032    4352 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
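[editor's note] The loop above and below is minikube polling the API server roughly every 500ms and re-reading the pod until its Ready condition flips to True or the 6m0s budget runs out. The following is a minimal stdlib-only Go sketch of that pattern, not minikube's actual pod_ready.go: the endpoint URL is taken from the log, but the TLS handling is a placeholder and a real client would also authenticate (bearer token or client certificate), which is omitted here.

    // Sketch: poll a pod's Ready condition until true or timeout.
    package main

    import (
        "context"
        "crypto/tls"
        "encoding/json"
        "fmt"
        "net/http"
        "time"
    )

    // podStatus models only the fields needed from the Pod JSON above.
    type podStatus struct {
        Status struct {
            Conditions []struct {
                Type   string `json:"type"`
                Status string `json:"status"`
            } `json:"conditions"`
        } `json:"status"`
    }

    // podReady performs one GET (as in the round_trippers lines above) and
    // reports whether the pod's Ready condition is currently "True".
    func podReady(ctx context.Context, client *http.Client, url string) (bool, error) {
        req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
        if err != nil {
            return false, err
        }
        req.Header.Set("Accept", "application/json, */*")
        resp, err := client.Do(req)
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()
        var p podStatus
        if err := json.NewDecoder(resp.Body).Decode(&p); err != nil {
            return false, err
        }
        for _, c := range p.Status.Conditions {
            if c.Type == "Ready" {
                return c.Status == "True", nil
            }
        }
        return false, nil
    }

    func main() {
        // Endpoint copied from the log; auth is deliberately omitted in this sketch.
        url := "https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq"
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only; never do this in production
        }}
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        ticker := time.NewTicker(500 * time.Millisecond) // matches the ~500ms cadence of the timestamps above
        defer ticker.Stop()
        for {
            ready, err := podReady(ctx, client, url)
            if err == nil && ready {
                fmt.Println("pod is Ready")
                return
            }
            select {
            case <-ctx.Done():
                fmt.Println("timed out waiting for pod to be Ready")
                return
            case <-ticker.C:
            }
        }
    }

Each iteration of this loop corresponds to one GET/Response-Headers/Response-Body group in the log below.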
	I0501 04:16:16.794182    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods
	I0501 04:16:16.794182    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:16.794259    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:16.794259    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:16.799522    4352 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 04:16:16.799522    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:16.799522    4352 round_trippers.go:580]     Audit-Id: 050004ab-c4e9-41e4-883e-0bd4c079851f
	I0501 04:16:16.799522    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:16.799522    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:16.799699    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:16.799699    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:16.799699    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:16 GMT
	I0501 04:16:16.801485    4352 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1931"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 94470 chars]
	I0501 04:16:16.806197    4352 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-8w9hq" in "kube-system" namespace to be "Ready" ...
	I0501 04:16:16.806390    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:16.806390    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:16.806390    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:16.806390    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:16.809394    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:16.809394    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:16.809394    4352 round_trippers.go:580]     Audit-Id: d260c6af-6d95-4e48-9d52-91351dfb04be
	I0501 04:16:16.809394    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:16.809394    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:16.809394    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:16.809394    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:16.809394    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:16 GMT
	I0501 04:16:16.809836    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:16.810074    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:16.810074    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:16.810074    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:16.810074    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:16.812673    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:16.812673    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:16.812673    4352 round_trippers.go:580]     Audit-Id: c4b374c2-cf45-43e5-9ef0-306e176eb3a7
	I0501 04:16:16.812673    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:16.812673    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:16.812673    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:16.812673    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:16.812673    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:16 GMT
	I0501 04:16:16.813955    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1931","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5359 chars]
	I0501 04:16:17.321570    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:17.321570    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:17.321570    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:17.321570    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:17.326033    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:17.326033    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:17.326033    4352 round_trippers.go:580]     Audit-Id: e96b80cd-3246-46b2-a271-bc2d14e84fd0
	I0501 04:16:17.326033    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:17.326033    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:17.326033    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:17.326033    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:17.326209    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:17 GMT
	I0501 04:16:17.326483    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:17.327085    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:17.327085    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:17.327085    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:17.327085    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:17.329700    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:17.329700    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:17.329700    4352 round_trippers.go:580]     Audit-Id: 81cab6cc-4d87-423d-a191-a4ca9c77fc54
	I0501 04:16:17.329700    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:17.329700    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:17.329700    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:17.329700    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:17.329700    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:17 GMT
	I0501 04:16:17.330792    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1931","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5359 chars]
	I0501 04:16:17.821612    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:17.821612    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:17.821612    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:17.821612    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:17.826237    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:17.826237    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:17.826237    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:17.826237    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:17 GMT
	I0501 04:16:17.826237    4352 round_trippers.go:580]     Audit-Id: 7f10bf3d-33e1-4f53-8d5e-33711ac1d613
	I0501 04:16:17.826237    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:17.826237    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:17.826237    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:17.827351    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:17.828614    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:17.828738    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:17.828738    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:17.828738    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:17.830982    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:17.830982    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:17.830982    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:17 GMT
	I0501 04:16:17.830982    4352 round_trippers.go:580]     Audit-Id: aadd4245-2a21-4202-8516-976397b3fb2d
	I0501 04:16:17.830982    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:17.830982    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:17.830982    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:17.830982    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:17.832082    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1931","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5359 chars]
	I0501 04:16:18.319020    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:18.319260    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:18.319260    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:18.319260    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:18.323644    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:18.324106    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:18.324106    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:18.324106    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:18.324106    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:18.324106    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:18.324106    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:18 GMT
	I0501 04:16:18.324106    4352 round_trippers.go:580]     Audit-Id: 75971dd8-801d-459d-88d7-dd2aeb442a01
	I0501 04:16:18.324624    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:18.325682    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:18.325682    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:18.325682    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:18.325682    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:18.327924    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:18.327924    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:18.327924    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:18.327924    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:18 GMT
	I0501 04:16:18.327924    4352 round_trippers.go:580]     Audit-Id: 2b56d2a4-2c48-447f-9704-3c924da491b7
	I0501 04:16:18.327924    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:18.327924    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:18.328396    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:18.328672    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1931","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5359 chars]
	I0501 04:16:18.818530    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:18.818638    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:18.818638    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:18.818638    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:18.823046    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:18.823253    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:18.823253    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:18 GMT
	I0501 04:16:18.823253    4352 round_trippers.go:580]     Audit-Id: 3fd87977-3509-4dda-acbf-7ae284dd4856
	I0501 04:16:18.823253    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:18.823253    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:18.823253    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:18.823253    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:18.823426    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:18.824692    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:18.824780    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:18.824780    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:18.824904    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:18.827496    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:18.827496    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:18.827496    4352 round_trippers.go:580]     Audit-Id: dd40c323-2562-48e8-8197-4664adbd4df8
	I0501 04:16:18.827842    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:18.827842    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:18.827842    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:18.827842    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:18.827842    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:18 GMT
	I0501 04:16:18.828056    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1931","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5359 chars]
	I0501 04:16:18.828563    4352 pod_ready.go:102] pod "coredns-7db6d8ff4d-8w9hq" in "kube-system" namespace has status "Ready":"False"
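[editor's note] The `has status "Ready":"False"` verdict above is derived from the status.conditions array inside the (truncated) Response Body. A small self-contained Go illustration of that check follows; the abbreviated JSON literal is an assumption standing in for the real pod body, which is elided in the log.

    // Sketch: extract the Ready condition from a Pod's status JSON.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Hypothetical, abbreviated stand-in for the truncated Response Body above.
        body := []byte(`{"status":{"conditions":[
            {"type":"PodScheduled","status":"True"},
            {"type":"Ready","status":"False"}]}}`)
        var pod struct {
            Status struct {
                Conditions []struct {
                    Type   string `json:"type"`
                    Status string `json:"status"`
                } `json:"conditions"`
            } `json:"status"`
        }
        if err := json.Unmarshal(body, &pod); err != nil {
            panic(err)
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == "Ready" {
                // Prints: pod has status "Ready":"False" -- the same shape as the log line above.
                fmt.Printf("pod has status %q:%q\n", c.Type, c.Status)
            }
        }
    }

The polling loop keeps iterating as long as this condition reads "False", which is why the same GET sequence repeats below.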
	I0501 04:16:19.317101    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:19.317101    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:19.317101    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:19.317101    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:19.321854    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:19.321854    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:19.321854    4352 round_trippers.go:580]     Audit-Id: 7e27b50d-a553-4d82-b13e-1d7740c9eed7
	I0501 04:16:19.321854    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:19.321854    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:19.321854    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:19.321954    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:19.321954    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:19 GMT
	I0501 04:16:19.322411    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:19.323130    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:19.323130    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:19.323130    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:19.323130    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:19.325696    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:19.326234    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:19.326234    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:19.326234    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:19.326234    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:19.326234    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:19 GMT
	I0501 04:16:19.326234    4352 round_trippers.go:580]     Audit-Id: c87f3c5c-d4d9-466b-80a2-7d8b78d44be6
	I0501 04:16:19.326310    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:19.326517    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1931","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5359 chars]
	I0501 04:16:19.817915    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:19.817915    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:19.818027    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:19.818027    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:19.822406    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:19.822867    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:19.822867    4352 round_trippers.go:580]     Audit-Id: 62b3dbf4-22ca-4aa7-b136-2dcdf632f3ac
	I0501 04:16:19.822867    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:19.822867    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:19.822867    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:19.822867    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:19.822867    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:19 GMT
	I0501 04:16:19.823142    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:19.823909    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:19.823970    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:19.823970    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:19.823970    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:19.827726    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:19.827726    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:19.827808    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:19.827808    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:19 GMT
	I0501 04:16:19.827808    4352 round_trippers.go:580]     Audit-Id: 1b11db93-1fae-44c9-913c-3571294123e7
	I0501 04:16:19.827808    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:19.827808    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:19.827808    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:19.828062    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1931","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5359 chars]
	I0501 04:16:20.316534    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:20.316599    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:20.316599    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:20.316599    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:20.321644    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:20.321644    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:20.321644    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:20.321644    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:20 GMT
	I0501 04:16:20.321644    4352 round_trippers.go:580]     Audit-Id: d0b99343-4d7d-48c1-8987-c23e631244d1
	I0501 04:16:20.321644    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:20.321644    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:20.321644    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:20.321644    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:20.322670    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:20.322774    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:20.322774    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:20.322858    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:20.324540    4352 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0501 04:16:20.324540    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:20.324540    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:20 GMT
	I0501 04:16:20.324540    4352 round_trippers.go:580]     Audit-Id: 6d3fb35a-de54-4e1f-9490-c6e0afd9174c
	I0501 04:16:20.324540    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:20.324540    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:20.324540    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:20.324540    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:20.325987    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1931","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5359 chars]
	I0501 04:16:20.820332    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:20.820423    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:20.820493    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:20.820493    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:20.825309    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:20.825562    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:20.825562    4352 round_trippers.go:580]     Audit-Id: 7236753d-bc54-43e1-83d6-9db984fee0b8
	I0501 04:16:20.825562    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:20.825562    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:20.825562    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:20.825562    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:20.825562    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:20 GMT
	I0501 04:16:20.826234    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:20.827333    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:20.827333    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:20.827333    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:20.827392    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:20.832053    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:20.832221    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:20.832221    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:20 GMT
	I0501 04:16:20.832282    4352 round_trippers.go:580]     Audit-Id: 360afb6c-cdbb-41cb-8d3f-68a66a5a75c5
	I0501 04:16:20.832282    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:20.832308    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:20.832409    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:20.832409    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:20.833010    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:20.833503    4352 pod_ready.go:102] pod "coredns-7db6d8ff4d-8w9hq" in "kube-system" namespace has status "Ready":"False"
	I0501 04:16:21.309470    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:21.309470    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:21.309594    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:21.309594    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:21.314012    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:21.314012    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:21.314012    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:21.314012    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:21.314012    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:21 GMT
	I0501 04:16:21.314012    4352 round_trippers.go:580]     Audit-Id: bc524f9c-a27a-4d3a-bde8-9beaa844ba38
	I0501 04:16:21.314012    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:21.314012    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:21.315004    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:21.315761    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:21.315761    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:21.315761    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:21.315761    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:21.318991    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:21.318991    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:21.318991    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:21.318991    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:21.318991    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:21 GMT
	I0501 04:16:21.318991    4352 round_trippers.go:580]     Audit-Id: 91a4bbf4-5e2a-49f6-838b-46a40d8c7bfc
	I0501 04:16:21.318991    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:21.318991    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:21.319349    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:21.818554    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:21.818645    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:21.818645    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:21.818747    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:21.822201    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:21.822201    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:21.822201    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:21 GMT
	I0501 04:16:21.822201    4352 round_trippers.go:580]     Audit-Id: 2b50f61b-33da-43a2-b888-a27e780c5ba7
	I0501 04:16:21.822201    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:21.822201    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:21.822201    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:21.822201    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:21.823601    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:21.825197    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:21.825311    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:21.825311    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:21.825311    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:21.829579    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:21.829729    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:21.829729    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:21.829729    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:21.829729    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:21.829729    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:21 GMT
	I0501 04:16:21.829729    4352 round_trippers.go:580]     Audit-Id: 9684aa08-a870-415d-9ae6-119943b415f9
	I0501 04:16:21.829729    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:21.830086    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:22.316747    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:22.316830    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:22.316830    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:22.316830    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:22.321248    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:22.321248    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:22.322032    4352 round_trippers.go:580]     Audit-Id: bd531d37-72d7-44a5-bd85-532f059df449
	I0501 04:16:22.322032    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:22.322032    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:22.322032    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:22.322032    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:22.322032    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:22 GMT
	I0501 04:16:22.322269    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:22.323089    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:22.323089    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:22.323089    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:22.323089    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:22.326457    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:22.326457    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:22.326457    4352 round_trippers.go:580]     Audit-Id: 989e17c6-1130-4bf2-a662-763c489be260
	I0501 04:16:22.326457    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:22.326457    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:22.326457    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:22.326942    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:22.326942    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:22 GMT
	I0501 04:16:22.327235    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:22.817837    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:22.817837    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:22.817837    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:22.817837    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:22.822494    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:22.822494    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:22.823021    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:22.823021    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:22.823021    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:22 GMT
	I0501 04:16:22.823021    4352 round_trippers.go:580]     Audit-Id: f05c0b08-a050-4049-a246-4cfe172b7f57
	I0501 04:16:22.823021    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:22.823021    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:22.823261    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:22.823923    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:22.824009    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:22.824044    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:22.824044    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:22.830075    4352 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 04:16:22.830075    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:22.830075    4352 round_trippers.go:580]     Audit-Id: e2984fb0-6f1a-4dea-8864-65a0d8e7387f
	I0501 04:16:22.830075    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:22.830075    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:22.830075    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:22.830075    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:22.830075    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:22 GMT
	I0501 04:16:22.830904    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:23.316589    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:23.316589    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:23.316589    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:23.316589    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:23.321200    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:23.321200    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:23.321200    4352 round_trippers.go:580]     Audit-Id: e31ed3e4-6009-484f-8cb0-428db34a53b7
	I0501 04:16:23.321200    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:23.321200    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:23.321200    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:23.321200    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:23.321200    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:23 GMT
	I0501 04:16:23.321200    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:23.321200    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:23.321200    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:23.322235    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:23.322235    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:23.324292    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:23.324917    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:23.324917    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:23.324917    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:23.324917    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:23 GMT
	I0501 04:16:23.324917    4352 round_trippers.go:580]     Audit-Id: 283d4092-6d88-4fa0-be50-453173e356b9
	I0501 04:16:23.324917    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:23.324917    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:23.324917    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:23.325689    4352 pod_ready.go:102] pod "coredns-7db6d8ff4d-8w9hq" in "kube-system" namespace has status "Ready":"False"
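
The block above is one full iteration of the readiness wait that produces these logs: GET the coredns pod, GET its node, then pod_ready.go concludes the pod is still not Ready and sleeps before the next probe (the timestamps advance by roughly 500ms per iteration). A minimal, self-contained Go sketch of that polling shape, assuming client-go and a kubeconfig at the default path; this is illustrative only, not minikube's actual pod_ready.go:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True; this is
    // the check that the pod_ready.go:102 lines above report as
    // `has status "Ready":"False"`.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Assumes a kubeconfig at the default location; minikube wires up
        // its own REST config instead.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Probe roughly every 500ms, matching the cadence of the log
        // timestamps, until the pod is Ready or the deadline passes.
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            pod, err := client.CoreV1().Pods("kube-system").Get(
                context.TODO(), "coredns-7db6d8ff4d-8w9hq", metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for pod to become Ready")
    }

Minikube's real loop also fetches and inspects the node on each pass, as the paired GET /nodes/multinode-289800 requests show; the sketch keeps only the retry skeleton that is visible in the log.
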
	I0501 04:16:23.817186    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:23.817186    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:23.817186    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:23.817186    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:23.820839    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:23.820839    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:23.820839    4352 round_trippers.go:580]     Audit-Id: 0901036d-adcd-42a5-be63-926fac058393
	I0501 04:16:23.820839    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:23.820839    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:23.820839    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:23.820839    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:23.821820    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:23 GMT
	I0501 04:16:23.822052    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:23.822855    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:23.822946    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:23.822946    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:23.822946    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:23.825849    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:23.825849    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:23.825849    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:23.825849    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:23.825849    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:23.825849    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:23.826023    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:23 GMT
	I0501 04:16:23.826023    4352 round_trippers.go:580]     Audit-Id: a9379409-b041-43c4-bc4c-b7b45eb4c291
	I0501 04:16:23.826384    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:24.319157    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:24.319157    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:24.319157    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:24.319157    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:24.323877    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:24.323877    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:24.323877    4352 round_trippers.go:580]     Audit-Id: 81285df0-dac6-42b0-af38-69145c972490
	I0501 04:16:24.323877    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:24.323877    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:24.323877    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:24.323877    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:24.323877    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:24 GMT
	I0501 04:16:24.324434    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:24.325607    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:24.325698    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:24.325698    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:24.325698    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:24.331084    4352 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 04:16:24.331084    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:24.331084    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:24.331084    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:24.331084    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:24.331084    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:24 GMT
	I0501 04:16:24.331084    4352 round_trippers.go:580]     Audit-Id: a70680e3-a313-4bf9-879f-624497c3c30e
	I0501 04:16:24.331084    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:24.331084    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:24.818740    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:24.818740    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:24.818740    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:24.818740    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:24.823254    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:24.823254    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:24.823254    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:24.823254    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:24 GMT
	I0501 04:16:24.823254    4352 round_trippers.go:580]     Audit-Id: b778dd97-8699-43d7-89c5-e24aa8a65a07
	I0501 04:16:24.823254    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:24.823741    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:24.823741    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:24.823923    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:24.824668    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:24.824668    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:24.824668    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:24.824668    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:24.827979    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:24.827979    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:24.827979    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:24.827979    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:24 GMT
	I0501 04:16:24.827979    4352 round_trippers.go:580]     Audit-Id: 834cfe03-11e1-498f-a1e1-cf2da60cd7b6
	I0501 04:16:24.828162    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:24.828162    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:24.828162    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:24.828595    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:25.313778    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:25.313778    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:25.313778    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:25.313778    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:25.317405    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:25.318424    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:25.318424    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:25.318424    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:25.318424    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:25 GMT
	I0501 04:16:25.318424    4352 round_trippers.go:580]     Audit-Id: d7b1b757-2128-457d-952c-0a043f9d172f
	I0501 04:16:25.318424    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:25.318424    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:25.318655    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:25.319674    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:25.319754    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:25.319754    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:25.319754    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:25.323023    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:25.323023    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:25.323023    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:25.323023    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:25.323023    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:25.323023    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:25 GMT
	I0501 04:16:25.323023    4352 round_trippers.go:580]     Audit-Id: a727c28e-0a2a-4f75-a39b-38db5a7147ef
	I0501 04:16:25.323023    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:25.323564    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:25.811432    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:25.811513    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:25.811513    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:25.811513    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:25.815503    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:25.815503    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:25.815503    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:25.815503    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:25.815503    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:25 GMT
	I0501 04:16:25.815503    4352 round_trippers.go:580]     Audit-Id: ebc60e7e-206d-4b9c-b3e5-308ae679b33d
	I0501 04:16:25.815503    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:25.815503    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:25.815772    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:25.816351    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:25.816509    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:25.816509    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:25.816509    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:25.818761    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:25.819255    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:25.819255    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:25.819255    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:25 GMT
	I0501 04:16:25.819255    4352 round_trippers.go:580]     Audit-Id: 083ee0c2-1a4e-49dd-a15b-72018a6364ce
	I0501 04:16:25.819255    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:25.819255    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:25.819255    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:25.819255    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:25.820137    4352 pod_ready.go:102] pod "coredns-7db6d8ff4d-8w9hq" in "kube-system" namespace has status "Ready":"False"
	I0501 04:16:26.314159    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:26.314159    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:26.314159    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:26.314159    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:26.318901    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:26.319002    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:26.319002    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:26 GMT
	I0501 04:16:26.319002    4352 round_trippers.go:580]     Audit-Id: 6155ae63-0951-4263-8b61-926605eb8751
	I0501 04:16:26.319002    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:26.319002    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:26.319002    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:26.319002    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:26.319338    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:26.320050    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:26.320130    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:26.320130    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:26.320130    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:26.322386    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:26.322386    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:26.322386    4352 round_trippers.go:580]     Audit-Id: fa069478-4099-4084-9493-3c0cb128ba57
	I0501 04:16:26.322386    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:26.322386    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:26.322386    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:26.322386    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:26.322386    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:26 GMT
	I0501 04:16:26.323623    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:26.814772    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:26.814772    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:26.814901    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:26.814901    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:26.818397    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:26.819266    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:26.819266    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:26 GMT
	I0501 04:16:26.819266    4352 round_trippers.go:580]     Audit-Id: 97e310c6-47bd-407f-aa4b-1e1292313dd3
	I0501 04:16:26.819266    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:26.819266    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:26.819266    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:26.819266    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:26.819607    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:26.820918    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:26.820918    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:26.820918    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:26.820918    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:26.825448    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:26.825544    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:26.825544    4352 round_trippers.go:580]     Audit-Id: 37f0a0f3-37f6-43bc-953c-2560e8523f51
	I0501 04:16:26.825544    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:26.825544    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:26.825544    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:26.825544    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:26.825544    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:26 GMT
	I0501 04:16:26.825912    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:27.313128    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:27.313227    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:27.313227    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:27.313227    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:27.317548    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:27.317649    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:27.317649    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:27 GMT
	I0501 04:16:27.317649    4352 round_trippers.go:580]     Audit-Id: f0a20432-f5b4-4b84-8a80-14530e7d80e7
	I0501 04:16:27.317649    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:27.317649    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:27.317649    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:27.317649    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:27.317932    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:27.318817    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:27.318817    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:27.318817    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:27.318817    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:27.328230    4352 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0501 04:16:27.328230    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:27.328230    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:27.328230    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:27.328230    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:27.328230    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:27 GMT
	I0501 04:16:27.328230    4352 round_trippers.go:580]     Audit-Id: b4308ccc-0003-4194-a5b8-7412f1cac1f0
	I0501 04:16:27.328230    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:27.329251    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:27.813953    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:27.814066    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:27.814066    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:27.814066    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:27.818512    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:27.819041    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:27.819041    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:27.819041    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:27 GMT
	I0501 04:16:27.819041    4352 round_trippers.go:580]     Audit-Id: e50c69f8-ce3c-4e3d-8acd-4950a65f682b
	I0501 04:16:27.819041    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:27.819041    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:27.819041    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:27.819281    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:27.819878    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:27.819878    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:27.819878    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:27.819878    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:27.823068    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:27.823336    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:27.823423    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:27 GMT
	I0501 04:16:27.823423    4352 round_trippers.go:580]     Audit-Id: d818f2ab-838b-49c5-842c-d9fe922d6d76
	I0501 04:16:27.823423    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:27.823423    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:27.823423    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:27.823423    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:27.823751    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:27.824393    4352 pod_ready.go:102] pod "coredns-7db6d8ff4d-8w9hq" in "kube-system" namespace has status "Ready":"False"
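
Each "Response Body" above is the raw Pod object, truncated by the logger before it reaches the status section; the `"Ready":"False"` verdict on the line above is derived from the Ready entry in the pod's status.conditions. A small self-contained sketch, using a hypothetical conditions fragment (the report never shows the real one), of how that verdict falls out of the JSON:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Just enough of the Pod schema to reach status.conditions.
    type podCondition struct {
        Type   string `json:"type"`
        Status string `json:"status"`
    }

    type pod struct {
        Status struct {
            Conditions []podCondition `json:"conditions"`
        } `json:"status"`
    }

    func main() {
        // Hypothetical fragment; the real bodies above are cut off
        // ("[truncated 6843 chars]") before the status appears.
        raw := []byte(`{"status":{"conditions":[
            {"type":"Initialized","status":"True"},
            {"type":"Ready","status":"False"},
            {"type":"ContainersReady","status":"False"}]}}`)

        var p pod
        if err := json.Unmarshal(raw, &p); err != nil {
            panic(err)
        }
        for _, c := range p.Status.Conditions {
            if c.Type == "Ready" {
                // Corresponds to the pod_ready.go:102 verdict above.
                fmt.Printf("pod condition %s: %s\n", c.Type, c.Status)
            }
        }
    }

Until that condition flips to True, the wait loop keeps re-issuing the same pair of GETs, which is why the surrounding log repeats with only timestamps and Audit-Ids changing.
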
	I0501 04:16:28.310861    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:28.310861    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:28.310861    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:28.310947    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:28.314248    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:28.314248    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:28.314248    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:28 GMT
	I0501 04:16:28.314248    4352 round_trippers.go:580]     Audit-Id: 9986cd0b-09e8-410c-bcb6-057ad45cee9d
	I0501 04:16:28.314248    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:28.314248    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:28.314248    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:28.314248    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:28.315078    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:28.315849    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:28.315907    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:28.315907    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:28.315907    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:28.318816    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:28.318886    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:28.318886    4352 round_trippers.go:580]     Audit-Id: ccdda6e2-5208-4eee-8f8d-16441b853e0e
	I0501 04:16:28.318886    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:28.318886    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:28.318886    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:28.318886    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:28.318886    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:28 GMT
	I0501 04:16:28.319398    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:28.812245    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:28.812370    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:28.812370    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:28.812370    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:28.817262    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:28.817536    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:28.817536    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:28.817536    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:28.817536    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:28.817536    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:28 GMT
	I0501 04:16:28.817536    4352 round_trippers.go:580]     Audit-Id: 9d906dfe-1f9e-44fc-b517-cfd18e18f34f
	I0501 04:16:28.817536    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:28.818478    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:28.819006    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:28.819006    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:28.819006    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:28.819006    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:28.821610    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:28.821610    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:28.821610    4352 round_trippers.go:580]     Audit-Id: 81c2a6a1-5aa0-42c3-b12f-a9b1ba482460
	I0501 04:16:28.822670    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:28.822732    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:28.822774    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:28.822774    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:28.822774    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:28 GMT
	I0501 04:16:28.823100    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:29.313081    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:29.313081    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:29.313081    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:29.313081    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:29.318095    4352 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 04:16:29.318095    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:29.318095    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:29.318095    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:29 GMT
	I0501 04:16:29.318095    4352 round_trippers.go:580]     Audit-Id: 3d4fce96-bab5-43f8-9a3c-4a9bd918cd83
	I0501 04:16:29.318095    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:29.318095    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:29.318095    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:29.318095    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:29.319232    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:29.319232    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:29.319293    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:29.319293    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:29.333692    4352 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0501 04:16:29.333692    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:29.333692    4352 round_trippers.go:580]     Audit-Id: c00fe54a-08de-4a55-bb1d-beba5ea0bb34
	I0501 04:16:29.333692    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:29.333692    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:29.333692    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:29.333692    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:29.333692    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:29 GMT
	I0501 04:16:29.334132    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:29.811929    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:29.812038    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:29.812038    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:29.812038    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:29.816220    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:29.816220    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:29.816220    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:29.816220    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:29 GMT
	I0501 04:16:29.816220    4352 round_trippers.go:580]     Audit-Id: dd1f9929-d6d6-4aee-b394-03b8c7136961
	I0501 04:16:29.816220    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:29.816220    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:29.816220    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:29.817087    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:29.817789    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:29.818352    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:29.818352    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:29.818352    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:29.822112    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:29.822112    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:29.822112    4352 round_trippers.go:580]     Audit-Id: 4f7a9b1b-79a0-41e9-bf6e-a0096656d4d7
	I0501 04:16:29.822112    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:29.822112    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:29.822112    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:29.822112    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:29.822112    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:29 GMT
	I0501 04:16:29.822352    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:30.306888    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:30.306888    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:30.307025    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:30.307025    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:30.310450    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:30.310884    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:30.310884    4352 round_trippers.go:580]     Audit-Id: 573fcf2e-208c-4a0d-8d79-7dd435a2e58b
	I0501 04:16:30.310884    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:30.310884    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:30.310884    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:30.310884    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:30.310884    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:30 GMT
	I0501 04:16:30.310884    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:30.311752    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:30.311752    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:30.311752    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:30.311752    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:30.314341    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:30.314341    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:30.315151    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:30.315151    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:30.315151    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:30.315151    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:30 GMT
	I0501 04:16:30.315151    4352 round_trippers.go:580]     Audit-Id: 27ce241c-6ddc-4ad0-9c6c-36bced236f74
	I0501 04:16:30.315151    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:30.315430    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:30.315795    4352 pod_ready.go:102] pod "coredns-7db6d8ff4d-8w9hq" in "kube-system" namespace has status "Ready":"False"
	I0501 04:16:30.815525    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:30.815749    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:30.815749    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:30.815749    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:30.819957    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:30.819957    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:30.819957    4352 round_trippers.go:580]     Audit-Id: be06eea0-bfdd-45e0-b3f6-df8bd6e26364
	I0501 04:16:30.819957    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:30.820547    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:30.820547    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:30.820547    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:30.820547    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:30 GMT
	I0501 04:16:30.820763    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:30.821511    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:30.821576    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:30.821576    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:30.821576    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:30.823734    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:30.823734    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:30.823734    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:30.823734    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:30.823734    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:30 GMT
	I0501 04:16:30.823734    4352 round_trippers.go:580]     Audit-Id: 1c2077ae-ea8e-4e17-bb2a-3e60ef1cae35
	I0501 04:16:30.823734    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:30.823734    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:30.824624    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:31.312798    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:31.312798    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:31.312915    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:31.312915    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:31.317375    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:31.317723    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:31.317783    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:31 GMT
	I0501 04:16:31.317783    4352 round_trippers.go:580]     Audit-Id: 5a638a9d-67cd-4a80-819b-166467a4b708
	I0501 04:16:31.317783    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:31.317783    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:31.317783    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:31.317783    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:31.317783    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:31.319065    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:31.319153    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:31.319153    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:31.319153    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:31.322779    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:31.322779    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:31.322989    4352 round_trippers.go:580]     Audit-Id: c79c82d7-e839-42ac-8b7a-afc2771d7144
	I0501 04:16:31.323148    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:31.323215    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:31.323215    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:31.323215    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:31.323215    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:31 GMT
	I0501 04:16:31.323215    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:31.810660    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:31.810660    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:31.810740    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:31.810740    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:31.816076    4352 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 04:16:31.816076    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:31.816323    4352 round_trippers.go:580]     Audit-Id: b7d23653-018b-41af-a27c-18d0a21ea855
	I0501 04:16:31.816323    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:31.816323    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:31.816323    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:31.816323    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:31.816323    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:31 GMT
	I0501 04:16:31.816454    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:31.817213    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:31.817213    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:31.817213    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:31.817213    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:31.819563    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:31.819563    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:31.819563    4352 round_trippers.go:580]     Audit-Id: 969de7f0-62de-427a-81cb-00aa4bc2a125
	I0501 04:16:31.819563    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:31.819563    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:31.819563    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:31.820391    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:31.820391    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:31 GMT
	I0501 04:16:31.820707    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:32.309679    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:32.309900    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:32.309900    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:32.309900    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:32.313252    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:32.314134    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:32.314134    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:32 GMT
	I0501 04:16:32.314134    4352 round_trippers.go:580]     Audit-Id: 4cb96fc0-d7f4-4cf4-922a-be282b23755e
	I0501 04:16:32.314134    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:32.314134    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:32.314134    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:32.314134    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:32.314393    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:32.315125    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:32.315125    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:32.315125    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:32.315125    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:32.317978    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:32.318510    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:32.318510    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:32.318510    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:32.318510    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:32.318510    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:32.318510    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:32 GMT
	I0501 04:16:32.318510    4352 round_trippers.go:580]     Audit-Id: 98c6b91e-4d13-4caf-b4ff-9e28ac69c82f
	I0501 04:16:32.318510    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:32.319285    4352 pod_ready.go:102] pod "coredns-7db6d8ff4d-8w9hq" in "kube-system" namespace has status "Ready":"False"
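[editor's note] The X-Kubernetes-Pf-Flowschema-Uid and X-Kubernetes-Pf-Prioritylevel-Uid headers on every response name the API Priority and Fairness objects that admitted the request (the same pair recurs throughout, so all of these GETs flow through one FlowSchema and one PriorityLevelConfiguration), and the Audit-Id correlates each response with the apiserver's audit log. If you need to map a UID from these headers back to an object name, a sketch like the following works; FlowcontrolV1 assumes a flowcontrol.apiserver.k8s.io/v1 server (Kubernetes 1.29+, with v1beta3 the fallback on older clusters), and the UID constant is copied from the log above.

// Resolve an APF FlowSchema UID (as seen in the X-Kubernetes-Pf-Flowschema-Uid
// response header) back to its name. Sketch; assumes flowcontrol/v1.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	const uid = "0efc65b5-b651-4f8a-960e-a2d7d21397d5" // from the log above

	list, err := cs.FlowcontrolV1().FlowSchemas().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, fs := range list.Items {
		if string(fs.UID) == uid {
			fmt.Printf("flowschema %q matched UID %s\n", fs.Name, uid)
		}
	}
}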
	I0501 04:16:32.808649    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:32.808649    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:32.808649    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:32.808649    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:32.811244    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:32.811244    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:32.811244    4352 round_trippers.go:580]     Audit-Id: 23e0bd42-42c3-4abd-acfa-0ade72ff458a
	I0501 04:16:32.811244    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:32.812258    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:32.812258    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:32.812258    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:32.812258    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:32 GMT
	I0501 04:16:32.812454    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:32.813325    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:32.813325    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:32.813325    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:32.813325    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:32.816066    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:32.816066    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:32.816066    4352 round_trippers.go:580]     Audit-Id: 4220837a-f802-471f-9909-fc23a4dcb1d8
	I0501 04:16:32.816066    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:32.816066    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:32.816066    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:32.816066    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:32.816970    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:32 GMT
	I0501 04:16:32.817237    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:33.307202    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:33.307202    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:33.307202    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:33.307425    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:33.311293    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:33.311532    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:33.311532    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:33.311532    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:33 GMT
	I0501 04:16:33.311532    4352 round_trippers.go:580]     Audit-Id: b1bacc20-cd02-4561-b2ff-bec3c53496a0
	I0501 04:16:33.311611    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:33.311611    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:33.311611    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:33.311803    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:33.312605    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:33.312605    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:33.312605    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:33.312682    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:33.314336    4352 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0501 04:16:33.315143    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:33.315143    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:33.315210    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:33.315210    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:33.315210    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:33.315210    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:33 GMT
	I0501 04:16:33.315210    4352 round_trippers.go:580]     Audit-Id: 06295fb7-9662-46f1-b0de-dd0404d5f802
	I0501 04:16:33.315678    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:33.820257    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:33.820257    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:33.820257    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:33.820257    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:33.823821    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:33.823821    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:33.823821    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:33.823821    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:33 GMT
	I0501 04:16:33.823821    4352 round_trippers.go:580]     Audit-Id: f06ef77f-f188-4e42-a1ed-0433c4bdc5d4
	I0501 04:16:33.823821    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:33.824912    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:33.824912    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:33.825732    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:33.825893    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:33.825893    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:33.825893    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:33.825893    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:33.829697    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:33.829697    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:33.829697    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:33.829697    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:33.829697    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:33.830195    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:33 GMT
	I0501 04:16:33.830195    4352 round_trippers.go:580]     Audit-Id: 7c3e4a49-a523-4783-9551-453d8888aa4e
	I0501 04:16:33.830195    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:33.830258    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:34.320389    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:34.320619    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:34.320619    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:34.320619    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:34.323893    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:34.324583    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:34.324583    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:34.324583    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:34.324583    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:34.324583    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:34.324583    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:34 GMT
	I0501 04:16:34.324583    4352 round_trippers.go:580]     Audit-Id: d9ec8939-2ee9-45e1-83f9-b16aa96e9726
	I0501 04:16:34.324855    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:34.325519    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:34.325519    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:34.325519    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:34.325519    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:34.329186    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:34.329186    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:34.329186    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:34.329186    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:34.329186    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:34.329186    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:34.329186    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:34 GMT
	I0501 04:16:34.329186    4352 round_trippers.go:580]     Audit-Id: fb013243-8426-4315-a9cb-0dde6493d16c
	I0501 04:16:34.329186    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:34.329814    4352 pod_ready.go:102] pod "coredns-7db6d8ff4d-8w9hq" in "kube-system" namespace has status "Ready":"False"
	I0501 04:16:34.820408    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:34.820408    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:34.820408    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:34.820408    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:34.824819    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:34.824819    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:34.825317    4352 round_trippers.go:580]     Audit-Id: 58dd9c85-13b3-48e1-a6ab-02566d767ab0
	I0501 04:16:34.825317    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:34.825317    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:34.825317    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:34.825317    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:34.825317    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:34 GMT
	I0501 04:16:34.825507    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:34.826342    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:34.826342    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:34.826342    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:34.826342    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:34.829594    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:34.829594    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:34.829594    4352 round_trippers.go:580]     Audit-Id: 4c288c98-57f1-488b-a370-4af881430ca8
	I0501 04:16:34.829594    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:34.829594    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:34.829594    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:34.829594    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:34.829594    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:34 GMT
	I0501 04:16:34.830723    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:35.307080    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:35.307080    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:35.307370    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:35.307370    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:35.312613    4352 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 04:16:35.312613    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:35.312893    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:35.312893    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:35.312893    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:35 GMT
	I0501 04:16:35.312893    4352 round_trippers.go:580]     Audit-Id: 18383b8d-b903-47ea-aa0a-6481b861c5fe
	I0501 04:16:35.312893    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:35.312893    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:35.313143    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:35.313983    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:35.313983    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:35.313983    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:35.313983    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:35.316962    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:35.317260    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:35.317260    4352 round_trippers.go:580]     Audit-Id: b87773c6-17de-4dc4-9f96-1d8e9ffdff64
	I0501 04:16:35.317260    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:35.317260    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:35.317260    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:35.317260    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:35.317260    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:35 GMT
	I0501 04:16:35.318272    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:35.820933    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:35.820933    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:35.820933    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:35.820933    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:35.824802    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:35.824802    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:35.825812    4352 round_trippers.go:580]     Audit-Id: 1b38b88e-0295-4a87-b7eb-e3dc709abb80
	I0501 04:16:35.825812    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:35.825812    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:35.825812    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:35.825812    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:35.825812    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:35 GMT
	I0501 04:16:35.826149    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:35.826461    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:35.826461    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:35.826461    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:35.826461    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:35.831246    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:35.831246    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:35.831246    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:35.831246    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:35 GMT
	I0501 04:16:35.831246    4352 round_trippers.go:580]     Audit-Id: 447a3977-f969-494f-8de2-0f19cc116af2
	I0501 04:16:35.831246    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:35.831246    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:35.831246    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:35.831966    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:36.307267    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:36.307442    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:36.307442    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:36.307442    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:36.312159    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:36.312159    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:36.312159    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:36.312159    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:36.312159    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:36 GMT
	I0501 04:16:36.312159    4352 round_trippers.go:580]     Audit-Id: 2c7f2e9f-891e-44c6-84c8-15d6ab08f4d5
	I0501 04:16:36.312159    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:36.312159    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:36.312159    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:36.313513    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:36.313513    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:36.313578    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:36.313578    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:36.316314    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:36.316314    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:36.316314    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:36.316314    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:36.316314    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:36.316314    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:36.316314    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:36 GMT
	I0501 04:16:36.316314    4352 round_trippers.go:580]     Audit-Id: b66202a8-fae1-47a4-a9dd-67a5872b5a63
	I0501 04:16:36.317462    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:36.810561    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:36.810683    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:36.810683    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:36.810683    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:36.817590    4352 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 04:16:36.817590    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:36.817665    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:36.817665    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:36.817665    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:36.817665    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:36 GMT
	I0501 04:16:36.817665    4352 round_trippers.go:580]     Audit-Id: c01ada7e-5e8c-41fa-839e-9883969bf6c4
	I0501 04:16:36.817665    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:36.818212    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:36.819170    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:36.819170    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:36.819170    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:36.819170    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:36.824401    4352 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 04:16:36.824401    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:36.824401    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:36.824401    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:36 GMT
	I0501 04:16:36.824401    4352 round_trippers.go:580]     Audit-Id: 2d981cab-3c10-4df2-9a3d-44873e837195
	I0501 04:16:36.824401    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:36.824401    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:36.824401    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:36.825832    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:36.825867    4352 pod_ready.go:102] pod "coredns-7db6d8ff4d-8w9hq" in "kube-system" namespace has status "Ready":"False"
	I0501 04:16:37.312365    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:37.312635    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:37.312635    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:37.312635    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:37.316962    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:37.317253    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:37.317253    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:37 GMT
	I0501 04:16:37.317253    4352 round_trippers.go:580]     Audit-Id: 27e287d8-df29-4bcb-874d-59dd127f1e1c
	I0501 04:16:37.317253    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:37.317253    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:37.317253    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:37.317253    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:37.317484    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:37.318195    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:37.318195    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:37.318195    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:37.318195    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:37.323974    4352 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 04:16:37.323974    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:37.323974    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:37.323974    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:37.323974    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:37.324128    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:37 GMT
	I0501 04:16:37.324128    4352 round_trippers.go:580]     Audit-Id: 0de19efb-b341-4ea6-b483-dcda9d658a0f
	I0501 04:16:37.324128    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:37.324885    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:37.815618    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:37.815618    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:37.815739    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:37.815739    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:37.821154    4352 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 04:16:37.821154    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:37.821154    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:37.821154    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:37.821154    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:37.821154    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:37.821154    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:37 GMT
	I0501 04:16:37.821670    4352 round_trippers.go:580]     Audit-Id: 6fd6ae4b-bc8e-4bcc-82bb-850115e1fbd8
	I0501 04:16:37.821979    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:37.822740    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:37.822818    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:37.822818    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:37.822818    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:37.825565    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:37.825565    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:37.825565    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:37.825565    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:37.825565    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:37 GMT
	I0501 04:16:37.825565    4352 round_trippers.go:580]     Audit-Id: 78890d41-1331-4d5c-bcd2-561fd0335438
	I0501 04:16:37.825565    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:37.825565    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:37.826550    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:38.315480    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:38.315814    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:38.315814    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:38.315814    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:38.319859    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:38.319859    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:38.319859    4352 round_trippers.go:580]     Audit-Id: 013133a8-bce9-4944-b43f-60d4a32d9cd6
	I0501 04:16:38.319859    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:38.319859    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:38.319859    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:38.319859    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:38.319859    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:38 GMT
	I0501 04:16:38.321292    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:38.321999    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:38.322060    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:38.322060    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:38.322060    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:38.324758    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:38.324758    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:38.324758    4352 round_trippers.go:580]     Audit-Id: b13f298f-dd89-48d8-8f25-35814342a5b7
	I0501 04:16:38.324758    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:38.324758    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:38.324758    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:38.324758    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:38.324758    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:38 GMT
	I0501 04:16:38.325476    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:38.815382    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:38.815382    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:38.815382    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:38.815467    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:38.819174    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:38.819174    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:38.819174    4352 round_trippers.go:580]     Audit-Id: 739af986-8e7a-412c-ae36-8d0c22198a26
	I0501 04:16:38.819174    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:38.819174    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:38.819174    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:38.819174    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:38.819174    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:38 GMT
	I0501 04:16:38.820494    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:38.821336    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:38.821450    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:38.821450    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:38.821450    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:38.826874    4352 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 04:16:38.826874    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:38.826874    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:38.826874    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:38 GMT
	I0501 04:16:38.826874    4352 round_trippers.go:580]     Audit-Id: 10634919-4852-4be2-aa4e-ef82afd68924
	I0501 04:16:38.826874    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:38.826874    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:38.826874    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:38.826874    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:38.827615    4352 pod_ready.go:102] pod "coredns-7db6d8ff4d-8w9hq" in "kube-system" namespace has status "Ready":"False"
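	[editor's note] The cycle repeated above (GET the coredns pod, GET the node it runs on, then log the pod's Ready condition roughly every half second) is a standard client-go poll-until-ready loop, driven here by minikube's pod_ready wait. The sketch below is a minimal, hypothetical reconstruction of that pattern, not minikube's actual pod_ready implementation: it assumes a recent client-go, the kubeconfig path and the 6-minute timeout are placeholders, and the pod, node, and namespace names are simply the ones visible in the log.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load credentials the way kubectl does (path is a placeholder).
		config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		// Poll every 500ms, matching the cadence of the paired GETs in the
		// log, until the pod's Ready condition is True or the (illustrative)
		// timeout expires.
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := clientset.CoreV1().Pods("kube-system").Get(ctx, "coredns-7db6d8ff4d-8w9hq", metav1.GetOptions{})
				if err != nil {
					return false, err
				}
				// Mirror the second GET in each log cycle: the waiter also
				// re-reads the node the pod is scheduled on.
				if _, err := clientset.CoreV1().Nodes().Get(ctx, "multinode-289800", metav1.GetOptions{}); err != nil {
					return false, err
				}
				for _, cond := range pod.Status.Conditions {
					if cond.Type == corev1.PodReady {
						return cond.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil // no Ready condition reported yet; keep polling
			})
		fmt.Println("wait finished:", err)
	}

	Each iteration issues exactly the two requests visible above; while the condition keeps returning false, the log keeps printing "Ready":"False", and the wait only returns once the pod reports Ready or the timeout runs out.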
	I0501 04:16:39.315177    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:39.315177    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:39.315177    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:39.315177    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:39.319879    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:39.319879    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:39.319879    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:39.319879    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:39 GMT
	I0501 04:16:39.319879    4352 round_trippers.go:580]     Audit-Id: 35c9e661-a6f7-4817-9714-70ab2e75b894
	I0501 04:16:39.320767    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:39.320767    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:39.320767    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:39.321008    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:39.321885    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:39.321885    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:39.321885    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:39.321885    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:39.325248    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:39.325248    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:39.325248    4352 round_trippers.go:580]     Audit-Id: 08640a8a-a836-401e-b9a4-48ab4ddb050e
	I0501 04:16:39.325248    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:39.325721    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:39.325721    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:39.325721    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:39.325721    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:39 GMT
	I0501 04:16:39.325791    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:39.815850    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:39.816005    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:39.816005    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:39.816005    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:39.821462    4352 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 04:16:39.821462    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:39.821462    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:39 GMT
	I0501 04:16:39.821462    4352 round_trippers.go:580]     Audit-Id: 0bd931a6-6870-424e-9a62-342e10f92b01
	I0501 04:16:39.821462    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:39.822458    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:39.822458    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:39.822481    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:39.823098    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:39.824385    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:39.824385    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:39.824385    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:39.824458    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:39.827261    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:39.827261    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:39.827261    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:39 GMT
	I0501 04:16:39.827261    4352 round_trippers.go:580]     Audit-Id: ed84fb05-7161-4a46-8b48-db2af233d62d
	I0501 04:16:39.827261    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:39.827667    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:39.827667    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:39.827667    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:39.828048    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:40.314215    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:40.314324    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:40.314324    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:40.314324    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:40.318700    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:40.318784    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:40.318784    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:40.318784    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:40.318784    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:40 GMT
	I0501 04:16:40.318784    4352 round_trippers.go:580]     Audit-Id: 098ff21d-9149-4af3-a15f-e01c4b362553
	I0501 04:16:40.318784    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:40.318784    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:40.318971    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:40.319686    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:40.319686    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:40.319686    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:40.319686    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:40.322867    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:40.323838    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:40.323838    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:40.323909    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:40.323909    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:40.323909    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:40.323909    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:40 GMT
	I0501 04:16:40.323909    4352 round_trippers.go:580]     Audit-Id: 9754cb84-13ee-4b31-8041-801f08cd591d
	I0501 04:16:40.324176    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:40.813790    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:40.813790    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:40.813790    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:40.813790    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:40.819977    4352 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 04:16:40.819977    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:40.819977    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:40.819977    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:40.819977    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:40 GMT
	I0501 04:16:40.819977    4352 round_trippers.go:580]     Audit-Id: 63767237-36ee-4e9a-a476-31383824a40c
	I0501 04:16:40.819977    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:40.820579    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:40.821357    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:40.822348    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:40.822348    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:40.822348    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:40.822348    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:40.826243    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:40.826243    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:40.826243    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:40 GMT
	I0501 04:16:40.826243    4352 round_trippers.go:580]     Audit-Id: 5cb431a9-4faa-4c61-9777-c21172a8876d
	I0501 04:16:40.826243    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:40.826243    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:40.826243    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:40.826243    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:40.826243    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:41.308554    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:41.308554    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:41.308554    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:41.308554    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:41.313203    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:41.313357    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:41.313357    4352 round_trippers.go:580]     Audit-Id: 85d0f24d-e02a-4579-80f1-d0622cb1437c
	I0501 04:16:41.313357    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:41.313357    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:41.313357    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:41.313357    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:41.313357    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:41 GMT
	I0501 04:16:41.313533    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:41.314156    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:41.314337    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:41.314337    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:41.314337    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:41.317487    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:41.317689    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:41.317689    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:41 GMT
	I0501 04:16:41.317689    4352 round_trippers.go:580]     Audit-Id: 59cc71cd-ea44-4838-a1fd-9950669ff826
	I0501 04:16:41.317689    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:41.317689    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:41.317689    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:41.317689    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:41.317823    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:41.318592    4352 pod_ready.go:102] pod "coredns-7db6d8ff4d-8w9hq" in "kube-system" namespace has status "Ready":"False"
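
The block above is one iteration of minikube's readiness poll: roughly every 500 ms it GETs the coredns Pod and its Node, and pod_ready.go:102 reports that the Pod's Ready condition is still False. A minimal sketch of the same pattern with client-go follows (illustrative only, not minikube's actual pod_ready.go; the kubeconfig path and the timeout are hypothetical placeholders, while the namespace and pod name are taken from this run):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the Pod carries condition Ready=True,
// the same condition pod_ready.go is logging as "False" above.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Hypothetical kubeconfig path; minikube builds its REST config differently.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll every 500 ms, matching the cadence visible in the timestamps above;
	// the 6-minute ceiling is an assumption of this sketch.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-7db6d8ff4d-8w9hq", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat Get errors as transient and keep polling
			}
			return isPodReady(pod), nil
		})
	if err != nil {
		fmt.Println("pod never became Ready:", err)
		return
	}
	fmt.Println("pod is Ready")
}

Swallowing transient Get errors instead of aborting is a design choice of the sketch: during a node restart the apiserver can briefly refuse connections, and a wait loop that fails fast would flake for the wrong reason.
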
	I0501 04:16:41.808173    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:41.808322    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:41.808464    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:41.808464    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:41.815045    4352 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 04:16:41.815112    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:41.815112    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:41.815112    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:41.815112    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:41 GMT
	I0501 04:16:41.815112    4352 round_trippers.go:580]     Audit-Id: 52ea6c9c-008b-4934-9b48-a5c1f3687391
	I0501 04:16:41.815112    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:41.815112    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:41.815414    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:41.816084    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:41.816084    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:41.816084    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:41.816084    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:41.819697    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:41.819697    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:41.819697    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:41 GMT
	I0501 04:16:41.819697    4352 round_trippers.go:580]     Audit-Id: c5d4e948-4cb1-4617-9286-d4f30655d689
	I0501 04:16:41.819697    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:41.819881    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:41.819881    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:41.819881    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:41.820104    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:42.321866    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:42.321866    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:42.321866    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:42.321866    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:42.326244    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:42.326244    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:42.326244    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:42.326244    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:42.326244    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:42.326244    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:42.326244    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:42 GMT
	I0501 04:16:42.326244    4352 round_trippers.go:580]     Audit-Id: 0e8063e0-0aa4-4965-970a-b0b4c167ede3
	I0501 04:16:42.326244    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:42.327345    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:42.327345    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:42.327345    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:42.327345    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:42.330059    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:42.330059    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:42.330059    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:42.330059    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:42.330059    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:42.330059    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:42.330059    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:42 GMT
	I0501 04:16:42.330059    4352 round_trippers.go:580]     Audit-Id: 7d52bd0c-3841-4d55-9a03-f8431dcca877
	I0501 04:16:42.330059    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:42.820689    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:42.820689    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:42.820689    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:42.820689    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:42.825639    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:42.825639    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:42.825639    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:42 GMT
	I0501 04:16:42.825639    4352 round_trippers.go:580]     Audit-Id: 3002f998-f3c1-433a-8032-bbd621a3f77e
	I0501 04:16:42.825639    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:42.825639    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:42.825639    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:42.825639    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:42.825639    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:42.826639    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:42.827163    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:42.827163    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:42.827403    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:42.831087    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:42.831087    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:42.831087    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:42.831935    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:42 GMT
	I0501 04:16:42.831935    4352 round_trippers.go:580]     Audit-Id: 513691f1-cdec-4433-9a9d-b7f8f3be5898
	I0501 04:16:42.831935    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:42.831935    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:42.831935    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:42.832311    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:43.319181    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:43.319181    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:43.319296    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:43.319296    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:43.323198    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:43.323351    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:43.323351    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:43.323351    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:43 GMT
	I0501 04:16:43.323351    4352 round_trippers.go:580]     Audit-Id: 2a706883-01e4-4692-8f8b-32c9ba64a60b
	I0501 04:16:43.323351    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:43.323351    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:43.323351    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:43.323592    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:43.324179    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:43.324290    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:43.324290    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:43.324290    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:43.327599    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:43.327599    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:43.327599    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:43 GMT
	I0501 04:16:43.327599    4352 round_trippers.go:580]     Audit-Id: af624360-28f1-453f-aa3c-401d700a0a93
	I0501 04:16:43.328015    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:43.328015    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:43.328015    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:43.328015    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:43.328109    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:43.328109    4352 pod_ready.go:102] pod "coredns-7db6d8ff4d-8w9hq" in "kube-system" namespace has status "Ready":"False"
	I0501 04:16:43.817904    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:43.817904    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:43.817904    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:43.818132    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:43.822650    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:43.822741    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:43.822804    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:43.822804    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:43.822804    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:43 GMT
	I0501 04:16:43.822804    4352 round_trippers.go:580]     Audit-Id: ddb84727-2e00-48f6-8ffa-79ba8e4791d5
	I0501 04:16:43.822804    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:43.822804    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:43.823028    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:43.823996    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:43.823996    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:43.823996    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:43.823996    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:43.829181    4352 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 04:16:43.829181    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:43.829181    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:43.829181    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:43.829181    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:43.829181    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:43 GMT
	I0501 04:16:43.829181    4352 round_trippers.go:580]     Audit-Id: 417d7059-96ba-4f56-a209-9f6a16f69b4e
	I0501 04:16:43.829181    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:43.829925    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:44.318312    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:44.318504    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:44.318504    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:44.318504    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:44.322091    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:44.322668    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:44.322668    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:44.322668    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:44.322668    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:44 GMT
	I0501 04:16:44.322668    4352 round_trippers.go:580]     Audit-Id: 12fc0bb8-c5b5-4443-9133-5b1663c3f1b7
	I0501 04:16:44.322668    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:44.322668    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:44.323677    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:44.324689    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:44.324769    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:44.324769    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:44.324769    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:44.326986    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:44.326986    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:44.326986    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:44 GMT
	I0501 04:16:44.326986    4352 round_trippers.go:580]     Audit-Id: 464d7361-3b96-4818-8877-f0104f516ffd
	I0501 04:16:44.326986    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:44.326986    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:44.326986    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:44.326986    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:44.328360    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:44.817570    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:44.817570    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:44.817570    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:44.817570    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:44.823563    4352 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 04:16:44.823818    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:44.823818    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:44.823818    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:44.823818    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:44.823818    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:44 GMT
	I0501 04:16:44.823818    4352 round_trippers.go:580]     Audit-Id: 210e9e98-d6d0-4330-8815-3a650144cfa1
	I0501 04:16:44.823818    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:44.823818    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:44.824850    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:44.824850    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:44.824850    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:44.824850    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:44.828514    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:44.828514    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:44.828514    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:44.828514    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:44.828514    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:44.828514    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:44.828514    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:44 GMT
	I0501 04:16:44.828514    4352 round_trippers.go:580]     Audit-Id: 77df7147-bfe8-4082-92ea-03b366483db2
	I0501 04:16:44.829519    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:45.320005    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:45.320005    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:45.320005    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:45.320005    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:45.325804    4352 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 04:16:45.326175    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:45.326264    4352 round_trippers.go:580]     Audit-Id: aa7771bc-11b1-473d-aeb7-178f186416cc
	I0501 04:16:45.326264    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:45.326315    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:45.326315    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:45.326315    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:45.326315    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:45 GMT
	I0501 04:16:45.326315    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:45.327058    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:45.327058    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:45.327058    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:45.327058    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:45.330661    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:45.331046    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:45.331125    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:45.331125    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:45 GMT
	I0501 04:16:45.331236    4352 round_trippers.go:580]     Audit-Id: a4164826-1b6e-4b46-b36d-ef015a8cd88d
	I0501 04:16:45.331410    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:45.331410    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:45.331410    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:45.331479    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:45.332241    4352 pod_ready.go:102] pod "coredns-7db6d8ff4d-8w9hq" in "kube-system" namespace has status "Ready":"False"
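
Note that every response in this loop carries the same X-Kubernetes-Pf-Flowschema-Uid and X-Kubernetes-Pf-Prioritylevel-Uid headers: API Priority and Fairness stamps each reply with the UIDs of the FlowSchema and PriorityLevelConfiguration that classified the request. A hedged sketch of mapping such a UID back to a FlowSchema name, assuming the flowcontrol.apiserver.k8s.io/v1 API is served and reusing the imports and clientset from the sketch above:

// mapFlowSchemaUID resolves an APF FlowSchema UID, as seen in the
// X-Kubernetes-Pf-Flowschema-Uid response header, to the FlowSchema's name.
// Illustrative helper only; it is not part of the test suite.
func mapFlowSchemaUID(ctx context.Context, cs *kubernetes.Clientset, uid string) (string, error) {
	fsList, err := cs.FlowcontrolV1().FlowSchemas().List(ctx, metav1.ListOptions{})
	if err != nil {
		return "", err
	}
	for _, fs := range fsList.Items {
		if string(fs.UID) == uid {
			return fs.Name, nil
		}
	}
	return "", fmt.Errorf("no FlowSchema with UID %q", uid)
}

Which FlowSchema the UID 0efc65b5-b651-4f8a-960e-a2d7d21397d5 resolves to is cluster-specific; the constant headers only confirm that all of these polling GETs were classified under the same flow.
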
	I0501 04:16:45.820161    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:45.820339    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:45.820339    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:45.820339    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:45.825275    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:45.825362    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:45.825362    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:45 GMT
	I0501 04:16:45.825362    4352 round_trippers.go:580]     Audit-Id: 647f5b17-4ce0-4a16-aed5-046c5f3c5e3a
	I0501 04:16:45.825362    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:45.825362    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:45.825362    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:45.825362    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:45.826467    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:45.827122    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:45.827122    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:45.827122    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:45.827122    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:45.830762    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:45.830762    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:45.831094    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:45.831094    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:45.831094    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:45 GMT
	I0501 04:16:45.831150    4352 round_trippers.go:580]     Audit-Id: 6b20194f-307a-40f0-abc0-0d907b959926
	I0501 04:16:45.831150    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:45.831150    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:45.831346    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:46.308848    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:46.308848    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:46.308848    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:46.309050    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:46.316225    4352 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 04:16:46.316225    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:46.316225    4352 round_trippers.go:580]     Audit-Id: 1aea1e84-6f10-4741-8c1e-80b01887d3f3
	I0501 04:16:46.316225    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:46.316225    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:46.316225    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:46.316225    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:46.316225    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:46 GMT
	I0501 04:16:46.316225    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:46.317459    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:46.317512    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:46.317512    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:46.317512    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:46.320811    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:46.320966    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:46.320966    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:46.320966    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:46.320966    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:46.320966    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:46 GMT
	I0501 04:16:46.321050    4352 round_trippers.go:580]     Audit-Id: 0ff294d3-ecc4-4031-b144-7884868291a8
	I0501 04:16:46.321050    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:46.321318    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:46.820126    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:46.820126    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:46.820342    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:46.820342    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:46.825098    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:46.825161    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:46.825161    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:46 GMT
	I0501 04:16:46.825161    4352 round_trippers.go:580]     Audit-Id: e5185d22-df2e-41fd-9371-0e5a8e2310c2
	I0501 04:16:46.825161    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:46.825232    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:46.825232    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:46.825232    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:46.825389    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:46.826213    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:46.826213    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:46.826273    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:46.826273    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:46.831078    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:46.831835    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:46.831835    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:46.831835    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:46 GMT
	I0501 04:16:46.831835    4352 round_trippers.go:580]     Audit-Id: 19165879-fa5c-4ca0-ac9e-bea727409296
	I0501 04:16:46.831835    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:46.831835    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:46.831911    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:46.832126    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:47.308311    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:47.308375    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:47.308438    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:47.308499    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:47.312357    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:47.312357    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:47.312357    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:47.312357    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:47.312357    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:47 GMT
	I0501 04:16:47.312357    4352 round_trippers.go:580]     Audit-Id: ab29da2f-90dc-4b15-a300-60e603bb44fd
	I0501 04:16:47.312357    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:47.312876    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:47.313062    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:47.313652    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:47.313652    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:47.313652    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:47.313652    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:47.316289    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:47.316289    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:47.316289    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:47.316289    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:47.316289    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:47.316289    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:47 GMT
	I0501 04:16:47.316289    4352 round_trippers.go:580]     Audit-Id: 20a36e96-a5ae-44f8-bea6-de011ecd7041
	I0501 04:16:47.316289    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:47.317483    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:47.815469    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:47.815533    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:47.815533    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:47.815533    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:47.824952    4352 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0501 04:16:47.824952    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:47.824952    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:47.824952    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:47.824952    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:47 GMT
	I0501 04:16:47.825233    4352 round_trippers.go:580]     Audit-Id: fc54fbbd-551a-40a4-bdf6-4990be9879d0
	I0501 04:16:47.825233    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:47.825233    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:47.825442    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:47.826202    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:47.826202    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:47.826264    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:47.826264    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:47.831050    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:47.831050    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:47.831050    4352 round_trippers.go:580]     Audit-Id: 0ec61068-d881-4fae-a1f6-a3c0ea65f3b9
	I0501 04:16:47.831050    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:47.831050    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:47.831050    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:47.831050    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:47.831050    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:47 GMT
	I0501 04:16:47.832043    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:47.832043    4352 pod_ready.go:102] pod "coredns-7db6d8ff4d-8w9hq" in "kube-system" namespace has status "Ready":"False"
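
The pod_ready lines mark minikube's readiness poll: it re-fetches the pod (and its node) on a roughly 500 ms cadence until the pod's Ready condition turns True or the 6m0s budget runs out. A rough client-go sketch of that check; the function names and poll parameters are illustrative, not minikube's own code, and wait.PollUntilContextTimeout assumes a recent k8s.io/apimachinery:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podIsReady reports whether the pod's PodReady condition is True, the
    // same status the pod_ready.go lines above are tracking.
    func podIsReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	// Poll every 500ms for up to 6 minutes, mirroring the cadence and
    	// budget visible in the log above.
    	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-7db6d8ff4d-8w9hq", metav1.GetOptions{})
    			if err != nil {
    				return false, err
    			}
    			return podIsReady(pod), nil
    		})
    	fmt.Println("ready:", err == nil)
    }
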
	I0501 04:16:48.309930    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:48.310166    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:48.310166    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:48.310166    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:48.313823    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:48.313823    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:48.313823    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:48.313823    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:48.313823    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:48.314183    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:48 GMT
	I0501 04:16:48.314183    4352 round_trippers.go:580]     Audit-Id: 74ee7c3f-466a-4357-8bc5-08168ccfca95
	I0501 04:16:48.314183    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:48.314399    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:48.315062    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:48.315062    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:48.315062    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:48.315062    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:48.320713    4352 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 04:16:48.320713    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:48.320713    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:48.320713    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:48.320713    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:48 GMT
	I0501 04:16:48.320713    4352 round_trippers.go:580]     Audit-Id: fb8a94c9-10d0-4be4-82f4-1cdff8d0aafc
	I0501 04:16:48.321145    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:48.321145    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:48.321403    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:48.816008    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:48.816008    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:48.816008    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:48.816008    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:48.820624    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:48.820624    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:48.820624    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:48.820624    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:48.820624    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:48.821307    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:48.821307    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:48 GMT
	I0501 04:16:48.821307    4352 round_trippers.go:580]     Audit-Id: 6650ae99-ce6b-4a01-8848-7fa28f69f5c2
	I0501 04:16:48.821574    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:48.822492    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:48.822569    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:48.822569    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:48.822569    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:48.826741    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:48.826741    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:48.826741    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:48.826741    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:48.826741    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:48 GMT
	I0501 04:16:48.826741    4352 round_trippers.go:580]     Audit-Id: 0221ed10-22a2-4f86-a0c9-9fa755095823
	I0501 04:16:48.826741    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:48.826741    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:48.828246    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:49.321034    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:49.321034    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:49.321034    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:49.321034    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:49.324469    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:49.325245    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:49.325245    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:49 GMT
	I0501 04:16:49.325245    4352 round_trippers.go:580]     Audit-Id: 2827705c-c665-449b-af3c-da67511d2506
	I0501 04:16:49.325245    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:49.325245    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:49.325245    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:49.325245    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:49.325906    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1973","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6790 chars]
	I0501 04:16:49.326765    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:49.326765    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:49.326765    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:49.326765    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:49.329347    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:49.329347    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:49.330007    4352 round_trippers.go:580]     Audit-Id: 516142ff-e58d-4e2e-8fb0-340127a3b761
	I0501 04:16:49.330007    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:49.330007    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:49.330007    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:49.330007    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:49.330007    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:49 GMT
	I0501 04:16:49.330307    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:49.330657    4352 pod_ready.go:92] pod "coredns-7db6d8ff4d-8w9hq" in "kube-system" namespace has status "Ready":"True"
	I0501 04:16:49.330815    4352 pod_ready.go:81] duration metric: took 32.5243737s for pod "coredns-7db6d8ff4d-8w9hq" in "kube-system" namespace to be "Ready" ...
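
At the ~500 ms interval visible in the timestamps, the 32.5 s wait for coredns-7db6d8ff4d-8w9hq just concluded corresponds to roughly 65 pod/node GET pairs, which is what later trips client-go's client-side throttling (see the request.go:629 lines further down). A rough out-of-band equivalent, assuming kubectl shares the same context, would be kubectl -n kube-system wait --for=condition=Ready pod/coredns-7db6d8ff4d-8w9hq --timeout=6m.
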
	I0501 04:16:49.330815    4352 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-x9zrw" in "kube-system" namespace to be "Ready" ...
	I0501 04:16:49.330932    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-x9zrw
	I0501 04:16:49.330932    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:49.330984    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:49.330984    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:49.338153    4352 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 04:16:49.338153    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:49.338153    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:49.338153    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:49 GMT
	I0501 04:16:49.338153    4352 round_trippers.go:580]     Audit-Id: fdd5b4ff-00f3-41fa-9f54-7de75e884cbf
	I0501 04:16:49.338153    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:49.338153    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:49.338153    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:49.338775    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-x9zrw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0b91b14d-bed3-4889-b193-db53daccd395","resourceVersion":"1980","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6790 chars]
	I0501 04:16:49.338853    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:49.338853    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:49.338853    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:49.338853    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:49.342177    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:49.342177    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:49.342177    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:49 GMT
	I0501 04:16:49.342177    4352 round_trippers.go:580]     Audit-Id: c19cfc68-2d1e-457f-8a84-2bd7acb1bde6
	I0501 04:16:49.342177    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:49.342262    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:49.342262    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:49.342262    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:49.342651    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:49.343297    4352 pod_ready.go:92] pod "coredns-7db6d8ff4d-x9zrw" in "kube-system" namespace has status "Ready":"True"
	I0501 04:16:49.343297    4352 pod_ready.go:81] duration metric: took 12.4822ms for pod "coredns-7db6d8ff4d-x9zrw" in "kube-system" namespace to be "Ready" ...
	I0501 04:16:49.343297    4352 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-289800" in "kube-system" namespace to be "Ready" ...
	I0501 04:16:49.343297    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-289800
	I0501 04:16:49.343297    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:49.343297    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:49.343297    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:49.347152    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:49.347152    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:49.347152    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:49.347152    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:49.347152    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:49 GMT
	I0501 04:16:49.347152    4352 round_trippers.go:580]     Audit-Id: 7ffb1de5-6949-49b9-8f16-0e18ce9bcaa4
	I0501 04:16:49.347152    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:49.347152    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:49.347746    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-289800","namespace":"kube-system","uid":"aaf534b6-9f4c-445d-afb9-bd225e1a77fd","resourceVersion":"1847","creationTimestamp":"2024-05-01T04:15:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.209.199:2379","kubernetes.io/config.hash":"b12e9024402f49cfac7440d6a2eaf42d","kubernetes.io/config.mirror":"b12e9024402f49cfac7440d6a2eaf42d","kubernetes.io/config.seen":"2024-05-01T04:15:36.949387188Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T04:15:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6171 chars]
	I0501 04:16:49.348320    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:49.348320    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:49.348320    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:49.348320    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:49.352033    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:49.352033    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:49.352033    4352 round_trippers.go:580]     Audit-Id: c215dc0b-3a6e-4cea-bd2a-5f9b94be5f30
	I0501 04:16:49.352033    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:49.352310    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:49.352310    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:49.352310    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:49.352310    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:49 GMT
	I0501 04:16:49.352430    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:49.353029    4352 pod_ready.go:92] pod "etcd-multinode-289800" in "kube-system" namespace has status "Ready":"True"
	I0501 04:16:49.353029    4352 pod_ready.go:81] duration metric: took 9.7319ms for pod "etcd-multinode-289800" in "kube-system" namespace to be "Ready" ...
	I0501 04:16:49.353029    4352 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-289800" in "kube-system" namespace to be "Ready" ...
	I0501 04:16:49.353029    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-289800
	I0501 04:16:49.353029    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:49.353029    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:49.353029    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:49.357659    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:49.357659    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:49.357659    4352 round_trippers.go:580]     Audit-Id: b6fbfba9-c32d-4b60-bf5b-da27cbc662c7
	I0501 04:16:49.357659    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:49.357659    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:49.357659    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:49.357659    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:49.357659    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:49 GMT
	I0501 04:16:49.358017    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-289800","namespace":"kube-system","uid":"0ee77673-e4b3-4fba-a855-ef6876337257","resourceVersion":"1869","creationTimestamp":"2024-05-01T04:15:42Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.28.209.199:8443","kubernetes.io/config.hash":"8b70cd8d31103a1cfca45e9856766786","kubernetes.io/config.mirror":"8b70cd8d31103a1cfca45e9856766786","kubernetes.io/config.seen":"2024-05-01T04:15:36.865099961Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T04:15:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7705 chars]
	I0501 04:16:49.358796    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:49.358796    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:49.358880    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:49.358880    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:49.361667    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:49.361667    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:49.361667    4352 round_trippers.go:580]     Audit-Id: ed5e001c-d640-4349-8945-58c4c6ba5b0e
	I0501 04:16:49.361667    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:49.361920    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:49.361920    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:49.361920    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:49.361920    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:49 GMT
	I0501 04:16:49.361920    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:49.361920    4352 pod_ready.go:92] pod "kube-apiserver-multinode-289800" in "kube-system" namespace has status "Ready":"True"
	I0501 04:16:49.361920    4352 pod_ready.go:81] duration metric: took 8.8909ms for pod "kube-apiserver-multinode-289800" in "kube-system" namespace to be "Ready" ...
	I0501 04:16:49.361920    4352 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-289800" in "kube-system" namespace to be "Ready" ...
	I0501 04:16:49.361920    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-289800
	I0501 04:16:49.361920    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:49.361920    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:49.361920    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:49.364649    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:49.365660    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:49.365660    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:49.365660    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:49.365660    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:49.365660    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:49 GMT
	I0501 04:16:49.365737    4352 round_trippers.go:580]     Audit-Id: 8646db6a-9c0a-43b7-a07e-1216025e6d77
	I0501 04:16:49.365737    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:49.366135    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-289800","namespace":"kube-system","uid":"fd3e5c6f-55cb-47c8-b0bc-c9b0dbe3b318","resourceVersion":"1851","creationTimestamp":"2024-05-01T03:52:15Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a17001fd2508d58fea9b1ae465b65254","kubernetes.io/config.mirror":"a17001fd2508d58fea9b1ae465b65254","kubernetes.io/config.seen":"2024-05-01T03:52:15.688763845Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7475 chars]
	I0501 04:16:49.366804    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:49.366804    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:49.366804    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:49.366865    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:49.369511    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:49.369511    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:49.369511    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:49 GMT
	I0501 04:16:49.369511    4352 round_trippers.go:580]     Audit-Id: ced06db6-05fd-4fa1-b25d-1a2b3ee345de
	I0501 04:16:49.369823    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:49.369823    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:49.369823    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:49.369823    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:49.370161    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:49.370603    4352 pod_ready.go:92] pod "kube-controller-manager-multinode-289800" in "kube-system" namespace has status "Ready":"True"
	I0501 04:16:49.370651    4352 pod_ready.go:81] duration metric: took 8.7312ms for pod "kube-controller-manager-multinode-289800" in "kube-system" namespace to be "Ready" ...
	I0501 04:16:49.370651    4352 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bp9zx" in "kube-system" namespace to be "Ready" ...
	I0501 04:16:49.524357    4352 request.go:629] Waited for 153.4057ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bp9zx
	I0501 04:16:49.524480    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bp9zx
	I0501 04:16:49.524480    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:49.524480    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:49.524480    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:49.528150    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:49.528150    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:49.528150    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:49.528150    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:49.528150    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:49 GMT
	I0501 04:16:49.528150    4352 round_trippers.go:580]     Audit-Id: 448f78a3-3ad6-4831-b469-33fd74811230
	I0501 04:16:49.528150    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:49.528150    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:49.529102    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bp9zx","generateName":"kube-proxy-","namespace":"kube-system","uid":"aba82e50-b8f8-40b4-b08a-6d045314d6b6","resourceVersion":"1834","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"342b26dc-6828-4478-b155-fee8821fc15e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"342b26dc-6828-4478-b155-fee8821fc15e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6039 chars]
	I0501 04:16:49.726350    4352 request.go:629] Waited for 196.4559ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:49.726350    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:49.726350    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:49.726350    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:49.726350    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:49.731133    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:49.731133    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:49.731133    4352 round_trippers.go:580]     Audit-Id: da624bcc-5370-43bc-9483-bce41ae6ad1d
	I0501 04:16:49.731133    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:49.731133    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:49.731133    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:49.731133    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:49.731133    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:49 GMT
	I0501 04:16:49.731776    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:49.731776    4352 pod_ready.go:92] pod "kube-proxy-bp9zx" in "kube-system" namespace has status "Ready":"True"
	I0501 04:16:49.732330    4352 pod_ready.go:81] duration metric: took 361.1218ms for pod "kube-proxy-bp9zx" in "kube-system" namespace to be "Ready" ...
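
The request.go:629 "Waited for ... due to client-side throttling, not priority and fairness" lines are client-go's client-side rate limiter kicking in: after the burst of pod/node GETs above, requests start being delayed locally before they ever reach the API server's priority-and-fairness machinery. The limiter is configured on the rest.Config; a minimal sketch with illustrative values (client-go's defaults are a QPS of 5 and a Burst of 10):

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	// Raising these two fields loosens the client-side throttle that
    	// produced the "Waited for ..." messages; the values below are
    	// illustrative only, not minikube's configuration.
    	cfg.QPS = 50
    	cfg.Burst = 100
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	fmt.Printf("clientset configured: %T\n", cs)
    }
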
	I0501 04:16:49.732330    4352 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g8mbm" in "kube-system" namespace to be "Ready" ...
	I0501 04:16:49.929639    4352 request.go:629] Waited for 197.0521ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g8mbm
	I0501 04:16:49.929844    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g8mbm
	I0501 04:16:49.929844    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:49.929907    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:49.929929    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:49.934273    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:49.934273    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:49.934273    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:49.934686    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:49.934686    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:49.934686    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:49 GMT
	I0501 04:16:49.934686    4352 round_trippers.go:580]     Audit-Id: b1b182d0-ac4d-416b-8348-8854216aeac0
	I0501 04:16:49.934686    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:49.935287    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g8mbm","generateName":"kube-proxy-","namespace":"kube-system","uid":"ef0e1817-6682-4b8f-affa-c10021247006","resourceVersion":"1723","creationTimestamp":"2024-05-01T04:00:13Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"342b26dc-6828-4478-b155-fee8821fc15e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T04:00:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"342b26dc-6828-4478-b155-fee8821fc15e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6067 chars]
	I0501 04:16:50.130596    4352 request.go:629] Waited for 194.3651ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.209.199:8443/api/v1/nodes/multinode-289800-m03
	I0501 04:16:50.130692    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800-m03
	I0501 04:16:50.130692    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:50.130692    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:50.130692    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:50.135295    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:50.135295    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:50.135295    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:50.135295    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:50.135295    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:50.135295    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:50 GMT
	I0501 04:16:50.135295    4352 round_trippers.go:580]     Audit-Id: c18cd5b5-567b-46e6-a05c-1003a8919fae
	I0501 04:16:50.135295    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:50.135426    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800-m03","uid":"851df850-b222-4fa2-aca7-3694c4d89ab5","resourceVersion":"1905","creationTimestamp":"2024-05-01T04:11:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_01T04_11_04_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-01T04:11:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4400 chars]
	I0501 04:16:50.135964    4352 pod_ready.go:97] node "multinode-289800-m03" hosting pod "kube-proxy-g8mbm" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-289800-m03" has status "Ready":"Unknown"
	I0501 04:16:50.136013    4352 pod_ready.go:81] duration metric: took 403.6799ms for pod "kube-proxy-g8mbm" in "kube-system" namespace to be "Ready" ...
	E0501 04:16:50.136013    4352 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-289800-m03" hosting pod "kube-proxy-g8mbm" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-289800-m03" has status "Ready":"Unknown"
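	The WaitExtra error above shows why this wait is skipped rather than failed: before waiting on a pod, the loop resolves the hosting node and checks its Ready condition, and a node reporting "Unknown" makes any pod-Ready wait meaningless. A hedged sketch of such a node check (minikube's actual logic lives in pod_ready.go and may differ in detail):

	package sketch

	import corev1 "k8s.io/api/core/v1"

	// nodeReady reports whether a node's Ready condition is "True". When it
	// returns false - e.g. the "Unknown" status logged for
	// multinode-289800-m03 above - waiting on pods hosted there is skipped.
	func nodeReady(node *corev1.Node) bool {
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}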
	I0501 04:16:50.136013    4352 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rlzp8" in "kube-system" namespace to be "Ready" ...
	I0501 04:16:50.334046    4352 request.go:629] Waited for 197.6293ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rlzp8
	I0501 04:16:50.334137    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rlzp8
	I0501 04:16:50.334137    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:50.334292    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:50.334292    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:50.337674    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:50.337674    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:50.337674    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:50.337674    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:50 GMT
	I0501 04:16:50.337674    4352 round_trippers.go:580]     Audit-Id: 0a238c3a-6896-4a17-8f27-02c106c4e45b
	I0501 04:16:50.337674    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:50.337674    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:50.337674    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:50.338638    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rlzp8","generateName":"kube-proxy-","namespace":"kube-system","uid":"b37d8d5d-a7cb-4848-a8a2-11d9761e08d6","resourceVersion":"1957","creationTimestamp":"2024-05-01T03:55:27Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"342b26dc-6828-4478-b155-fee8821fc15e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:55:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"342b26dc-6828-4478-b155-fee8821fc15e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6067 chars]
	I0501 04:16:50.535485    4352 request.go:629] Waited for 195.8126ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.209.199:8443/api/v1/nodes/multinode-289800-m02
	I0501 04:16:50.535603    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800-m02
	I0501 04:16:50.535603    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:50.535603    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:50.535603    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:50.539440    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:50.539669    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:50.539669    4352 round_trippers.go:580]     Audit-Id: de58e7b6-2272-48cc-80c4-c7bf12d53af9
	I0501 04:16:50.539669    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:50.539669    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:50.539669    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:50.539669    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:50.539669    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:50 GMT
	I0501 04:16:50.540066    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800-m02","uid":"9e630c9e-9cc6-42af-89de-135fca044670","resourceVersion":"1961","creationTimestamp":"2024-05-01T03:55:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_01T03_55_27_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:55:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4583 chars]
	I0501 04:16:50.541332    4352 pod_ready.go:97] node "multinode-289800-m02" hosting pod "kube-proxy-rlzp8" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-289800-m02" has status "Ready":"Unknown"
	I0501 04:16:50.541332    4352 pod_ready.go:81] duration metric: took 405.316ms for pod "kube-proxy-rlzp8" in "kube-system" namespace to be "Ready" ...
	E0501 04:16:50.541332    4352 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-289800-m02" hosting pod "kube-proxy-rlzp8" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-289800-m02" has status "Ready":"Unknown"
	I0501 04:16:50.541332    4352 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-289800" in "kube-system" namespace to be "Ready" ...
	I0501 04:16:50.721420    4352 request.go:629] Waited for 179.9116ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-289800
	I0501 04:16:50.721781    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-289800
	I0501 04:16:50.722009    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:50.722054    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:50.722093    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:50.727766    4352 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 04:16:50.727766    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:50.727766    4352 round_trippers.go:580]     Audit-Id: dab53051-f4be-4d88-b09d-de99470205d1
	I0501 04:16:50.727766    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:50.727766    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:50.727766    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:50.727766    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:50.727766    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:50 GMT
	I0501 04:16:50.727766    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-289800","namespace":"kube-system","uid":"c7518f03-993b-432f-b742-8805dd2167a7","resourceVersion":"1859","creationTimestamp":"2024-05-01T03:52:15Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"44d7830a7c97b8c7e460c0508d02be4e","kubernetes.io/config.mirror":"44d7830a7c97b8c7e460c0508d02be4e","kubernetes.io/config.seen":"2024-05-01T03:52:15.688771544Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5205 chars]
	I0501 04:16:50.921262    4352 request.go:629] Waited for 192.5213ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:50.921547    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:50.921635    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:50.921635    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:50.921635    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:50.926030    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:50.926030    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:50.926030    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:50.926030    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:50.926287    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:50 GMT
	I0501 04:16:50.926287    4352 round_trippers.go:580]     Audit-Id: ba356631-09f9-4fbd-ac9c-00af14bd5065
	I0501 04:16:50.926287    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:50.926287    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:50.926531    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:50.927168    4352 pod_ready.go:92] pod "kube-scheduler-multinode-289800" in "kube-system" namespace has status "Ready":"True"
	I0501 04:16:50.927168    4352 pod_ready.go:81] duration metric: took 385.8333ms for pod "kube-scheduler-multinode-289800" in "kube-system" namespace to be "Ready" ...
	I0501 04:16:50.927168    4352 pod_ready.go:38] duration metric: took 34.1328801s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
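	The pod_ready phase that ends here (34.13s of extra waiting across kube-dns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, and kube-scheduler pods) is in essence a poll-until-timeout loop per pod. A rough sketch with client-go and apimachinery's wait helpers; waitPodReady is a hypothetical name, not minikube's function:

	package sketch

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitPodReady polls one pod until its Ready condition is True or the
	// 6m0s budget is spent, approximating the per-pod waits logged by
	// pod_ready.go above.
	func waitPodReady(cs kubernetes.Interface, ns, name string) error {
		return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // transient API errors: keep polling
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}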
	I0501 04:16:50.927168    4352 api_server.go:52] waiting for apiserver process to appear ...
	I0501 04:16:50.938181    4352 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0501 04:16:50.965048    4352 command_runner.go:130] > 18cd30f3ad28
	I0501 04:16:50.965141    4352 logs.go:276] 1 containers: [18cd30f3ad28]
	I0501 04:16:50.978908    4352 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0501 04:16:51.004860    4352 command_runner.go:130] > 34892fdb6898
	I0501 04:16:51.005091    4352 logs.go:276] 1 containers: [34892fdb6898]
	I0501 04:16:51.017307    4352 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0501 04:16:51.044094    4352 command_runner.go:130] > b8a9b405d76b
	I0501 04:16:51.044170    4352 command_runner.go:130] > 8a0208aeafcf
	I0501 04:16:51.044170    4352 command_runner.go:130] > 15c4496e3a9f
	I0501 04:16:51.044170    4352 command_runner.go:130] > 3e8d5ff9a9e4
	I0501 04:16:51.044170    4352 logs.go:276] 4 containers: [b8a9b405d76b 8a0208aeafcf 15c4496e3a9f 3e8d5ff9a9e4]
	I0501 04:16:51.055977    4352 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0501 04:16:51.080738    4352 command_runner.go:130] > eaf69fce5ee3
	I0501 04:16:51.080738    4352 command_runner.go:130] > 06f1f84bfde1
	I0501 04:16:51.080738    4352 logs.go:276] 2 containers: [eaf69fce5ee3 06f1f84bfde1]
	I0501 04:16:51.090727    4352 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0501 04:16:51.117757    4352 command_runner.go:130] > 3efcc92f817e
	I0501 04:16:51.117757    4352 command_runner.go:130] > 502684407b0c
	I0501 04:16:51.117757    4352 logs.go:276] 2 containers: [3efcc92f817e 502684407b0c]
	I0501 04:16:51.130211    4352 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0501 04:16:51.160199    4352 command_runner.go:130] > 66a1b89e6733
	I0501 04:16:51.160199    4352 command_runner.go:130] > 4b62556f40be
	I0501 04:16:51.160199    4352 logs.go:276] 2 containers: [66a1b89e6733 4b62556f40be]
	I0501 04:16:51.173257    4352 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0501 04:16:51.199011    4352 command_runner.go:130] > b7cae3f6b88b
	I0501 04:16:51.199121    4352 command_runner.go:130] > 6d5f881ef398
	I0501 04:16:51.199121    4352 logs.go:276] 2 containers: [b7cae3f6b88b 6d5f881ef398]
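	Each logs.go:276 summary above is built from one docker ps invocation per control-plane component, filtering on the k8s_<name> container-name prefix and printing only IDs. An equivalent local sketch in Go (minikube actually runs these commands inside the VM over SSH via ssh_runner):

	package sketch

	import (
		"os/exec"
		"strings"
	)

	// listKubeContainers returns the IDs of containers whose names start
	// with k8s_<component>, mirroring the docker ps commands logged above.
	func listKubeContainers(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}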
	I0501 04:16:51.199121    4352 logs.go:123] Gathering logs for etcd [34892fdb6898] ...
	I0501 04:16:51.199121    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34892fdb6898"
	I0501 04:16:51.230530    4352 command_runner.go:130] ! {"level":"warn","ts":"2024-05-01T04:15:38.997417Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0501 04:16:51.231068    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:38.998475Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.28.209.199:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.28.209.199:2380","--initial-cluster=multinode-289800=https://172.28.209.199:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.28.209.199:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.28.209.199:2380","--name=multinode-289800","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0501 04:16:51.231127    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:38.998558Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0501 04:16:51.231178    4352 command_runner.go:130] ! {"level":"warn","ts":"2024-05-01T04:15:38.998588Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0501 04:16:51.231178    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:38.998599Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.28.209.199:2380"]}
	I0501 04:16:51.231251    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:38.998626Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0501 04:16:51.231305    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.006405Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.28.209.199:2379"]}
	I0501 04:16:51.231410    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.007658Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-289800","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.28.209.199:2380"],"listen-peer-urls":["https://172.28.209.199:2380"],"advertise-client-urls":["https://172.28.209.199:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.28.209.199:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0501 04:16:51.231410    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.030589Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"21.951987ms"}
	I0501 04:16:51.231476    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.081537Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0501 04:16:51.231476    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.104039Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"d720844a1e03b483","local-member-id":"fe483b81e7b7d166","commit-index":2020}
	I0501 04:16:51.231542    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.104878Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 switched to configuration voters=()"}
	I0501 04:16:51.231542    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.105251Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 became follower at term 2"}
	I0501 04:16:51.231542    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.105519Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft fe483b81e7b7d166 [peers: [], term: 2, commit: 2020, applied: 0, lastindex: 2020, lastterm: 2]"}
	I0501 04:16:51.231605    4352 command_runner.go:130] ! {"level":"warn","ts":"2024-05-01T04:15:39.121672Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0501 04:16:51.231605    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.127575Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1352}
	I0501 04:16:51.231605    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.132217Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1744}
	I0501 04:16:51.231675    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.144206Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0501 04:16:51.231724    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.15993Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"fe483b81e7b7d166","timeout":"7s"}
	I0501 04:16:51.231724    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.160468Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"fe483b81e7b7d166"}
	I0501 04:16:51.231724    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.160545Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"fe483b81e7b7d166","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	I0501 04:16:51.231724    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.16402Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	I0501 04:16:51.231724    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.165851Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0501 04:16:51.231819    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.166004Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0501 04:16:51.231819    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.166021Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0501 04:16:51.231819    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.169808Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 switched to configuration voters=(18322960513081266534)"}
	I0501 04:16:51.231886    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.1699Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d720844a1e03b483","local-member-id":"fe483b81e7b7d166","added-peer-id":"fe483b81e7b7d166","added-peer-peer-urls":["https://172.28.209.152:2380"]}
	I0501 04:16:51.231928    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.172064Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d720844a1e03b483","local-member-id":"fe483b81e7b7d166","cluster-version":"3.5"}
	I0501 04:16:51.231950    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.172365Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0501 04:16:51.231994    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.184058Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0501 04:16:51.232051    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.184564Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"fe483b81e7b7d166","initial-advertise-peer-urls":["https://172.28.209.199:2380"],"listen-peer-urls":["https://172.28.209.199:2380"],"advertise-client-urls":["https://172.28.209.199:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.28.209.199:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0501 04:16:51.232051    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.184741Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0501 04:16:51.232114    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.185843Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.28.209.199:2380"}
	I0501 04:16:51.232114    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.185973Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.28.209.199:2380"}
	I0501 04:16:51.232114    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.708419Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 is starting a new election at term 2"}
	I0501 04:16:51.232180    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.70848Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 became pre-candidate at term 2"}
	I0501 04:16:51.232180    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.708514Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 received MsgPreVoteResp from fe483b81e7b7d166 at term 2"}
	I0501 04:16:51.232180    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.70853Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 became candidate at term 3"}
	I0501 04:16:51.232246    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.708552Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 received MsgVoteResp from fe483b81e7b7d166 at term 3"}
	I0501 04:16:51.232246    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.708562Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 became leader at term 3"}
	I0501 04:16:51.232304    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.708576Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fe483b81e7b7d166 elected leader fe483b81e7b7d166 at term 3"}
	I0501 04:16:51.232304    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.716912Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"fe483b81e7b7d166","local-member-attributes":"{Name:multinode-289800 ClientURLs:[https://172.28.209.199:2379]}","request-path":"/0/members/fe483b81e7b7d166/attributes","cluster-id":"d720844a1e03b483","publish-timeout":"7s"}
	I0501 04:16:51.232304    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.717064Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0501 04:16:51.232444    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.724343Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0501 04:16:51.232484    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.729592Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.28.209.199:2379"}
	I0501 04:16:51.232531    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.730744Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0501 04:16:51.232531    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.731057Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0501 04:16:51.232589    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.732147Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0501 04:16:51.245319    4352 logs.go:123] Gathering logs for coredns [b8a9b405d76b] ...
	I0501 04:16:51.245412    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8a9b405d76b"
	I0501 04:16:51.275786    4352 command_runner.go:130] > .:53
	I0501 04:16:51.275786    4352 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 93f4fe825695a41d5751760d66496ca9593f0bf8b9ec4239a6fbc33c30b6f070cfe5a066c5977052ab0ea80abbafa6083758e385108cb741ae48dc4b780ff0f9
	I0501 04:16:51.275786    4352 command_runner.go:130] > CoreDNS-1.11.1
	I0501 04:16:51.275786    4352 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0501 04:16:51.275786    4352 command_runner.go:130] > [INFO] 127.0.0.1:40469 - 32708 "HINFO IN 1085250392681766432.1461243850492468212. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.135567722s
	I0501 04:16:51.275786    4352 logs.go:123] Gathering logs for kube-proxy [3efcc92f817e] ...
	I0501 04:16:51.275786    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3efcc92f817e"
	I0501 04:16:51.303222    4352 command_runner.go:130] ! I0501 04:15:45.132138       1 server_linux.go:69] "Using iptables proxy"
	I0501 04:16:51.303222    4352 command_runner.go:130] ! I0501 04:15:45.231202       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.28.209.199"]
	I0501 04:16:51.303222    4352 command_runner.go:130] ! I0501 04:15:45.502838       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0501 04:16:51.303803    4352 command_runner.go:130] ! I0501 04:15:45.506945       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0501 04:16:51.303803    4352 command_runner.go:130] ! I0501 04:15:45.506980       1 server_linux.go:165] "Using iptables Proxier"
	I0501 04:16:51.303856    4352 command_runner.go:130] ! I0501 04:15:45.527138       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0501 04:16:51.303880    4352 command_runner.go:130] ! I0501 04:15:45.530735       1 server.go:872] "Version info" version="v1.30.0"
	I0501 04:16:51.303880    4352 command_runner.go:130] ! I0501 04:15:45.530796       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:16:51.303923    4352 command_runner.go:130] ! I0501 04:15:45.533247       1 config.go:192] "Starting service config controller"
	I0501 04:16:51.303923    4352 command_runner.go:130] ! I0501 04:15:45.547850       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0501 04:16:51.303982    4352 command_runner.go:130] ! I0501 04:15:45.533551       1 config.go:101] "Starting endpoint slice config controller"
	I0501 04:16:51.303982    4352 command_runner.go:130] ! I0501 04:15:45.549105       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0501 04:16:51.304046    4352 command_runner.go:130] ! I0501 04:15:45.550003       1 config.go:319] "Starting node config controller"
	I0501 04:16:51.304046    4352 command_runner.go:130] ! I0501 04:15:45.550016       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0501 04:16:51.304046    4352 command_runner.go:130] ! I0501 04:15:45.650245       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0501 04:16:51.304102    4352 command_runner.go:130] ! I0501 04:15:45.650488       1 shared_informer.go:320] Caches are synced for node config
	I0501 04:16:51.304102    4352 command_runner.go:130] ! I0501 04:15:45.650691       1 shared_informer.go:320] Caches are synced for service config
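	The paired "Waiting for caches to sync" / "Caches are synced" lines in the kube-proxy log are client-go's shared-informer startup handshake: each config controller blocks until its informer's local cache has caught up with the apiserver before programming any rules. A compilable sketch of that handshake for a single informer; the names are illustrative, as kube-proxy wires this through its own config package:

	package sketch

	import (
		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/cache"
	)

	// waitForServiceCache starts a Services informer and blocks until its
	// cache is synced - the moment kube-proxy logs shared_informer.go:320
	// "Caches are synced" above.
	func waitForServiceCache(cs kubernetes.Interface, stop <-chan struct{}) bool {
		factory := informers.NewSharedInformerFactory(cs, 0)
		inf := factory.Core().V1().Services().Informer()
		factory.Start(stop)
		return cache.WaitForCacheSync(stop, inf.HasSynced)
	}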
	I0501 04:16:51.306103    4352 logs.go:123] Gathering logs for Docker ...
	I0501 04:16:51.306223    4352 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0501 04:16:51.346738    4352 command_runner.go:130] > May 01 04:14:08 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0501 04:16:51.346817    4352 command_runner.go:130] > May 01 04:14:08 minikube cri-dockerd[222]: time="2024-05-01T04:14:08Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0501 04:16:51.346817    4352 command_runner.go:130] > May 01 04:14:08 minikube cri-dockerd[222]: time="2024-05-01T04:14:08Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0501 04:16:51.346817    4352 command_runner.go:130] > May 01 04:14:08 minikube cri-dockerd[222]: time="2024-05-01T04:14:08Z" level=info msg="Start docker client with request timeout 0s"
	I0501 04:16:51.346972    4352 command_runner.go:130] > May 01 04:14:08 minikube cri-dockerd[222]: time="2024-05-01T04:14:08Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0501 04:16:51.346972    4352 command_runner.go:130] > May 01 04:14:09 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0501 04:16:51.346972    4352 command_runner.go:130] > May 01 04:14:09 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0501 04:16:51.346972    4352 command_runner.go:130] > May 01 04:14:09 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0501 04:16:51.346972    4352 command_runner.go:130] > May 01 04:14:11 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0501 04:16:51.347083    4352 command_runner.go:130] > May 01 04:14:11 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0501 04:16:51.347211    4352 command_runner.go:130] > May 01 04:14:11 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0501 04:16:51.347283    4352 command_runner.go:130] > May 01 04:14:11 minikube cri-dockerd[414]: time="2024-05-01T04:14:11Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0501 04:16:51.347340    4352 command_runner.go:130] > May 01 04:14:11 minikube cri-dockerd[414]: time="2024-05-01T04:14:11Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0501 04:16:51.347340    4352 command_runner.go:130] > May 01 04:14:11 minikube cri-dockerd[414]: time="2024-05-01T04:14:11Z" level=info msg="Start docker client with request timeout 0s"
	I0501 04:16:51.347415    4352 command_runner.go:130] > May 01 04:14:11 minikube cri-dockerd[414]: time="2024-05-01T04:14:11Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0501 04:16:51.347496    4352 command_runner.go:130] > May 01 04:14:11 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0501 04:16:51.347540    4352 command_runner.go:130] > May 01 04:14:11 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0501 04:16:51.347540    4352 command_runner.go:130] > May 01 04:14:11 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0501 04:16:51.347581    4352 command_runner.go:130] > May 01 04:14:13 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0501 04:16:51.347599    4352 command_runner.go:130] > May 01 04:14:13 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0501 04:16:51.347599    4352 command_runner.go:130] > May 01 04:14:13 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0501 04:16:51.347599    4352 command_runner.go:130] > May 01 04:14:13 minikube cri-dockerd[423]: time="2024-05-01T04:14:13Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0501 04:16:51.347599    4352 command_runner.go:130] > May 01 04:14:13 minikube cri-dockerd[423]: time="2024-05-01T04:14:13Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0501 04:16:51.347675    4352 command_runner.go:130] > May 01 04:14:13 minikube cri-dockerd[423]: time="2024-05-01T04:14:13Z" level=info msg="Start docker client with request timeout 0s"
	I0501 04:16:51.347867    4352 command_runner.go:130] > May 01 04:14:13 minikube cri-dockerd[423]: time="2024-05-01T04:14:13Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0501 04:16:51.347941    4352 command_runner.go:130] > May 01 04:14:13 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0501 04:16:51.348012    4352 command_runner.go:130] > May 01 04:14:13 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0501 04:16:51.348056    4352 command_runner.go:130] > May 01 04:14:13 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0501 04:16:51.348128    4352 command_runner.go:130] > May 01 04:14:16 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0501 04:16:51.348174    4352 command_runner.go:130] > May 01 04:14:16 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0501 04:16:51.348174    4352 command_runner.go:130] > May 01 04:14:16 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0501 04:16:51.348174    4352 command_runner.go:130] > May 01 04:14:16 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0501 04:16:51.348174    4352 command_runner.go:130] > May 01 04:14:16 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0501 04:16:51.348214    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 systemd[1]: Starting Docker Application Container Engine...
	I0501 04:16:51.348214    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[651]: time="2024-05-01T04:14:59.653438562Z" level=info msg="Starting up"
	I0501 04:16:51.348299    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[651]: time="2024-05-01T04:14:59.657791992Z" level=info msg="containerd not running, starting managed containerd"
	I0501 04:16:51.348332    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[651]: time="2024-05-01T04:14:59.663198880Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=657
	I0501 04:16:51.348332    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.702542137Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0501 04:16:51.348332    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.732549261Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0501 04:16:51.348412    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.732711054Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0501 04:16:51.348439    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.732864148Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0501 04:16:51.348439    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.732947945Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0501 04:16:51.348439    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.734019203Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0501 04:16:51.348521    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.734463486Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0501 04:16:51.348546    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.735002764Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0501 04:16:51.348546    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.735178358Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0501 04:16:51.348666    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.735234755Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0501 04:16:51.348706    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.735254555Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0501 04:16:51.348706    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.735695937Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0501 04:16:51.348777    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.736590002Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0501 04:16:51.348823    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.739236298Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0501 04:16:51.348862    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.739286896Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0501 04:16:51.348962    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.739479489Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0501 04:16:51.349004    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.739575785Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0501 04:16:51.349078    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.740111064Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0501 04:16:51.349104    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.740186861Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0501 04:16:51.349104    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.740203361Z" level=info msg="metadata content store policy set" policy=shared
	I0501 04:16:51.349104    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.747848861Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0501 04:16:51.349104    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.747973456Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0501 04:16:51.349188    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748003155Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0501 04:16:51.349188    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748021254Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0501 04:16:51.349234    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748087351Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0501 04:16:51.349234    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748176348Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0501 04:16:51.349290    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748553033Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0501 04:16:51.349314    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748726426Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0501 04:16:51.349314    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748830822Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0501 04:16:51.349387    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748853521Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0501 04:16:51.349414    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748872121Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0501 04:16:51.349414    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748887020Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0501 04:16:51.349480    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748901420Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0501 04:16:51.349480    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748916819Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0501 04:16:51.349480    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748932318Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0501 04:16:51.349480    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748946618Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0501 04:16:51.349480    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748960717Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0501 04:16:51.349480    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748974817Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0501 04:16:51.349480    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748996916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.349480    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749013215Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.349480    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749071613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.349480    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749094412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.349480    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749109411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.349480    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749127511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.349480    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749141410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.349480    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749156310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.349480    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749171209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.349480    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749188008Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.349480    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749210407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.349480    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749227507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.349480    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749241106Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.349480    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749261705Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0501 04:16:51.349480    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749287004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.349480    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749377501Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.349480    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749401900Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0501 04:16:51.349480    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749458198Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0501 04:16:51.350019    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749553894Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0501 04:16:51.350019    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749626691Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0501 04:16:51.350094    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749759886Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0501 04:16:51.350094    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749839283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.350094    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749953278Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0501 04:16:51.350094    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749974077Z" level=info msg="NRI interface is disabled by configuration."
	I0501 04:16:51.350198    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.750421860Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0501 04:16:51.350198    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.750811045Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0501 04:16:51.350198    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.751024636Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0501 04:16:51.350262    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.751103833Z" level=info msg="containerd successfully booted in 0.052926s"
	I0501 04:16:51.350262    4352 command_runner.go:130] > May 01 04:15:00 multinode-289800 dockerd[651]: time="2024-05-01T04:15:00.725111442Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0501 04:16:51.350262    4352 command_runner.go:130] > May 01 04:15:00 multinode-289800 dockerd[651]: time="2024-05-01T04:15:00.993003995Z" level=info msg="Loading containers: start."
	I0501 04:16:51.350325    4352 command_runner.go:130] > May 01 04:15:01 multinode-289800 dockerd[651]: time="2024-05-01T04:15:01.418709237Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0501 04:16:51.350325    4352 command_runner.go:130] > May 01 04:15:01 multinode-289800 dockerd[651]: time="2024-05-01T04:15:01.511990518Z" level=info msg="Loading containers: done."
	I0501 04:16:51.350325    4352 command_runner.go:130] > May 01 04:15:01 multinode-289800 dockerd[651]: time="2024-05-01T04:15:01.539659513Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0501 04:16:51.350392    4352 command_runner.go:130] > May 01 04:15:01 multinode-289800 dockerd[651]: time="2024-05-01T04:15:01.540534438Z" level=info msg="Daemon has completed initialization"
	I0501 04:16:51.350392    4352 command_runner.go:130] > May 01 04:15:01 multinode-289800 dockerd[651]: time="2024-05-01T04:15:01.598935417Z" level=info msg="API listen on [::]:2376"
	I0501 04:16:51.350450    4352 command_runner.go:130] > May 01 04:15:01 multinode-289800 systemd[1]: Started Docker Application Container Engine.
	I0501 04:16:51.350450    4352 command_runner.go:130] > May 01 04:15:01 multinode-289800 dockerd[651]: time="2024-05-01T04:15:01.599463032Z" level=info msg="API listen on /var/run/docker.sock"
	I0501 04:16:51.350450    4352 command_runner.go:130] > May 01 04:15:27 multinode-289800 dockerd[651]: time="2024-05-01T04:15:27.764446334Z" level=info msg="Processing signal 'terminated'"
	I0501 04:16:51.350506    4352 command_runner.go:130] > May 01 04:15:27 multinode-289800 systemd[1]: Stopping Docker Application Container Engine...
	I0501 04:16:51.350506    4352 command_runner.go:130] > May 01 04:15:27 multinode-289800 dockerd[651]: time="2024-05-01T04:15:27.766325752Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0501 04:16:51.350561    4352 command_runner.go:130] > May 01 04:15:27 multinode-289800 dockerd[651]: time="2024-05-01T04:15:27.766547266Z" level=info msg="Daemon shutdown complete"
	I0501 04:16:51.350561    4352 command_runner.go:130] > May 01 04:15:27 multinode-289800 dockerd[651]: time="2024-05-01T04:15:27.766599570Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0501 04:16:51.350614    4352 command_runner.go:130] > May 01 04:15:27 multinode-289800 dockerd[651]: time="2024-05-01T04:15:27.766627071Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0501 04:16:51.350614    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 systemd[1]: docker.service: Deactivated successfully.
	I0501 04:16:51.350614    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 systemd[1]: Stopped Docker Application Container Engine.
	I0501 04:16:51.350614    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 systemd[1]: Starting Docker Application Container Engine...
	I0501 04:16:51.350672    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:28.848356633Z" level=info msg="Starting up"
	I0501 04:16:51.350672    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:28.852105170Z" level=info msg="containerd not running, starting managed containerd"
	I0501 04:16:51.350727    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:28.856097222Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1051
	I0501 04:16:51.350727    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.886653253Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0501 04:16:51.350727    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.918280652Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0501 04:16:51.350821    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.918435561Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0501 04:16:51.350896    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.918674977Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0501 04:16:51.350938    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.918835587Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0501 04:16:51.350938    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.918914392Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0501 04:16:51.350938    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.919007298Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0501 04:16:51.351015    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.919224411Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0501 04:16:51.351015    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.919342019Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0501 04:16:51.351015    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.919363920Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0501 04:16:51.351015    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.919374921Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0501 04:16:51.351015    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.919401422Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0501 04:16:51.351136    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.919522430Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0501 04:16:51.351169    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.922355909Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0501 04:16:51.351169    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.922472116Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0501 04:16:51.351169    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.922606725Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0501 04:16:51.351169    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.922701131Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0501 04:16:51.351169    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.922740333Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0501 04:16:51.351292    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.922844740Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0501 04:16:51.351292    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.922863441Z" level=info msg="metadata content store policy set" policy=shared
	I0501 04:16:51.351330    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.923199662Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0501 04:16:51.351330    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.923345572Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0501 04:16:51.351330    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.923371973Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0501 04:16:51.351406    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.923387074Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0501 04:16:51.351406    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.923416076Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0501 04:16:51.351406    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.923482380Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0501 04:16:51.351508    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.923717595Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0501 04:16:51.351508    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.923914208Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0501 04:16:51.351562    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924012314Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0501 04:16:51.351607    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924084218Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0501 04:16:51.351659    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924103120Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0501 04:16:51.351659    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924116520Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0501 04:16:51.351659    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924137922Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0501 04:16:51.351738    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924154823Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0501 04:16:51.351738    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924172824Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0501 04:16:51.351825    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924195925Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0501 04:16:51.351880    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924208026Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0501 04:16:51.351905    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924219327Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0501 04:16:51.351905    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924257229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.352053    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924272330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.352090    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924285031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.352115    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924297632Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.352115    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924325534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.352115    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924337534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.352191    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924348235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.352218    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924360536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.352218    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924373137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.352218    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924390538Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.352218    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924403039Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.352297    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924414139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.352297    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924426140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.352352    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924440741Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0501 04:16:51.352392    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924459642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.352537    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924475143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.352616    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924504745Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0501 04:16:51.352642    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924545247Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0501 04:16:51.352642    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924640554Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0501 04:16:51.352714    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924658655Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0501 04:16:51.352740    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924671555Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0501 04:16:51.352740    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924736560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.352864    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924890569Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0501 04:16:51.352952    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924908370Z" level=info msg="NRI interface is disabled by configuration."
	I0501 04:16:51.352998    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.925252392Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0501 04:16:51.352998    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.925540810Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0501 04:16:51.352998    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.925606615Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0501 04:16:51.353056    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.925720522Z" level=info msg="containerd successfully booted in 0.040328s"
	I0501 04:16:51.353056    4352 command_runner.go:130] > May 01 04:15:29 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:29.902259635Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0501 04:16:51.353056    4352 command_runner.go:130] > May 01 04:15:29 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:29.938734241Z" level=info msg="Loading containers: start."
	I0501 04:16:51.353164    4352 command_runner.go:130] > May 01 04:15:30 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:30.252276255Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0501 04:16:51.353247    4352 command_runner.go:130] > May 01 04:15:30 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:30.346319398Z" level=info msg="Loading containers: done."
	I0501 04:16:51.353299    4352 command_runner.go:130] > May 01 04:15:30 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:30.374198460Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0501 04:16:51.353299    4352 command_runner.go:130] > May 01 04:15:30 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:30.374439776Z" level=info msg="Daemon has completed initialization"
	I0501 04:16:51.353299    4352 command_runner.go:130] > May 01 04:15:30 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:30.424572544Z" level=info msg="API listen on [::]:2376"
	I0501 04:16:51.353380    4352 command_runner.go:130] > May 01 04:15:30 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:30.424740154Z" level=info msg="API listen on /var/run/docker.sock"
	I0501 04:16:51.353419    4352 command_runner.go:130] > May 01 04:15:30 multinode-289800 systemd[1]: Started Docker Application Container Engine.
	I0501 04:16:51.353419    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0501 04:16:51.353459    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0501 04:16:51.353459    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0501 04:16:51.353459    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Start docker client with request timeout 0s"
	I0501 04:16:51.353514    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0501 04:16:51.353514    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Loaded network plugin cni"
	I0501 04:16:51.353562    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0501 04:16:51.353562    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0501 04:16:51.353622    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0501 04:16:51.353675    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0501 04:16:51.353694    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Start cri-dockerd grpc backend"
	I0501 04:16:51.353725    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0501 04:16:51.353725    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:37Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-8w9hq_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"9d509d032dc607c6f771d62e39b125d9ec4ef121fdbac0798c929fe3f1662c88\""
	I0501 04:16:51.353725    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:37Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-fc5497c4f-cc6mk_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"79bf9ebb58e36ddfba4654e8de212598f75bb256849f4fa384c80d54954f68f5\""
	I0501 04:16:51.353725    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:37Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-x9zrw_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"baf9e690eb533d1d1d65dee3905f907946c145ab490fd4e62c3d724a0ba12193\""
	I0501 04:16:51.353725    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:37.812954162Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:51.353725    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:37.813140474Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:51.353725    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:37.813251281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.353725    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:37.813750813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.353725    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:37.908552604Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:51.353725    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:37.908932028Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:51.353725    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:37.908977330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.353725    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:37.909354354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.353725    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a8e27176eab83655d3f2a52c63326669ef8c796c68155930f53f421789d826f1/resolv.conf as [nameserver 172.28.208.1]"
	I0501 04:16:51.353725    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.022633513Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:51.354271    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.022720619Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:51.354271    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.022735220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.354271    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.024008700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.354271    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.032046108Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:51.354390    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.032104212Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:51.354390    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.032117713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.354390    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.032205718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.354463    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3fd53aa8d8f5d6402b604adf1c8c8ae2b5a8c80b90e94152f45e7cb16a71fe46/resolv.conf as [nameserver 172.28.208.1]"
	I0501 04:16:51.354496    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/51e331e75da779107616d5efa0d497152d9c85407f1c172c9ae536bcc2b22bad/resolv.conf as [nameserver 172.28.208.1]"
	I0501 04:16:51.354546    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6e076eed49263cec5b0b06bbaa425cab2bf4a4b0a05e6dfa37993b20dff5ed93/resolv.conf as [nameserver 172.28.208.1]"
	I0501 04:16:51.354577    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.361204210Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:51.354577    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.366294631Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:51.354577    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.366382437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.354577    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.366929671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.354577    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.427356590Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:51.354577    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.427966129Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:51.354577    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.428178542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.354577    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.428971092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.354577    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.563334483Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:51.354577    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.563717708Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:51.354577    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.568278296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.354577    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.568462908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.355153    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.619028803Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:51.355153    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.619423228Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:51.355153    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.619676644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.355153    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.620258481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.355153    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:42Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0501 04:16:51.355153    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.647452681Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:51.355153    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.648388440Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:51.355153    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.648417242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.355153    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.648703160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.355153    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.650660084Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:51.355153    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.650945902Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:51.355153    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.652733715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.355153    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.653556567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.355153    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.703188303Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:51.355153    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.703325612Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:51.355153    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.703348713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.355153    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.704951615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.355153    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/65bff4b6a8ae020fee0da9e1a818c4bac4d9a43a831eb7b5550b254c1f181ec7/resolv.conf as [nameserver 172.28.208.1]"
	I0501 04:16:51.355153    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9055d30512df38a5bce19ed5afcfdc450a7bd87a1eb169342c8bc7a42e81666f/resolv.conf as [nameserver 172.28.208.1]"
	I0501 04:16:51.355153    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.160153282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:51.355153    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.160628512Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:51.355153    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.160751120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.355153    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.161166246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.355153    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f79e484da66a15667f79326d8bae0a570ba551fd2e02926fd663a292f6b15752/resolv.conf as [nameserver 172.28.208.1]"
	I0501 04:16:51.355153    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.303671652Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:51.355153    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.303759357Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:51.355153    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.304597710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.355153    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.304856126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.355908    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.623383256Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:51.355908    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.623630372Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.623719877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.624154405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:15 multinode-289800 dockerd[1045]: time="2024-05-01T04:16:15.086534690Z" level=info msg="ignoring event" container=01deddefba52af094ad6ca083f5c2bee336ace05959dec5d34b15a9ba6cc2539 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:15 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:15.087315924Z" level=info msg="shim disconnected" id=01deddefba52af094ad6ca083f5c2bee336ace05959dec5d34b15a9ba6cc2539 namespace=moby
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:15 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:15.087789544Z" level=warning msg="cleaning up after shim disconnected" id=01deddefba52af094ad6ca083f5c2bee336ace05959dec5d34b15a9ba6cc2539 namespace=moby
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:15 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:15.089400515Z" level=info msg="cleaning up dead shim" namespace=moby
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:29 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:29.233206077Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:29 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:29.233350185Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:29 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:29.233373086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:29 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:29.235465402Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.458837761Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.459864323Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.464281891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.464897329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.543149980Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.543283788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.543320690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.543548404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.598181021Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.598854262Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.599065375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.600816581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:16:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ba9a40d190b009b916e22db66996ed829a6cc973db25f55dae89d747629a546b/resolv.conf as [nameserver 172.28.208.1]"
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:16:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2c1e1e1d13f303dcd2ce93f0a883ff4415e684c864a3974a393b2aaba3328348/resolv.conf as [nameserver 172.28.208.1]"
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:16:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b85f507755ab5fd65a5328f5567d969dd5f974c01ee4c5d8e38f03dc6ec900a2/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.282921443Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.283150129Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.283743193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.291296831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.360201124Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.360588900Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.360677995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.361100969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.575166498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.575320589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.575446381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.357033    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.576248232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.357033    4352 command_runner.go:130] > May 01 04:16:51 multinode-289800 dockerd[1045]: 2024/05/01 04:16:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:16:51.357280    4352 command_runner.go:130] > May 01 04:16:51 multinode-289800 dockerd[1045]: 2024/05/01 04:16:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:16:51.357340    4352 command_runner.go:130] > May 01 04:16:51 multinode-289800 dockerd[1045]: 2024/05/01 04:16:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
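The cri-dockerd entries above show minikube's DNS wiring: two containers get a host-side address (nameserver 172.28.208.1, most likely the Hyper-V virtual switch on this setup) written into resolv.conf, while the third gets the kube-dns service IP (10.96.0.10) plus the cluster search domains, i.e. ClusterFirst DNS. A minimal way to check what a given container actually received, assuming SSH access to the node and a known container ID (the ID below is a placeholder):

	minikube -p multinode-289800 ssh -- docker exec <container-id> cat /etc/resolv.conf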
	I0501 04:16:51.390474    4352 logs.go:123] Gathering logs for kube-apiserver [18cd30f3ad28] ...
	I0501 04:16:51.390474    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cd30f3ad28"
	I0501 04:16:51.422930    4352 command_runner.go:130] ! I0501 04:15:39.445795       1 options.go:221] external host was not specified, using 172.28.209.199
	I0501 04:16:51.422930    4352 command_runner.go:130] ! I0501 04:15:39.453956       1 server.go:148] Version: v1.30.0
	I0501 04:16:51.423357    4352 command_runner.go:130] ! I0501 04:15:39.454357       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:16:51.423357    4352 command_runner.go:130] ! I0501 04:15:40.258184       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0501 04:16:51.423357    4352 command_runner.go:130] ! I0501 04:15:40.258591       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0501 04:16:51.423556    4352 command_runner.go:130] ! I0501 04:15:40.260085       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0501 04:16:51.423556    4352 command_runner.go:130] ! I0501 04:15:40.260405       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0501 04:16:51.423556    4352 command_runner.go:130] ! I0501 04:15:40.261810       1 instance.go:299] Using reconciler: lease
	I0501 04:16:51.423556    4352 command_runner.go:130] ! I0501 04:15:40.801281       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0501 04:16:51.423556    4352 command_runner.go:130] ! W0501 04:15:40.801386       1 genericapiserver.go:733] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:51.423556    4352 command_runner.go:130] ! I0501 04:15:41.090803       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0501 04:16:51.423556    4352 command_runner.go:130] ! I0501 04:15:41.091252       1 instance.go:696] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0501 04:16:51.423556    4352 command_runner.go:130] ! I0501 04:15:41.359171       1 instance.go:696] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0501 04:16:51.423556    4352 command_runner.go:130] ! I0501 04:15:41.532740       1 instance.go:696] API group "resource.k8s.io" is not enabled, skipping.
	I0501 04:16:51.423556    4352 command_runner.go:130] ! I0501 04:15:41.570911       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0501 04:16:51.423556    4352 command_runner.go:130] ! W0501 04:15:41.571018       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:51.423556    4352 command_runner.go:130] ! W0501 04:15:41.571046       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0501 04:16:51.423556    4352 command_runner.go:130] ! I0501 04:15:41.571875       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0501 04:16:51.423556    4352 command_runner.go:130] ! W0501 04:15:41.572053       1 genericapiserver.go:733] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:51.423556    4352 command_runner.go:130] ! I0501 04:15:41.573317       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0501 04:16:51.423556    4352 command_runner.go:130] ! I0501 04:15:41.574692       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0501 04:16:51.423556    4352 command_runner.go:130] ! W0501 04:15:41.574726       1 genericapiserver.go:733] Skipping API autoscaling/v2beta1 because it has no resources.
	I0501 04:16:51.423556    4352 command_runner.go:130] ! W0501 04:15:41.574734       1 genericapiserver.go:733] Skipping API autoscaling/v2beta2 because it has no resources.
	I0501 04:16:51.423556    4352 command_runner.go:130] ! I0501 04:15:41.576633       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0501 04:16:51.423556    4352 command_runner.go:130] ! W0501 04:15:41.576726       1 genericapiserver.go:733] Skipping API batch/v1beta1 because it has no resources.
	I0501 04:16:51.423556    4352 command_runner.go:130] ! I0501 04:15:41.577645       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0501 04:16:51.423556    4352 command_runner.go:130] ! W0501 04:15:41.577739       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:51.423556    4352 command_runner.go:130] ! W0501 04:15:41.577748       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0501 04:16:51.423556    4352 command_runner.go:130] ! I0501 04:15:41.578543       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0501 04:16:51.423556    4352 command_runner.go:130] ! W0501 04:15:41.578618       1 genericapiserver.go:733] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:51.423556    4352 command_runner.go:130] ! W0501 04:15:41.578731       1 genericapiserver.go:733] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:51.423556    4352 command_runner.go:130] ! I0501 04:15:41.579623       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0501 04:16:51.423556    4352 command_runner.go:130] ! I0501 04:15:41.582482       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0501 04:16:51.423556    4352 command_runner.go:130] ! W0501 04:15:41.582572       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:51.423556    4352 command_runner.go:130] ! W0501 04:15:41.582581       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0501 04:16:51.423556    4352 command_runner.go:130] ! I0501 04:15:41.583284       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0501 04:16:51.423556    4352 command_runner.go:130] ! W0501 04:15:41.583417       1 genericapiserver.go:733] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:51.423556    4352 command_runner.go:130] ! W0501 04:15:41.583428       1 genericapiserver.go:733] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0501 04:16:51.423556    4352 command_runner.go:130] ! I0501 04:15:41.585084       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0501 04:16:51.423556    4352 command_runner.go:130] ! W0501 04:15:41.585203       1 genericapiserver.go:733] Skipping API policy/v1beta1 because it has no resources.
	I0501 04:16:51.423556    4352 command_runner.go:130] ! I0501 04:15:41.588956       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0501 04:16:51.423556    4352 command_runner.go:130] ! W0501 04:15:41.589055       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:51.423556    4352 command_runner.go:130] ! W0501 04:15:41.589067       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0501 04:16:51.423556    4352 command_runner.go:130] ! I0501 04:15:41.589951       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0501 04:16:51.423556    4352 command_runner.go:130] ! W0501 04:15:41.590056       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:51.423556    4352 command_runner.go:130] ! W0501 04:15:41.590066       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0501 04:16:51.424317    4352 command_runner.go:130] ! I0501 04:15:41.593577       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0501 04:16:51.424317    4352 command_runner.go:130] ! W0501 04:15:41.593674       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:51.424362    4352 command_runner.go:130] ! W0501 04:15:41.593684       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0501 04:16:51.424362    4352 command_runner.go:130] ! I0501 04:15:41.595694       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0501 04:16:51.424362    4352 command_runner.go:130] ! I0501 04:15:41.597680       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0501 04:16:51.424362    4352 command_runner.go:130] ! W0501 04:15:41.597864       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0501 04:16:51.424362    4352 command_runner.go:130] ! W0501 04:15:41.597875       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:51.424362    4352 command_runner.go:130] ! I0501 04:15:41.603955       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0501 04:16:51.424362    4352 command_runner.go:130] ! W0501 04:15:41.604059       1 genericapiserver.go:733] Skipping API apps/v1beta2 because it has no resources.
	I0501 04:16:51.424362    4352 command_runner.go:130] ! W0501 04:15:41.604069       1 genericapiserver.go:733] Skipping API apps/v1beta1 because it has no resources.
	I0501 04:16:51.424362    4352 command_runner.go:130] ! I0501 04:15:41.607445       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0501 04:16:51.424486    4352 command_runner.go:130] ! W0501 04:15:41.607533       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:51.424486    4352 command_runner.go:130] ! W0501 04:15:41.607543       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0501 04:16:51.424534    4352 command_runner.go:130] ! I0501 04:15:41.608797       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0501 04:16:51.424534    4352 command_runner.go:130] ! W0501 04:15:41.608817       1 genericapiserver.go:733] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:51.424871    4352 command_runner.go:130] ! I0501 04:15:41.625599       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0501 04:16:51.425324    4352 command_runner.go:130] ! W0501 04:15:41.625618       1 genericapiserver.go:733] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:51.425515    4352 command_runner.go:130] ! I0501 04:15:42.332139       1 secure_serving.go:213] Serving securely on [::]:8443
	I0501 04:16:51.425573    4352 command_runner.go:130] ! I0501 04:15:42.332337       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0501 04:16:51.425573    4352 command_runner.go:130] ! I0501 04:15:42.332595       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0501 04:16:51.425573    4352 command_runner.go:130] ! I0501 04:15:42.333006       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0501 04:16:51.425642    4352 command_runner.go:130] ! I0501 04:15:42.333577       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0501 04:16:51.425642    4352 command_runner.go:130] ! I0501 04:15:42.333909       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0501 04:16:51.425695    4352 command_runner.go:130] ! I0501 04:15:42.334990       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0501 04:16:51.425695    4352 command_runner.go:130] ! I0501 04:15:42.335027       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0501 04:16:51.425695    4352 command_runner.go:130] ! I0501 04:15:42.335107       1 aggregator.go:163] waiting for initial CRD sync...
	I0501 04:16:51.425744    4352 command_runner.go:130] ! I0501 04:15:42.335378       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0501 04:16:51.425767    4352 command_runner.go:130] ! I0501 04:15:42.335424       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0501 04:16:51.425767    4352 command_runner.go:130] ! I0501 04:15:42.335517       1 available_controller.go:423] Starting AvailableConditionController
	I0501 04:16:51.425805    4352 command_runner.go:130] ! I0501 04:15:42.335533       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0501 04:16:51.425805    4352 command_runner.go:130] ! I0501 04:15:42.335556       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0501 04:16:51.425853    4352 command_runner.go:130] ! I0501 04:15:42.337835       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0501 04:16:51.425853    4352 command_runner.go:130] ! I0501 04:15:42.338196       1 controller.go:116] Starting legacy_token_tracking_controller
	I0501 04:16:51.425853    4352 command_runner.go:130] ! I0501 04:15:42.338360       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0501 04:16:51.425920    4352 command_runner.go:130] ! I0501 04:15:42.338519       1 controller.go:78] Starting OpenAPI AggregationController
	I0501 04:16:51.425920    4352 command_runner.go:130] ! I0501 04:15:42.339167       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0501 04:16:51.425920    4352 command_runner.go:130] ! I0501 04:15:42.339360       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0501 04:16:51.426076    4352 command_runner.go:130] ! I0501 04:15:42.339853       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0501 04:16:51.426076    4352 command_runner.go:130] ! I0501 04:15:42.361139       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0501 04:16:51.426076    4352 command_runner.go:130] ! I0501 04:15:42.361155       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0501 04:16:51.426138    4352 command_runner.go:130] ! I0501 04:15:42.361192       1 controller.go:139] Starting OpenAPI controller
	I0501 04:16:51.426138    4352 command_runner.go:130] ! I0501 04:15:42.361219       1 controller.go:87] Starting OpenAPI V3 controller
	I0501 04:16:51.426138    4352 command_runner.go:130] ! I0501 04:15:42.361233       1 naming_controller.go:291] Starting NamingConditionController
	I0501 04:16:51.426193    4352 command_runner.go:130] ! I0501 04:15:42.361253       1 establishing_controller.go:76] Starting EstablishingController
	I0501 04:16:51.426255    4352 command_runner.go:130] ! I0501 04:15:42.361274       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0501 04:16:51.426336    4352 command_runner.go:130] ! I0501 04:15:42.361288       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0501 04:16:51.426336    4352 command_runner.go:130] ! I0501 04:15:42.361301       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0501 04:16:51.426336    4352 command_runner.go:130] ! I0501 04:15:42.395816       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0501 04:16:51.426397    4352 command_runner.go:130] ! I0501 04:15:42.396242       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0501 04:16:51.426453    4352 command_runner.go:130] ! I0501 04:15:42.496145       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0501 04:16:51.426453    4352 command_runner.go:130] ! I0501 04:15:42.510644       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0501 04:16:51.426534    4352 command_runner.go:130] ! I0501 04:15:42.510702       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0501 04:16:51.426534    4352 command_runner.go:130] ! I0501 04:15:42.510859       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0501 04:16:51.426534    4352 command_runner.go:130] ! I0501 04:15:42.518082       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0501 04:16:51.426534    4352 command_runner.go:130] ! I0501 04:15:42.518718       1 aggregator.go:165] initial CRD sync complete...
	I0501 04:16:51.426534    4352 command_runner.go:130] ! I0501 04:15:42.518822       1 autoregister_controller.go:141] Starting autoregister controller
	I0501 04:16:51.426534    4352 command_runner.go:130] ! I0501 04:15:42.518833       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0501 04:16:51.426672    4352 command_runner.go:130] ! I0501 04:15:42.518839       1 cache.go:39] Caches are synced for autoregister controller
	I0501 04:16:51.426672    4352 command_runner.go:130] ! I0501 04:15:42.535654       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0501 04:16:51.426672    4352 command_runner.go:130] ! I0501 04:15:42.538744       1 shared_informer.go:320] Caches are synced for configmaps
	I0501 04:16:51.426672    4352 command_runner.go:130] ! I0501 04:15:42.553249       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0501 04:16:51.426672    4352 command_runner.go:130] ! I0501 04:15:42.558886       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0501 04:16:51.426769    4352 command_runner.go:130] ! I0501 04:15:42.560982       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0501 04:16:51.426769    4352 command_runner.go:130] ! I0501 04:15:42.561020       1 policy_source.go:224] refreshing policies
	I0501 04:16:51.426769    4352 command_runner.go:130] ! I0501 04:15:42.641630       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0501 04:16:51.426830    4352 command_runner.go:130] ! I0501 04:15:43.354880       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0501 04:16:51.426830    4352 command_runner.go:130] ! W0501 04:15:43.981051       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.28.209.199]
	I0501 04:16:51.426830    4352 command_runner.go:130] ! I0501 04:15:43.982709       1 controller.go:615] quota admission added evaluator for: endpoints
	I0501 04:16:51.426830    4352 command_runner.go:130] ! I0501 04:15:44.022518       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0501 04:16:51.426897    4352 command_runner.go:130] ! I0501 04:15:45.344677       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0501 04:16:51.426897    4352 command_runner.go:130] ! I0501 04:15:45.642753       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0501 04:16:51.426897    4352 command_runner.go:130] ! I0501 04:15:45.672938       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0501 04:16:51.426897    4352 command_runner.go:130] ! I0501 04:15:45.801984       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0501 04:16:51.426966    4352 command_runner.go:130] ! I0501 04:15:45.823813       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
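These kube-apiserver lines are ordinary v1.30 startup: each served API group is added to the ResourceManager, retired alpha/beta versions are skipped because they expose no resources, and quota evaluators are registered as the first objects of each kind appear. To confirm which group/versions the restarted apiserver is actually serving, one option is the test binary's own kubectl passthrough (a sketch; any kubectl pointed at this context works):

	out/minikube-windows-amd64.exe -p multinode-289800 kubectl -- api-versions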
	I0501 04:16:51.438427    4352 logs.go:123] Gathering logs for coredns [8a0208aeafcf] ...
	I0501 04:16:51.438972    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a0208aeafcf"
	I0501 04:16:51.474551    4352 command_runner.go:130] > .:53
	I0501 04:16:51.474647    4352 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 93f4fe825695a41d5751760d66496ca9593f0bf8b9ec4239a6fbc33c30b6f070cfe5a066c5977052ab0ea80abbafa6083758e385108cb741ae48dc4b780ff0f9
	I0501 04:16:51.474647    4352 command_runner.go:130] > CoreDNS-1.11.1
	I0501 04:16:51.474647    4352 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0501 04:16:51.474684    4352 command_runner.go:130] > [INFO] 127.0.0.1:52159 - 35492 "HINFO IN 5750380281790413371.3552283498234348593. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.042351696s
	I0501 04:16:51.474684    4352 logs.go:123] Gathering logs for coredns [15c4496e3a9f] ...
	I0501 04:16:51.474684    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15c4496e3a9f"
	I0501 04:16:51.513087    4352 command_runner.go:130] > .:53
	I0501 04:16:51.513087    4352 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 93f4fe825695a41d5751760d66496ca9593f0bf8b9ec4239a6fbc33c30b6f070cfe5a066c5977052ab0ea80abbafa6083758e385108cb741ae48dc4b780ff0f9
	I0501 04:16:51.513087    4352 command_runner.go:130] > CoreDNS-1.11.1
	I0501 04:16:51.513087    4352 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0501 04:16:51.513087    4352 command_runner.go:130] > [INFO] 127.0.0.1:39552 - 50904 "HINFO IN 5304382971668517624.9064195615153089880. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.182051644s
	I0501 04:16:51.513847    4352 command_runner.go:130] > [INFO] 10.244.0.4:36718 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000271601s
	I0501 04:16:51.513847    4352 command_runner.go:130] > [INFO] 10.244.0.4:43708 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.179550625s
	I0501 04:16:51.513892    4352 command_runner.go:130] > [INFO] 10.244.1.2:58483 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144401s
	I0501 04:16:51.513892    4352 command_runner.go:130] > [INFO] 10.244.1.2:60628 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.0000807s
	I0501 04:16:51.513892    4352 command_runner.go:130] > [INFO] 10.244.0.4:37023 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.037009067s
	I0501 04:16:51.513892    4352 command_runner.go:130] > [INFO] 10.244.0.4:35134 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000257602s
	I0501 04:16:51.513892    4352 command_runner.go:130] > [INFO] 10.244.0.4:42831 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000330103s
	I0501 04:16:51.513892    4352 command_runner.go:130] > [INFO] 10.244.0.4:35030 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000223102s
	I0501 04:16:51.513892    4352 command_runner.go:130] > [INFO] 10.244.1.2:54088 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000207601s
	I0501 04:16:51.513892    4352 command_runner.go:130] > [INFO] 10.244.1.2:39978 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000230801s
	I0501 04:16:51.514013    4352 command_runner.go:130] > [INFO] 10.244.1.2:55944 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000162801s
	I0501 04:16:51.514013    4352 command_runner.go:130] > [INFO] 10.244.1.2:53350 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000088901s
	I0501 04:16:51.514013    4352 command_runner.go:130] > [INFO] 10.244.0.4:33705 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000251702s
	I0501 04:16:51.514013    4352 command_runner.go:130] > [INFO] 10.244.0.4:58457 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000202201s
	I0501 04:16:51.514106    4352 command_runner.go:130] > [INFO] 10.244.1.2:55547 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117201s
	I0501 04:16:51.514106    4352 command_runner.go:130] > [INFO] 10.244.1.2:52015 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000146501s
	I0501 04:16:51.514106    4352 command_runner.go:130] > [INFO] 10.244.0.4:59703 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000247901s
	I0501 04:16:51.514106    4352 command_runner.go:130] > [INFO] 10.244.0.4:43545 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000196701s
	I0501 04:16:51.514175    4352 command_runner.go:130] > [INFO] 10.244.1.2:36180 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000726s
	I0501 04:16:51.514175    4352 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0501 04:16:51.514175    4352 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
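This coredns log suggests cluster DNS was answering before the node restart: the startup HINFO self-probe returning NXDOMAIN is expected, in-cluster names such as kubernetes.default.svc.cluster.local resolve NOERROR, and the SIGTERM/lameduck lines indicate an orderly shutdown rather than a crash. A quick re-check of in-cluster resolution, sketched with the busybox image this suite already uses:

	kubectl --context multinode-289800 run dns-check --rm -it --restart=Never --image=gcr.io/k8s-minikube/busybox -- nslookup kubernetes.default.svc.cluster.local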
	I0501 04:16:51.515850    4352 logs.go:123] Gathering logs for kube-scheduler [06f1f84bfde1] ...
	I0501 04:16:51.515884    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f1f84bfde1"
	I0501 04:16:51.556092    4352 command_runner.go:130] ! I0501 03:52:10.476758       1 serving.go:380] Generated self-signed cert in-memory
	I0501 04:16:51.556092    4352 command_runner.go:130] ! W0501 03:52:12.175400       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0501 04:16:51.556092    4352 command_runner.go:130] ! W0501 03:52:12.175551       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0501 04:16:51.556092    4352 command_runner.go:130] ! W0501 03:52:12.175587       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0501 04:16:51.556092    4352 command_runner.go:130] ! W0501 03:52:12.175678       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0501 04:16:51.556092    4352 command_runner.go:130] ! I0501 03:52:12.246151       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0501 04:16:51.556092    4352 command_runner.go:130] ! I0501 03:52:12.246312       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:16:51.556092    4352 command_runner.go:130] ! I0501 03:52:12.251800       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0501 04:16:51.556092    4352 command_runner.go:130] ! I0501 03:52:12.252170       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0501 04:16:51.556092    4352 command_runner.go:130] ! I0501 03:52:12.253709       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0501 04:16:51.556092    4352 command_runner.go:130] ! I0501 03:52:12.254160       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0501 04:16:51.556092    4352 command_runner.go:130] ! W0501 03:52:12.257352       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0501 04:16:51.556092    4352 command_runner.go:130] ! E0501 03:52:12.257411       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0501 04:16:51.556092    4352 command_runner.go:130] ! W0501 03:52:12.261549       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0501 04:16:51.556092    4352 command_runner.go:130] ! E0501 03:52:12.261670       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0501 04:16:51.556092    4352 command_runner.go:130] ! W0501 03:52:12.263856       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0501 04:16:51.556092    4352 command_runner.go:130] ! E0501 03:52:12.263906       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0501 04:16:51.556617    4352 command_runner.go:130] ! W0501 03:52:12.270038       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:51.556706    4352 command_runner.go:130] ! E0501 03:52:12.270597       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:51.556706    4352 command_runner.go:130] ! W0501 03:52:12.271080       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:51.556706    4352 command_runner.go:130] ! E0501 03:52:12.271309       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:51.556706    4352 command_runner.go:130] ! W0501 03:52:12.271808       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0501 04:16:51.556706    4352 command_runner.go:130] ! E0501 03:52:12.272098       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0501 04:16:51.556706    4352 command_runner.go:130] ! W0501 03:52:12.272396       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0501 04:16:51.556706    4352 command_runner.go:130] ! W0501 03:52:12.273177       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0501 04:16:51.556706    4352 command_runner.go:130] ! E0501 03:52:12.273396       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0501 04:16:51.556706    4352 command_runner.go:130] ! W0501 03:52:12.273765       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0501 04:16:51.556706    4352 command_runner.go:130] ! E0501 03:52:12.273964       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0501 04:16:51.556706    4352 command_runner.go:130] ! W0501 03:52:12.274273       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0501 04:16:51.556706    4352 command_runner.go:130] ! E0501 03:52:12.274741       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0501 04:16:51.556706    4352 command_runner.go:130] ! E0501 03:52:12.275083       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0501 04:16:51.556706    4352 command_runner.go:130] ! W0501 03:52:12.275448       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:51.556706    4352 command_runner.go:130] ! E0501 03:52:12.275752       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:51.557369    4352 command_runner.go:130] ! W0501 03:52:12.276841       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0501 04:16:51.557520    4352 command_runner.go:130] ! E0501 03:52:12.278071       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0501 04:16:51.557520    4352 command_runner.go:130] ! W0501 03:52:12.277438       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0501 04:16:51.557520    4352 command_runner.go:130] ! E0501 03:52:12.278555       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0501 04:16:51.557520    4352 command_runner.go:130] ! W0501 03:52:12.279824       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:51.557520    4352 command_runner.go:130] ! E0501 03:52:12.280326       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:51.557520    4352 command_runner.go:130] ! W0501 03:52:12.280272       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0501 04:16:51.557520    4352 command_runner.go:130] ! E0501 03:52:12.280893       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0501 04:16:51.557520    4352 command_runner.go:130] ! W0501 03:52:13.100723       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:51.557520    4352 command_runner.go:130] ! E0501 03:52:13.101238       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:51.557520    4352 command_runner.go:130] ! W0501 03:52:13.102451       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0501 04:16:51.557520    4352 command_runner.go:130] ! E0501 03:52:13.102804       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0501 04:16:51.557520    4352 command_runner.go:130] ! W0501 03:52:13.188414       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0501 04:16:51.557520    4352 command_runner.go:130] ! E0501 03:52:13.188662       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0501 04:16:51.557520    4352 command_runner.go:130] ! W0501 03:52:13.194299       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0501 04:16:51.557520    4352 command_runner.go:130] ! E0501 03:52:13.194526       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0501 04:16:51.557520    4352 command_runner.go:130] ! W0501 03:52:13.234721       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0501 04:16:51.558042    4352 command_runner.go:130] ! E0501 03:52:13.235310       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0501 04:16:51.558128    4352 command_runner.go:130] ! W0501 03:52:13.292208       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0501 04:16:51.558128    4352 command_runner.go:130] ! E0501 03:52:13.292830       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0501 04:16:51.558128    4352 command_runner.go:130] ! W0501 03:52:13.389881       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0501 04:16:51.558128    4352 command_runner.go:130] ! E0501 03:52:13.390057       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0501 04:16:51.558128    4352 command_runner.go:130] ! W0501 03:52:13.433548       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0501 04:16:51.558128    4352 command_runner.go:130] ! E0501 03:52:13.433622       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0501 04:16:51.558128    4352 command_runner.go:130] ! W0501 03:52:13.511617       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:51.558128    4352 command_runner.go:130] ! E0501 03:52:13.511761       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:51.558128    4352 command_runner.go:130] ! W0501 03:52:13.522760       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:51.558128    4352 command_runner.go:130] ! E0501 03:52:13.522812       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:51.558128    4352 command_runner.go:130] ! W0501 03:52:13.723200       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0501 04:16:51.558839    4352 command_runner.go:130] ! E0501 03:52:13.723365       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0501 04:16:51.558839    4352 command_runner.go:130] ! W0501 03:52:13.767195       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0501 04:16:51.558839    4352 command_runner.go:130] ! E0501 03:52:13.767262       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0501 04:16:51.558839    4352 command_runner.go:130] ! W0501 03:52:13.799936       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:51.558839    4352 command_runner.go:130] ! E0501 03:52:13.799967       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:51.558839    4352 command_runner.go:130] ! W0501 03:52:13.840187       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0501 04:16:51.558839    4352 command_runner.go:130] ! E0501 03:52:13.840304       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0501 04:16:51.558839    4352 command_runner.go:130] ! W0501 03:52:13.853401       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0501 04:16:51.558839    4352 command_runner.go:130] ! E0501 03:52:13.853454       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0501 04:16:51.558839    4352 command_runner.go:130] ! I0501 03:52:16.553388       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0501 04:16:51.558839    4352 command_runner.go:130] ! E0501 04:13:09.401188       1 run.go:74] "command failed" err="finished without leader elect"
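The scheduler block ends with "finished without leader elect" at 04:13:09, which reads as the old scheduler process exiting after losing its leadership lease around the node restart rather than a startup failure; the earlier "forbidden" warnings from 03:52:12-13 cleared once caches synced at 03:52:16, and the log itself names the rolebinding fix to apply if they had persisted. To see who holds the scheduler lease after the restart, one could inspect the coordination lease (assuming the standard lease name kube-scheduler):

	kubectl --context multinode-289800 -n kube-system get lease kube-scheduler -o yaml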
	I0501 04:16:51.572999    4352 logs.go:123] Gathering logs for kube-proxy [502684407b0c] ...
	I0501 04:16:51.572999    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 502684407b0c"
	I0501 04:16:51.604012    4352 command_runner.go:130] ! I0501 03:52:31.254714       1 server_linux.go:69] "Using iptables proxy"
	I0501 04:16:51.604051    4352 command_runner.go:130] ! I0501 03:52:31.309383       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.28.209.152"]
	I0501 04:16:51.604051    4352 command_runner.go:130] ! I0501 03:52:31.368810       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0501 04:16:51.604051    4352 command_runner.go:130] ! I0501 03:52:31.368955       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0501 04:16:51.604051    4352 command_runner.go:130] ! I0501 03:52:31.368982       1 server_linux.go:165] "Using iptables Proxier"
	I0501 04:16:51.604051    4352 command_runner.go:130] ! I0501 03:52:31.375383       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0501 04:16:51.604051    4352 command_runner.go:130] ! I0501 03:52:31.376367       1 server.go:872] "Version info" version="v1.30.0"
	I0501 04:16:51.604051    4352 command_runner.go:130] ! I0501 03:52:31.376406       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:16:51.604051    4352 command_runner.go:130] ! I0501 03:52:31.379637       1 config.go:192] "Starting service config controller"
	I0501 04:16:51.604051    4352 command_runner.go:130] ! I0501 03:52:31.380342       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0501 04:16:51.604051    4352 command_runner.go:130] ! I0501 03:52:31.380587       1 config.go:101] "Starting endpoint slice config controller"
	I0501 04:16:51.604051    4352 command_runner.go:130] ! I0501 03:52:31.380650       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0501 04:16:51.604051    4352 command_runner.go:130] ! I0501 03:52:31.383140       1 config.go:319] "Starting node config controller"
	I0501 04:16:51.604051    4352 command_runner.go:130] ! I0501 03:52:31.383173       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0501 04:16:51.604051    4352 command_runner.go:130] ! I0501 03:52:31.480698       1 shared_informer.go:320] Caches are synced for service config
	I0501 04:16:51.604051    4352 command_runner.go:130] ! I0501 03:52:31.481316       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0501 04:16:51.604051    4352 command_runner.go:130] ! I0501 03:52:31.483428       1 shared_informer.go:320] Caches are synced for node config
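kube-proxy came up in iptables mode, found no IPv6 iptables support (hence single-stack IPv4), and set route_localnet=1 so NodePorts remain reachable on localhost; all three of its config caches synced, so proxying itself looks healthy here. Verifying that sysctl on the node is a one-liner (a sketch, assuming SSH access to the multinode-289800 VM):

	minikube -p multinode-289800 ssh -- sudo sysctl net.ipv4.conf.all.route_localnet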
	I0501 04:16:51.605073    4352 logs.go:123] Gathering logs for kube-controller-manager [66a1b89e6733] ...
	I0501 04:16:51.605073    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a1b89e6733"
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:39.740014       1 serving.go:380] Generated self-signed cert in-memory
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:40.254324       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:40.254368       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:40.263842       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:40.264273       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:40.265102       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:40.265435       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:44.420436       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:44.421597       1 controllermanager.go:759] "Started controller" controller="serviceaccount-token-controller"
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:44.430683       1 controllermanager.go:759] "Started controller" controller="persistentvolume-protection-controller"
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:44.430949       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:44.431056       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:44.437281       1 controllermanager.go:759] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:44.440408       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:44.437711       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:44.440933       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:44.450877       1 controllermanager.go:759] "Started controller" controller="replicationcontroller-controller"
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:44.452935       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:44.452958       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:44.458231       1 controllermanager.go:759] "Started controller" controller="serviceaccount-controller"
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:44.458525       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:44.458548       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:44.467611       1 controllermanager.go:759] "Started controller" controller="token-cleaner-controller"
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:44.468036       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:44.468093       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:44.468107       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:44.484825       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:44.484856       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:44.484892       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:44.485128       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:44.485186       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:44.485221       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0501 04:16:51.643407    4352 command_runner.go:130] ! I0501 04:15:44.485229       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0501 04:16:51.643407    4352 command_runner.go:130] ! I0501 04:15:44.485246       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0501 04:16:51.643407    4352 command_runner.go:130] ! I0501 04:15:44.485322       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0501 04:16:51.643407    4352 command_runner.go:130] ! I0501 04:15:44.488601       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0501 04:16:51.643520    4352 command_runner.go:130] ! I0501 04:15:44.488943       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0501 04:16:51.643520    4352 command_runner.go:130] ! I0501 04:15:44.488958       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0501 04:16:51.643520    4352 command_runner.go:130] ! I0501 04:15:44.488985       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0501 04:16:51.643520    4352 command_runner.go:130] ! I0501 04:15:44.523143       1 shared_informer.go:320] Caches are synced for tokens
	I0501 04:16:51.643606    4352 command_runner.go:130] ! I0501 04:15:44.644894       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0501 04:16:51.643606    4352 command_runner.go:130] ! I0501 04:15:44.645016       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0501 04:16:51.643645    4352 command_runner.go:130] ! I0501 04:15:44.645088       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0501 04:16:51.643682    4352 command_runner.go:130] ! I0501 04:15:44.645112       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0501 04:16:51.643708    4352 command_runner.go:130] ! I0501 04:15:44.646888       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0501 04:16:51.643708    4352 command_runner.go:130] ! W0501 04:15:44.646984       1 shared_informer.go:597] resyncPeriod 15h44m19.234758052s is smaller than resyncCheckPeriod 17h55m23.133739358s and the informer has already started. Changing it to 17h55m23.133739358s
	I0501 04:16:51.643708    4352 command_runner.go:130] ! W0501 04:15:44.647035       1 shared_informer.go:597] resyncPeriod 17h52m42.538614251s is smaller than resyncCheckPeriod 17h55m23.133739358s and the informer has already started. Changing it to 17h55m23.133739358s
	I0501 04:16:51.643832    4352 command_runner.go:130] ! I0501 04:15:44.647224       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0501 04:16:51.643892    4352 command_runner.go:130] ! I0501 04:15:44.647325       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0501 04:16:51.643940    4352 command_runner.go:130] ! I0501 04:15:44.647389       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0501 04:16:51.643940    4352 command_runner.go:130] ! I0501 04:15:44.647418       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0501 04:16:51.643996    4352 command_runner.go:130] ! I0501 04:15:44.647559       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0501 04:16:51.643996    4352 command_runner.go:130] ! I0501 04:15:44.647580       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0501 04:16:51.644037    4352 command_runner.go:130] ! I0501 04:15:44.648269       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0501 04:16:51.644083    4352 command_runner.go:130] ! I0501 04:15:44.648364       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0501 04:16:51.644123    4352 command_runner.go:130] ! I0501 04:15:44.648387       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0501 04:16:51.644176    4352 command_runner.go:130] ! I0501 04:15:44.648418       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0501 04:16:51.644176    4352 command_runner.go:130] ! I0501 04:15:44.648519       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0501 04:16:51.644176    4352 command_runner.go:130] ! I0501 04:15:44.648561       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0501 04:16:51.644176    4352 command_runner.go:130] ! I0501 04:15:44.648582       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0501 04:16:51.644313    4352 command_runner.go:130] ! I0501 04:15:44.648601       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0501 04:16:51.644313    4352 command_runner.go:130] ! I0501 04:15:44.648633       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0501 04:16:51.644366    4352 command_runner.go:130] ! I0501 04:15:44.648662       1 controllermanager.go:759] "Started controller" controller="resourcequota-controller"
	I0501 04:16:51.644575    4352 command_runner.go:130] ! I0501 04:15:44.649971       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0501 04:16:51.644575    4352 command_runner.go:130] ! I0501 04:15:44.649999       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0501 04:16:51.644575    4352 command_runner.go:130] ! I0501 04:15:44.650094       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0501 04:16:51.644575    4352 command_runner.go:130] ! I0501 04:15:44.658545       1 controllermanager.go:759] "Started controller" controller="daemonset-controller"
	I0501 04:16:51.644575    4352 command_runner.go:130] ! I0501 04:15:44.664070       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0501 04:16:51.644575    4352 command_runner.go:130] ! I0501 04:15:44.664109       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0501 04:16:51.644575    4352 command_runner.go:130] ! I0501 04:15:44.672333       1 controllermanager.go:759] "Started controller" controller="replicaset-controller"
	I0501 04:16:51.644575    4352 command_runner.go:130] ! I0501 04:15:44.672648       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0501 04:16:51.644575    4352 command_runner.go:130] ! I0501 04:15:44.673224       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0501 04:16:51.644575    4352 command_runner.go:130] ! E0501 04:15:44.680086       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0501 04:16:51.644575    4352 command_runner.go:130] ! I0501 04:15:44.680207       1 controllermanager.go:737] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0501 04:16:51.644575    4352 command_runner.go:130] ! I0501 04:15:44.686271       1 controllermanager.go:759] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0501 04:16:51.644575    4352 command_runner.go:130] ! I0501 04:15:44.687804       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0501 04:16:51.644575    4352 command_runner.go:130] ! I0501 04:15:44.688087       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0501 04:16:51.644575    4352 command_runner.go:130] ! I0501 04:15:44.691064       1 controllermanager.go:759] "Started controller" controller="ephemeral-volume-controller"
	I0501 04:16:51.645583    4352 command_runner.go:130] ! I0501 04:15:44.694139       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.694154       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.697309       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.697808       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.698725       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.709020       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.709557       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.718572       1 controllermanager.go:759] "Started controller" controller="bootstrap-signer-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.718866       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.731386       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.731502       1 controllermanager.go:759] "Started controller" controller="node-lifecycle-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.731520       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.731794       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.732008       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.732024       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.732060       1 controllermanager.go:737] "Warning: skipping controller" controller="node-route-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.739601       1 controllermanager.go:759] "Started controller" controller="clusterrole-aggregation-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.741937       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.742091       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.751335       1 controllermanager.go:759] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.758177       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.767021       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.776399       1 controllermanager.go:759] "Started controller" controller="ttl-after-finished-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.777830       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.780031       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.783346       1 controllermanager.go:759] "Started controller" controller="endpointslice-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.784386       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.784668       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.790586       1 controllermanager.go:759] "Started controller" controller="job-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.791028       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.791148       1 shared_informer.go:313] Waiting for caches to sync for job
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.795072       1 controllermanager.go:759] "Started controller" controller="deployment-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.795486       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.796321       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.806964       1 controllermanager.go:759] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.807399       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.808302       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.810677       1 controllermanager.go:759] "Started controller" controller="cronjob-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.811276       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.812128       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.814338       1 controllermanager.go:759] "Started controller" controller="persistentvolume-expander-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.814699       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.815465       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.818437       1 controllermanager.go:759] "Started controller" controller="taint-eviction-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.819004       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.818976       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.820305       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.820518       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.822359       1 controllermanager.go:759] "Started controller" controller="pod-garbage-collector-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.824878       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.825167       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.835687       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.835705       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.835739       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.836623       1 controllermanager.go:759] "Started controller" controller="garbage-collector-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! E0501 04:15:44.845522       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.845590       1 controllermanager.go:737] "Warning: skipping controller" controller="service-lb-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.975590       1 controllermanager.go:759] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.975737       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:45.026863       1 controllermanager.go:759] "Started controller" controller="endpointslice-mirroring-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:45.026966       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:45.026980       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:45.188029       1 controllermanager.go:759] "Started controller" controller="namespace-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:45.191154       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:45.191606       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:45.234916       1 controllermanager.go:759] "Started controller" controller="ttl-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:45.235592       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:45.235855       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:45.275946       1 controllermanager.go:759] "Started controller" controller="persistentvolume-binder-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:45.276219       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:45.277151       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:45.277668       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.347039       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.347226       1 controllermanager.go:759] "Started controller" controller="node-ipam-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.347657       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.347697       1 shared_informer.go:313] Waiting for caches to sync for node
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.351170       1 controllermanager.go:759] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.351453       1 controllermanager.go:737] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.351701       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.352658       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.355868       1 controllermanager.go:759] "Started controller" controller="endpoints-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.356195       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.356581       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.373530       1 controllermanager.go:759] "Started controller" controller="disruption-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.375966       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.376087       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.376099       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.381581       1 controllermanager.go:759] "Started controller" controller="statefulset-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.387752       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.398512       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.398855       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.433745       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.433841       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.434861       1 shared_informer.go:320] Caches are synced for PV protection
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.437855       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-289800\" does not exist"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.438225       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-289800-m02\" does not exist"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.438314       1 shared_informer.go:320] Caches are synced for TTL
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.438445       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-289800-m03\" does not exist"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.438531       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.441880       1 shared_informer.go:320] Caches are synced for crt configmap
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.442281       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.448289       1 shared_informer.go:320] Caches are synced for node
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.448378       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.448532       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.448564       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.448615       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.452662       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.453060       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.453136       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0501 04:16:51.647557    4352 command_runner.go:130] ! I0501 04:15:55.459094       1 shared_informer.go:320] Caches are synced for service account
	I0501 04:16:51.647610    4352 command_runner.go:130] ! I0501 04:15:55.465378       1 shared_informer.go:320] Caches are synced for daemon sets
	I0501 04:16:51.647610    4352 command_runner.go:130] ! I0501 04:15:55.468998       1 shared_informer.go:320] Caches are synced for PVC protection
	I0501 04:16:51.647610    4352 command_runner.go:130] ! I0501 04:15:55.476103       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0501 04:16:51.647667    4352 command_runner.go:130] ! I0501 04:15:55.479405       1 shared_informer.go:320] Caches are synced for persistent volume
	I0501 04:16:51.647667    4352 command_runner.go:130] ! I0501 04:15:55.480400       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0501 04:16:51.647667    4352 command_runner.go:130] ! I0501 04:15:55.485347       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0501 04:16:51.647667    4352 command_runner.go:130] ! I0501 04:15:55.485423       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0501 04:16:51.647762    4352 command_runner.go:130] ! I0501 04:15:55.485459       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0501 04:16:51.647762    4352 command_runner.go:130] ! I0501 04:15:55.488987       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0501 04:16:51.647797    4352 command_runner.go:130] ! I0501 04:15:55.489270       1 shared_informer.go:320] Caches are synced for attach detach
	I0501 04:16:51.647797    4352 command_runner.go:130] ! I0501 04:15:55.492066       1 shared_informer.go:320] Caches are synced for namespace
	I0501 04:16:51.647797    4352 command_runner.go:130] ! I0501 04:15:55.492447       1 shared_informer.go:320] Caches are synced for job
	I0501 04:16:51.647832    4352 command_runner.go:130] ! I0501 04:15:55.494972       1 shared_informer.go:320] Caches are synced for ephemeral
	I0501 04:16:51.647969    4352 command_runner.go:130] ! I0501 04:15:55.497059       1 shared_informer.go:320] Caches are synced for deployment
	I0501 04:16:51.647969    4352 command_runner.go:130] ! I0501 04:15:55.499153       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0501 04:16:51.647969    4352 command_runner.go:130] ! I0501 04:15:55.499594       1 shared_informer.go:320] Caches are synced for stateful set
	I0501 04:16:51.647969    4352 command_runner.go:130] ! I0501 04:15:55.509506       1 shared_informer.go:320] Caches are synced for HPA
	I0501 04:16:51.647969    4352 command_runner.go:130] ! I0501 04:15:55.513444       1 shared_informer.go:320] Caches are synced for cronjob
	I0501 04:16:51.647969    4352 command_runner.go:130] ! I0501 04:15:55.517356       1 shared_informer.go:320] Caches are synced for expand
	I0501 04:16:51.647969    4352 command_runner.go:130] ! I0501 04:15:55.519269       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0501 04:16:51.647969    4352 command_runner.go:130] ! I0501 04:15:55.521379       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0501 04:16:51.647969    4352 command_runner.go:130] ! I0501 04:15:55.527109       1 shared_informer.go:320] Caches are synced for GC
	I0501 04:16:51.647969    4352 command_runner.go:130] ! I0501 04:15:55.533712       1 shared_informer.go:320] Caches are synced for taint
	I0501 04:16:51.647969    4352 command_runner.go:130] ! I0501 04:15:55.534052       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0501 04:16:51.647969    4352 command_runner.go:130] ! I0501 04:15:55.562220       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-289800"
	I0501 04:16:51.647969    4352 command_runner.go:130] ! I0501 04:15:55.562294       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-289800-m02"
	I0501 04:16:51.647969    4352 command_runner.go:130] ! I0501 04:15:55.562374       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-289800-m03"
	I0501 04:16:51.647969    4352 command_runner.go:130] ! I0501 04:15:55.562434       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0501 04:16:51.647969    4352 command_runner.go:130] ! I0501 04:15:55.574228       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0501 04:16:51.647969    4352 command_runner.go:130] ! I0501 04:15:55.576283       1 shared_informer.go:320] Caches are synced for disruption
	I0501 04:16:51.647969    4352 command_runner.go:130] ! I0501 04:15:55.610948       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.488314ms"
	I0501 04:16:51.647969    4352 command_runner.go:130] ! I0501 04:15:55.611568       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="71.799µs"
	I0501 04:16:51.647969    4352 command_runner.go:130] ! I0501 04:15:55.619708       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="45.171745ms"
	I0501 04:16:51.647969    4352 command_runner.go:130] ! I0501 04:15:55.620238       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="472.596µs"
	I0501 04:16:51.647969    4352 command_runner.go:130] ! I0501 04:15:55.628824       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0501 04:16:51.647969    4352 command_runner.go:130] ! I0501 04:15:55.650837       1 shared_informer.go:320] Caches are synced for resource quota
	I0501 04:16:51.647969    4352 command_runner.go:130] ! I0501 04:15:55.657374       1 shared_informer.go:320] Caches are synced for endpoint
	I0501 04:16:51.647969    4352 command_runner.go:130] ! I0501 04:15:55.685503       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0501 04:16:51.647969    4352 command_runner.go:130] ! I0501 04:15:55.700006       1 shared_informer.go:320] Caches are synced for resource quota
	I0501 04:16:51.647969    4352 command_runner.go:130] ! I0501 04:15:56.136638       1 shared_informer.go:320] Caches are synced for garbage collector
	I0501 04:16:51.648551    4352 command_runner.go:130] ! I0501 04:15:56.136685       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0501 04:16:51.648601    4352 command_runner.go:130] ! I0501 04:15:56.152886       1 shared_informer.go:320] Caches are synced for garbage collector
	I0501 04:16:51.648601    4352 command_runner.go:130] ! I0501 04:16:16.638494       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:51.648601    4352 command_runner.go:130] ! I0501 04:16:35.670965       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.004646ms"
	I0501 04:16:51.648601    4352 command_runner.go:130] ! I0501 04:16:35.674472       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.702µs"
	I0501 04:16:51.648700    4352 command_runner.go:130] ! I0501 04:16:49.079199       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="127.703µs"
	I0501 04:16:51.648746    4352 command_runner.go:130] ! I0501 04:16:49.148697       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="24.735082ms"
	I0501 04:16:51.648746    4352 command_runner.go:130] ! I0501 04:16:49.149307       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="110.503µs"
	I0501 04:16:51.648746    4352 command_runner.go:130] ! I0501 04:16:49.187683       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.244247ms"
	I0501 04:16:51.648746    4352 command_runner.go:130] ! I0501 04:16:49.188221       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.9µs"
	I0501 04:16:51.648877    4352 command_runner.go:130] ! I0501 04:16:49.221273       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="25.255693ms"
	I0501 04:16:51.648924    4352 command_runner.go:130] ! I0501 04:16:49.221694       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="88.902µs"
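
The replica_set.go "Finished syncing ... duration=..." entries above record per-key sync latency, which makes them convenient for spotting slow reconciles in a captured log. The following hedged sketch (not part of minikube; the regexp is inferred from the log format shown in this report) reads such a log on stdin and flags syncs over a threshold:

    // slowsync.go: hypothetical helper for scanning "Finished syncing"
    // lines like the ones above. time.ParseDuration accepts the µs/ms
    // suffixes that appear in the log.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
        "time"
    )

    var syncLine = regexp.MustCompile(`key="([^"]+)" duration="([^"]+)"`)

    func main() {
        const threshold = 20 * time.Millisecond
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            m := syncLine.FindStringSubmatch(sc.Text())
            if m == nil {
                continue
            }
            d, err := time.ParseDuration(m[2])
            if err != nil {
                continue // unparsable duration; skip the line
            }
            if d > threshold {
                fmt.Printf("slow sync: %s took %s\n", m[1], d)
            }
        }
        if err := sc.Err(); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }

Fed the tail above (for example, docker logs --tail 400 66a1b89e6733 | go run slowsync.go), it would report the 36.488314ms busybox and 45.171745ms coredns ReplicaSet syncs while ignoring the sub-millisecond ones.
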
	I0501 04:16:51.666522    4352 logs.go:123] Gathering logs for kindnet [b7cae3f6b88b] ...
	I0501 04:16:51.667538    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7cae3f6b88b"
	I0501 04:16:51.701538    4352 command_runner.go:130] ! I0501 04:15:45.341459       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0501 04:16:51.701634    4352 command_runner.go:130] ! I0501 04:15:45.342196       1 main.go:107] hostIP = 172.28.209.199
	I0501 04:16:51.701634    4352 command_runner.go:130] ! podIP = 172.28.209.199
	I0501 04:16:51.701634    4352 command_runner.go:130] ! I0501 04:15:45.343348       1 main.go:116] setting mtu 1500 for CNI 
	I0501 04:16:51.701634    4352 command_runner.go:130] ! I0501 04:15:45.343391       1 main.go:146] kindnetd IP family: "ipv4"
	I0501 04:16:51.701634    4352 command_runner.go:130] ! I0501 04:15:45.343412       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0501 04:16:51.701634    4352 command_runner.go:130] ! I0501 04:16:15.765193       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0501 04:16:51.701634    4352 command_runner.go:130] ! I0501 04:16:15.817499       1 main.go:223] Handling node with IPs: map[172.28.209.199:{}]
	I0501 04:16:51.701634    4352 command_runner.go:130] ! I0501 04:16:15.817549       1 main.go:227] handling current node
	I0501 04:16:51.701634    4352 command_runner.go:130] ! I0501 04:16:15.818026       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.701634    4352 command_runner.go:130] ! I0501 04:16:15.818042       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.701634    4352 command_runner.go:130] ! I0501 04:16:15.818289       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.28.219.162 Flags: [] Table: 0} 
	I0501 04:16:51.701634    4352 command_runner.go:130] ! I0501 04:16:15.818416       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:51.701634    4352 command_runner.go:130] ! I0501 04:16:15.818477       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:51.701634    4352 command_runner.go:130] ! I0501 04:16:15.818548       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.28.223.145 Flags: [] Table: 0} 
	I0501 04:16:51.701634    4352 command_runner.go:130] ! I0501 04:16:25.834949       1 main.go:223] Handling node with IPs: map[172.28.209.199:{}]
	I0501 04:16:51.701634    4352 command_runner.go:130] ! I0501 04:16:25.834995       1 main.go:227] handling current node
	I0501 04:16:51.701634    4352 command_runner.go:130] ! I0501 04:16:25.835008       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.701634    4352 command_runner.go:130] ! I0501 04:16:25.835016       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.701634    4352 command_runner.go:130] ! I0501 04:16:25.835192       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:51.701634    4352 command_runner.go:130] ! I0501 04:16:25.835220       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:51.701634    4352 command_runner.go:130] ! I0501 04:16:35.845752       1 main.go:223] Handling node with IPs: map[172.28.209.199:{}]
	I0501 04:16:51.701634    4352 command_runner.go:130] ! I0501 04:16:35.845835       1 main.go:227] handling current node
	I0501 04:16:51.701634    4352 command_runner.go:130] ! I0501 04:16:35.845848       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.701634    4352 command_runner.go:130] ! I0501 04:16:35.845856       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.701634    4352 command_runner.go:130] ! I0501 04:16:35.846322       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:51.701634    4352 command_runner.go:130] ! I0501 04:16:35.846423       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:51.701634    4352 command_runner.go:130] ! I0501 04:16:45.855212       1 main.go:223] Handling node with IPs: map[172.28.209.199:{}]
	I0501 04:16:51.702179    4352 command_runner.go:130] ! I0501 04:16:45.855323       1 main.go:227] handling current node
	I0501 04:16:51.702179    4352 command_runner.go:130] ! I0501 04:16:45.855339       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.702179    4352 command_runner.go:130] ! I0501 04:16:45.855347       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.702179    4352 command_runner.go:130] ! I0501 04:16:45.856266       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:51.702257    4352 command_runner.go:130] ! I0501 04:16:45.856305       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
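
The "Adding route" entries above show kindnet programming one route per peer node: the peer's pod CIDR as the destination and the peer node's IP as the gateway. A rough Linux-only sketch of that step follows, using github.com/vishvananda/netlink (whose Route string form matches the dumps in the log; kindnet's exact implementation is an assumption here), with the values copied from the first route above:

    // addroute.go: hypothetical reconstruction of one "Adding route"
    // step. Requires root/NET_ADMIN and builds only on Linux.
    package main

    import (
        "log"
        "net"

        "github.com/vishvananda/netlink"
    )

    func main() {
        // Pod CIDR of multinode-289800-m02, reachable via that node's IP.
        _, dst, err := net.ParseCIDR("10.244.1.0/24")
        if err != nil {
            log.Fatal(err)
        }
        route := &netlink.Route{
            Dst: dst,
            Gw:  net.ParseIP("172.28.219.162"),
        }
        // RouteReplace is idempotent, so re-running it on every pass of
        // the ~10s "Handling node" loop seen in the log is harmless.
        if err := netlink.RouteReplace(route); err != nil {
            log.Fatal(err)
        }
    }

The fields left unset print as Ifindex: 0, Src: <nil>, Flags: [], Table: 0, which is exactly the shape of the route dumps in the log.
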
	I0501 04:16:51.705299    4352 logs.go:123] Gathering logs for kindnet [6d5f881ef398] ...
	I0501 04:16:51.705379    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d5f881ef398"
	I0501 04:16:51.753678    4352 command_runner.go:130] ! I0501 04:01:59.122485       1 main.go:227] handling current node
	I0501 04:16:51.753770    4352 command_runner.go:130] ! I0501 04:01:59.122501       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.753770    4352 command_runner.go:130] ! I0501 04:01:59.122510       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.753770    4352 command_runner.go:130] ! I0501 04:01:59.122690       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.753823    4352 command_runner.go:130] ! I0501 04:01:59.122722       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.753823    4352 command_runner.go:130] ! I0501 04:02:09.153658       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.753860    4352 command_runner.go:130] ! I0501 04:02:09.153775       1 main.go:227] handling current node
	I0501 04:16:51.753860    4352 command_runner.go:130] ! I0501 04:02:09.153793       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.753860    4352 command_runner.go:130] ! I0501 04:02:09.153803       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.753860    4352 command_runner.go:130] ! I0501 04:02:09.153946       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.753860    4352 command_runner.go:130] ! I0501 04:02:09.153980       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.753860    4352 command_runner.go:130] ! I0501 04:02:19.161031       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.753955    4352 command_runner.go:130] ! I0501 04:02:19.161061       1 main.go:227] handling current node
	I0501 04:16:51.753955    4352 command_runner.go:130] ! I0501 04:02:19.161073       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.753955    4352 command_runner.go:130] ! I0501 04:02:19.161079       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.754016    4352 command_runner.go:130] ! I0501 04:02:19.161177       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.754016    4352 command_runner.go:130] ! I0501 04:02:19.161185       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.754050    4352 command_runner.go:130] ! I0501 04:02:29.181653       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.754050    4352 command_runner.go:130] ! I0501 04:02:29.181721       1 main.go:227] handling current node
	I0501 04:16:51.754050    4352 command_runner.go:130] ! I0501 04:02:29.181735       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.754050    4352 command_runner.go:130] ! I0501 04:02:29.181742       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.754050    4352 command_runner.go:130] ! I0501 04:02:29.182277       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.754050    4352 command_runner.go:130] ! I0501 04:02:29.182369       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.754050    4352 command_runner.go:130] ! I0501 04:02:39.195902       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.754050    4352 command_runner.go:130] ! I0501 04:02:39.196079       1 main.go:227] handling current node
	I0501 04:16:51.754050    4352 command_runner.go:130] ! I0501 04:02:39.196095       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.754050    4352 command_runner.go:130] ! I0501 04:02:39.196105       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.754050    4352 command_runner.go:130] ! I0501 04:02:39.196558       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.754222    4352 command_runner.go:130] ! I0501 04:02:39.196649       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.754222    4352 command_runner.go:130] ! I0501 04:02:49.209858       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.754222    4352 command_runner.go:130] ! I0501 04:02:49.209973       1 main.go:227] handling current node
	I0501 04:16:51.754265    4352 command_runner.go:130] ! I0501 04:02:49.210027       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.754265    4352 command_runner.go:130] ! I0501 04:02:49.210041       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.754265    4352 command_runner.go:130] ! I0501 04:02:49.210461       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.754265    4352 command_runner.go:130] ! I0501 04:02:49.210617       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.754265    4352 command_runner.go:130] ! I0501 04:02:59.219550       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.754331    4352 command_runner.go:130] ! I0501 04:02:59.219615       1 main.go:227] handling current node
	I0501 04:16:51.754331    4352 command_runner.go:130] ! I0501 04:02:59.219631       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.754331    4352 command_runner.go:130] ! I0501 04:02:59.219638       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.754331    4352 command_runner.go:130] ! I0501 04:02:59.220333       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.754390    4352 command_runner.go:130] ! I0501 04:02:59.220436       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:09.231302       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:09.232437       1 main.go:227] handling current node
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:09.232648       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:09.232851       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:09.233578       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:09.233631       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:19.245975       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:19.246060       1 main.go:227] handling current node
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:19.246073       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:19.246081       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:19.246386       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:19.246423       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:29.258941       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:29.259020       1 main.go:227] handling current node
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:29.259036       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:29.259044       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:29.259485       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:29.259520       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:39.269941       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:39.270129       1 main.go:227] handling current node
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:39.270152       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:39.270161       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:39.270403       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:39.270438       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:49.282880       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:49.283025       1 main.go:227] handling current node
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:49.283045       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:49.283054       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:49.283773       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:49.283792       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:59.297110       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:59.297155       1 main.go:227] handling current node
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:59.297169       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:59.297177       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:59.297656       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:59.297688       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:04:09.310638       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:04:09.311476       1 main.go:227] handling current node
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:04:09.311969       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.754939    4352 command_runner.go:130] ! I0501 04:04:09.312340       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.754939    4352 command_runner.go:130] ! I0501 04:04:09.313291       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.754939    4352 command_runner.go:130] ! I0501 04:04:09.313332       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.754939    4352 command_runner.go:130] ! I0501 04:04:19.324939       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.755012    4352 command_runner.go:130] ! I0501 04:04:19.325084       1 main.go:227] handling current node
	I0501 04:16:51.755012    4352 command_runner.go:130] ! I0501 04:04:19.325480       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.755012    4352 command_runner.go:130] ! I0501 04:04:19.325493       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.755058    4352 command_runner.go:130] ! I0501 04:04:19.325923       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.755058    4352 command_runner.go:130] ! I0501 04:04:19.326083       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.755099    4352 command_runner.go:130] ! I0501 04:04:29.332468       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.755099    4352 command_runner.go:130] ! I0501 04:04:29.332576       1 main.go:227] handling current node
	I0501 04:16:51.755134    4352 command_runner.go:130] ! I0501 04:04:29.332619       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.755134    4352 command_runner.go:130] ! I0501 04:04:29.332645       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.755134    4352 command_runner.go:130] ! I0501 04:04:29.332818       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.755134    4352 command_runner.go:130] ! I0501 04:04:29.332831       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.755134    4352 command_runner.go:130] ! I0501 04:04:39.342867       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.755214    4352 command_runner.go:130] ! I0501 04:04:39.342901       1 main.go:227] handling current node
	I0501 04:16:51.755214    4352 command_runner.go:130] ! I0501 04:04:39.342914       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.755248    4352 command_runner.go:130] ! I0501 04:04:39.342921       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.755248    4352 command_runner.go:130] ! I0501 04:04:39.343433       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.755248    4352 command_runner.go:130] ! I0501 04:04:39.343593       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.755248    4352 command_runner.go:130] ! I0501 04:04:49.364771       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.755248    4352 command_runner.go:130] ! I0501 04:04:49.364905       1 main.go:227] handling current node
	I0501 04:16:51.755248    4352 command_runner.go:130] ! I0501 04:04:49.364921       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.755248    4352 command_runner.go:130] ! I0501 04:04:49.364930       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.755248    4352 command_runner.go:130] ! I0501 04:04:49.365166       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.755248    4352 command_runner.go:130] ! I0501 04:04:49.365205       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.755248    4352 command_runner.go:130] ! I0501 04:04:59.379243       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.755248    4352 command_runner.go:130] ! I0501 04:04:59.379352       1 main.go:227] handling current node
	I0501 04:16:51.755358    4352 command_runner.go:130] ! I0501 04:04:59.379369       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.755358    4352 command_runner.go:130] ! I0501 04:04:59.379377       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.755401    4352 command_runner.go:130] ! I0501 04:04:59.379531       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.755401    4352 command_runner.go:130] ! I0501 04:04:59.379564       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.755401    4352 command_runner.go:130] ! I0501 04:05:09.389743       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.755454    4352 command_runner.go:130] ! I0501 04:05:09.390518       1 main.go:227] handling current node
	I0501 04:16:51.755454    4352 command_runner.go:130] ! I0501 04:05:09.390622       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.755454    4352 command_runner.go:130] ! I0501 04:05:09.390636       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.755513    4352 command_runner.go:130] ! I0501 04:05:09.390894       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.755513    4352 command_runner.go:130] ! I0501 04:05:09.391049       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.755540    4352 command_runner.go:130] ! I0501 04:05:19.400837       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.755540    4352 command_runner.go:130] ! I0501 04:05:19.401285       1 main.go:227] handling current node
	I0501 04:16:51.755571    4352 command_runner.go:130] ! I0501 04:05:19.401439       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.755571    4352 command_runner.go:130] ! I0501 04:05:19.401572       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.755571    4352 command_runner.go:130] ! I0501 04:05:19.401956       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.755571    4352 command_runner.go:130] ! I0501 04:05:19.402136       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.755625    4352 command_runner.go:130] ! I0501 04:05:29.422040       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.755625    4352 command_runner.go:130] ! I0501 04:05:29.422249       1 main.go:227] handling current node
	I0501 04:16:51.755667    4352 command_runner.go:130] ! I0501 04:05:29.422285       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.755667    4352 command_runner.go:130] ! I0501 04:05:29.422311       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.755713    4352 command_runner.go:130] ! I0501 04:05:29.422521       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.755713    4352 command_runner.go:130] ! I0501 04:05:29.422723       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.755756    4352 command_runner.go:130] ! I0501 04:05:39.429807       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.755756    4352 command_runner.go:130] ! I0501 04:05:39.429856       1 main.go:227] handling current node
	I0501 04:16:51.755756    4352 command_runner.go:130] ! I0501 04:05:39.429874       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.755756    4352 command_runner.go:130] ! I0501 04:05:39.429881       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.755811    4352 command_runner.go:130] ! I0501 04:05:39.430903       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.755811    4352 command_runner.go:130] ! I0501 04:05:39.431340       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.755854    4352 command_runner.go:130] ! I0501 04:05:49.445455       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.755854    4352 command_runner.go:130] ! I0501 04:05:49.445594       1 main.go:227] handling current node
	I0501 04:16:51.755854    4352 command_runner.go:130] ! I0501 04:05:49.445610       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.755903    4352 command_runner.go:130] ! I0501 04:05:49.445619       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.755903    4352 command_runner.go:130] ! I0501 04:05:49.445751       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.755938    4352 command_runner.go:130] ! I0501 04:05:49.445765       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:05:59.461135       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:05:59.461248       1 main.go:227] handling current node
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:05:59.461264       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:05:59.461273       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:05:59.461947       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:05:59.462094       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:06:09.469509       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:06:09.469615       1 main.go:227] handling current node
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:06:09.469636       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:06:09.469646       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:06:09.470218       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:06:09.470387       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:06:19.486501       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:06:19.486605       1 main.go:227] handling current node
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:06:19.486621       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:06:19.486629       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:06:19.486864       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:06:19.486946       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:06:29.503311       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:06:29.503476       1 main.go:227] handling current node
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:06:29.503492       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:06:29.503503       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:06:29.503633       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:06:29.503843       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:06:39.528749       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:06:39.528837       1 main.go:227] handling current node
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:06:39.528853       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:06:39.528861       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:06:39.529235       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:06:39.529373       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:06:49.535984       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:06:49.536067       1 main.go:227] handling current node
	I0501 04:16:51.756550    4352 command_runner.go:130] ! I0501 04:06:49.536082       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.756550    4352 command_runner.go:130] ! I0501 04:06:49.536092       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.756550    4352 command_runner.go:130] ! I0501 04:06:49.536689       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.756602    4352 command_runner.go:130] ! I0501 04:06:49.536802       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.756602    4352 command_runner.go:130] ! I0501 04:06:59.550480       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.756642    4352 command_runner.go:130] ! I0501 04:06:59.551072       1 main.go:227] handling current node
	I0501 04:16:51.756642    4352 command_runner.go:130] ! I0501 04:06:59.551257       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:06:59.551358       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:06:59.551696       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:06:59.551781       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:09.569460       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:09.569627       1 main.go:227] handling current node
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:09.569642       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:09.569651       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:09.570296       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:09.570434       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:19.577507       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:19.577599       1 main.go:227] handling current node
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:19.577615       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:19.577730       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:19.578102       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:19.578208       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:29.592703       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:29.592845       1 main.go:227] handling current node
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:29.592861       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:29.592869       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:29.593139       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:29.593174       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:39.602034       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:39.602064       1 main.go:227] handling current node
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:39.602077       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:39.602084       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:39.602283       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:39.602300       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:49.837563       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:49.837638       1 main.go:227] handling current node
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:49.837652       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:49.837660       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:49.837875       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:49.837955       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:59.851818       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:59.852109       1 main.go:227] handling current node
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:59.852127       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:59.852753       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:59.853129       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:59.853164       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:08:09.860338       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.757288    4352 command_runner.go:130] ! I0501 04:08:09.860453       1 main.go:227] handling current node
	I0501 04:16:51.757288    4352 command_runner.go:130] ! I0501 04:08:09.860472       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.757288    4352 command_runner.go:130] ! I0501 04:08:09.860482       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.757340    4352 command_runner.go:130] ! I0501 04:08:09.860626       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.757340    4352 command_runner.go:130] ! I0501 04:08:09.861316       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.757340    4352 command_runner.go:130] ! I0501 04:08:19.877403       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.757340    4352 command_runner.go:130] ! I0501 04:08:19.877515       1 main.go:227] handling current node
	I0501 04:16:51.757340    4352 command_runner.go:130] ! I0501 04:08:19.877530       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.757340    4352 command_runner.go:130] ! I0501 04:08:19.877538       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.757340    4352 command_runner.go:130] ! I0501 04:08:19.877838       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.757340    4352 command_runner.go:130] ! I0501 04:08:19.877874       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.757451    4352 command_runner.go:130] ! I0501 04:08:29.892899       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.757451    4352 command_runner.go:130] ! I0501 04:08:29.892926       1 main.go:227] handling current node
	I0501 04:16:51.757451    4352 command_runner.go:130] ! I0501 04:08:29.892937       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.757451    4352 command_runner.go:130] ! I0501 04:08:29.892944       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.757517    4352 command_runner.go:130] ! I0501 04:08:29.893106       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.757517    4352 command_runner.go:130] ! I0501 04:08:29.893180       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.757517    4352 command_runner.go:130] ! I0501 04:08:39.901877       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.757517    4352 command_runner.go:130] ! I0501 04:08:39.901929       1 main.go:227] handling current node
	I0501 04:16:51.757588    4352 command_runner.go:130] ! I0501 04:08:39.901943       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.757588    4352 command_runner.go:130] ! I0501 04:08:39.901951       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.757588    4352 command_runner.go:130] ! I0501 04:08:39.902578       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.757588    4352 command_runner.go:130] ! I0501 04:08:39.902678       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.757659    4352 command_runner.go:130] ! I0501 04:08:49.918941       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.757659    4352 command_runner.go:130] ! I0501 04:08:49.919115       1 main.go:227] handling current node
	I0501 04:16:51.757659    4352 command_runner.go:130] ! I0501 04:08:49.919130       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.757659    4352 command_runner.go:130] ! I0501 04:08:49.919139       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.757719    4352 command_runner.go:130] ! I0501 04:08:49.919950       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.757719    4352 command_runner.go:130] ! I0501 04:08:49.919968       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.757719    4352 command_runner.go:130] ! I0501 04:08:59.933101       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.757719    4352 command_runner.go:130] ! I0501 04:08:59.933154       1 main.go:227] handling current node
	I0501 04:16:51.757719    4352 command_runner.go:130] ! I0501 04:08:59.933648       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.757794    4352 command_runner.go:130] ! I0501 04:08:59.933667       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.757794    4352 command_runner.go:130] ! I0501 04:08:59.934094       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.757835    4352 command_runner.go:130] ! I0501 04:08:59.934127       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.757835    4352 command_runner.go:130] ! I0501 04:09:09.948569       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.757835    4352 command_runner.go:130] ! I0501 04:09:09.948615       1 main.go:227] handling current node
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:09:09.948629       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:09:09.948637       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:09:09.949057       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:09:09.949076       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:09:19.958099       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:09:19.958261       1 main.go:227] handling current node
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:09:19.958282       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:09:19.958294       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:09:19.958880       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:09:19.959055       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:09:29.975626       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:09:29.975765       1 main.go:227] handling current node
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:09:29.975790       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:09:29.975803       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:09:29.976360       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:09:29.976488       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:09:39.985296       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:09:39.985455       1 main.go:227] handling current node
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:09:39.985488       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:09:39.985497       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:09:39.986552       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:09:39.986590       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:09:49.995944       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:09:49.996021       1 main.go:227] handling current node
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:09:49.996036       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:09:49.996044       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:09:49.996649       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:09:49.996720       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:10:00.003190       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:10:00.003239       1 main.go:227] handling current node
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:10:00.003253       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:10:00.003261       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:10:00.003479       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:10:00.003516       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:10:10.023328       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:10:10.023430       1 main.go:227] handling current node
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:10:10.023445       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:10:10.023460       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.758456    4352 command_runner.go:130] ! I0501 04:10:10.023613       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.758456    4352 command_runner.go:130] ! I0501 04:10:10.023647       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.758456    4352 command_runner.go:130] ! I0501 04:10:20.030526       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.758456    4352 command_runner.go:130] ! I0501 04:10:20.030616       1 main.go:227] handling current node
	I0501 04:16:51.758456    4352 command_runner.go:130] ! I0501 04:10:20.030632       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.758456    4352 command_runner.go:130] ! I0501 04:10:20.030641       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.758456    4352 command_runner.go:130] ! I0501 04:10:20.030856       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.758456    4352 command_runner.go:130] ! I0501 04:10:20.030980       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.758456    4352 command_runner.go:130] ! I0501 04:10:30.038164       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.758456    4352 command_runner.go:130] ! I0501 04:10:30.038263       1 main.go:227] handling current node
	I0501 04:16:51.758456    4352 command_runner.go:130] ! I0501 04:10:30.038278       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.758638    4352 command_runner.go:130] ! I0501 04:10:30.038287       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.758638    4352 command_runner.go:130] ! I0501 04:10:30.038931       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.758638    4352 command_runner.go:130] ! I0501 04:10:30.039072       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.758638    4352 command_runner.go:130] ! I0501 04:10:40.053866       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.758638    4352 command_runner.go:130] ! I0501 04:10:40.053915       1 main.go:227] handling current node
	I0501 04:16:51.758638    4352 command_runner.go:130] ! I0501 04:10:40.053929       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.758724    4352 command_runner.go:130] ! I0501 04:10:40.053936       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.758724    4352 command_runner.go:130] ! I0501 04:10:40.054259       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.758724    4352 command_runner.go:130] ! I0501 04:10:40.054295       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.758724    4352 command_runner.go:130] ! I0501 04:10:50.066490       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.758724    4352 command_runner.go:130] ! I0501 04:10:50.066542       1 main.go:227] handling current node
	I0501 04:16:51.758724    4352 command_runner.go:130] ! I0501 04:10:50.066560       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.758724    4352 command_runner.go:130] ! I0501 04:10:50.066567       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.758724    4352 command_runner.go:130] ! I0501 04:10:50.067066       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.758807    4352 command_runner.go:130] ! I0501 04:10:50.067210       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.758807    4352 command_runner.go:130] ! I0501 04:11:00.075901       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.758807    4352 command_runner.go:130] ! I0501 04:11:00.076052       1 main.go:227] handling current node
	I0501 04:16:51.758807    4352 command_runner.go:130] ! I0501 04:11:00.076069       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.758807    4352 command_runner.go:130] ! I0501 04:11:00.076078       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.758807    4352 command_runner.go:130] ! I0501 04:11:10.087907       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.758807    4352 command_runner.go:130] ! I0501 04:11:10.088124       1 main.go:227] handling current node
	I0501 04:16:51.758807    4352 command_runner.go:130] ! I0501 04:11:10.088140       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.758807    4352 command_runner.go:130] ! I0501 04:11:10.088148       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.758807    4352 command_runner.go:130] ! I0501 04:11:10.088875       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:51.758941    4352 command_runner.go:130] ! I0501 04:11:10.088954       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:51.758941    4352 command_runner.go:130] ! I0501 04:11:10.089178       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.28.223.145 Flags: [] Table: 0} 
	I0501 04:16:51.758941    4352 command_runner.go:130] ! I0501 04:11:20.103399       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.758941    4352 command_runner.go:130] ! I0501 04:11:20.103511       1 main.go:227] handling current node
	I0501 04:16:51.758941    4352 command_runner.go:130] ! I0501 04:11:20.103528       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.759029    4352 command_runner.go:130] ! I0501 04:11:20.103538       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.759029    4352 command_runner.go:130] ! I0501 04:11:20.103879       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:51.759029    4352 command_runner.go:130] ! I0501 04:11:20.103916       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:51.759067    4352 command_runner.go:130] ! I0501 04:11:30.114473       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:11:30.115083       1 main.go:227] handling current node
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:11:30.115256       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:11:30.115463       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:11:30.116474       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:11:30.116611       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:11:40.124324       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:11:40.124371       1 main.go:227] handling current node
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:11:40.124384       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:11:40.124392       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:11:40.124558       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:11:40.124570       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:11:50.138059       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:11:50.138102       1 main.go:227] handling current node
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:11:50.138116       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:11:50.138123       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:11:50.138826       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:11:50.138936       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:12:00.155704       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:12:00.155799       1 main.go:227] handling current node
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:12:00.155823       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:12:00.155832       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:12:00.156502       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:12:00.156549       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:12:10.164706       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:12:10.164754       1 main.go:227] handling current node
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:12:10.164767       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:12:10.164774       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:12:10.164887       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:12:10.165094       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:12:20.178957       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:12:20.179142       1 main.go:227] handling current node
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:12:20.179159       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:12:20.179178       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:12:20.179694       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:51.759680    4352 command_runner.go:130] ! I0501 04:12:20.179871       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:51.759680    4352 command_runner.go:130] ! I0501 04:12:30.195829       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.759680    4352 command_runner.go:130] ! I0501 04:12:30.196251       1 main.go:227] handling current node
	I0501 04:16:51.759680    4352 command_runner.go:130] ! I0501 04:12:30.196390       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.759680    4352 command_runner.go:130] ! I0501 04:12:30.196494       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.759680    4352 command_runner.go:130] ! I0501 04:12:30.197097       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:51.759680    4352 command_runner.go:130] ! I0501 04:12:30.197115       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:51.759680    4352 command_runner.go:130] ! I0501 04:12:40.209828       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.759680    4352 command_runner.go:130] ! I0501 04:12:40.210095       1 main.go:227] handling current node
	I0501 04:16:51.759680    4352 command_runner.go:130] ! I0501 04:12:40.210203       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.759680    4352 command_runner.go:130] ! I0501 04:12:40.210235       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.759849    4352 command_runner.go:130] ! I0501 04:12:40.210464       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:51.759849    4352 command_runner.go:130] ! I0501 04:12:40.210571       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:51.759849    4352 command_runner.go:130] ! I0501 04:12:50.223457       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.759849    4352 command_runner.go:130] ! I0501 04:12:50.224132       1 main.go:227] handling current node
	I0501 04:16:51.759849    4352 command_runner.go:130] ! I0501 04:12:50.224156       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.759849    4352 command_runner.go:130] ! I0501 04:12:50.224167       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.759849    4352 command_runner.go:130] ! I0501 04:12:50.224602       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:51.759849    4352 command_runner.go:130] ! I0501 04:12:50.224704       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:51.759849    4352 command_runner.go:130] ! I0501 04:13:00.241709       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.759849    4352 command_runner.go:130] ! I0501 04:13:00.241841       1 main.go:227] handling current node
	I0501 04:16:51.759849    4352 command_runner.go:130] ! I0501 04:13:00.242114       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.759849    4352 command_runner.go:130] ! I0501 04:13:00.242393       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.759992    4352 command_runner.go:130] ! I0501 04:13:00.242840       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:51.759992    4352 command_runner.go:130] ! I0501 04:13:00.242886       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
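	The kindnet entries above repeat one fixed reconciliation cycle: roughly every 10 seconds the daemon walks the cluster's node list, skips route programming for the node it runs on ("handling current node"), and for each remote node maps that node's pod CIDR to a route via the node's internal IP. A route is only (re)programmed when something changes, which the excerpt shows exactly once: at 04:11:10, multinode-289800-m03 re-registers with IP 172.28.223.145 and CIDR 10.244.3.0/24, and routes.go logs "Adding route {... Dst: 10.244.3.0/24 ... Gw: 172.28.223.145 ...}". The following is a minimal, self-contained Go sketch of that control flow only; the Node type, the hard-coded node list, and the print-only addRoute are hypothetical stand-ins for the Kubernetes informer and netlink calls the real kindnet uses, so treat it as an illustration of what the log records, not kindnet's actual implementation.

	package main

	import (
		"fmt"
		"time"
	)

	// Node carries the two fields the kindnet log lines expose per cluster
	// member: its internal IP and its assigned pod CIDR. (Hypothetical type
	// for this sketch; the real daemon watches corev1.Node objects.)
	type Node struct {
		Name string
		IP   string
		CIDR string
	}

	// addRoute stands in for the netlink call behind the
	// "Adding route {... Dst: ... Gw: ...}" entry; here it only prints.
	func addRoute(dst, gw string) {
		fmt.Printf("Adding route {Dst: %s Gw: %s}\n", dst, gw)
	}

	// reconcile walks the node list once, as each ~10 s log cycle does: the
	// local node needs no route, and every remote pod CIDR is routed via that
	// node's IP, but only (re)programmed when it is new or has changed.
	func reconcile(nodes []Node, self string, programmed map[string]string) {
		for _, n := range nodes {
			fmt.Printf("Handling node with IPs: map[%s:{}]\n", n.IP)
			if n.Name == self {
				fmt.Println("handling current node")
				continue
			}
			fmt.Printf("Node %s has CIDR [%s]\n", n.Name, n.CIDR)
			if programmed[n.CIDR] != n.IP {
				addRoute(n.CIDR, n.IP)
				programmed[n.CIDR] = n.IP
			}
		}
	}

	func main() {
		nodes := []Node{
			{Name: "multinode-289800", IP: "172.28.209.152", CIDR: "10.244.0.0/24"},
			{Name: "multinode-289800-m02", IP: "172.28.219.162", CIDR: "10.244.1.0/24"},
			{Name: "multinode-289800-m03", IP: "172.28.217.21", CIDR: "10.244.2.0/24"},
		}
		programmed := map[string]string{}
		for range time.Tick(10 * time.Second) { // same ~10 s cadence as the log
			reconcile(nodes, "multinode-289800", programmed)
		}
	}

	Run steadily, this produces the same six-line cycle the log shows; updating an entry in the node list (as happened to multinode-289800-m03 at 04:11:10) is the only event that triggers a new "Adding route" line, which is why the hundreds of intervening cycles are otherwise identical apart from their timestamps.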
	I0501 04:16:51.779907    4352 logs.go:123] Gathering logs for dmesg ...
	I0501 04:16:51.779907    4352 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 04:16:51.808858    4352 command_runner.go:130] > [May 1 04:13] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0501 04:16:51.808950    4352 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0501 04:16:51.808950    4352 command_runner.go:130] > [  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0501 04:16:51.808990    4352 command_runner.go:130] > [  +0.128235] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0501 04:16:51.808990    4352 command_runner.go:130] > [  +0.023886] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0501 04:16:51.808990    4352 command_runner.go:130] > [  +0.000005] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0501 04:16:51.808990    4352 command_runner.go:130] > [  +0.000001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0501 04:16:51.808990    4352 command_runner.go:130] > [  +0.057986] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0501 04:16:51.808990    4352 command_runner.go:130] > [  +0.022012] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0501 04:16:51.808990    4352 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0501 04:16:51.808990    4352 command_runner.go:130] > [  +5.683380] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0501 04:16:51.809132    4352 command_runner.go:130] > [May 1 04:14] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0501 04:16:51.809168    4352 command_runner.go:130] > [  +1.282885] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0501 04:16:51.809168    4352 command_runner.go:130] > [  +7.215175] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0501 04:16:51.809168    4352 command_runner.go:130] > [  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0501 04:16:51.809168    4352 command_runner.go:130] > [  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	I0501 04:16:51.809225    4352 command_runner.go:130] > [ +49.815364] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	I0501 04:16:51.809225    4352 command_runner.go:130] > [  +0.200985] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	I0501 04:16:51.809259    4352 command_runner.go:130] > [May 1 04:15] systemd-fstab-generator[972]: Ignoring "noauto" option for root device
	I0501 04:16:51.809306    4352 command_runner.go:130] > [  +0.127967] kauditd_printk_skb: 73 callbacks suppressed
	I0501 04:16:51.809306    4352 command_runner.go:130] > [  +0.582263] systemd-fstab-generator[1012]: Ignoring "noauto" option for root device
	I0501 04:16:51.809340    4352 command_runner.go:130] > [  +0.225161] systemd-fstab-generator[1023]: Ignoring "noauto" option for root device
	I0501 04:16:51.809340    4352 command_runner.go:130] > [  +0.250911] systemd-fstab-generator[1037]: Ignoring "noauto" option for root device
	I0501 04:16:51.809387    4352 command_runner.go:130] > [  +3.012463] systemd-fstab-generator[1226]: Ignoring "noauto" option for root device
	I0501 04:16:51.809387    4352 command_runner.go:130] > [  +0.224116] systemd-fstab-generator[1238]: Ignoring "noauto" option for root device
	I0501 04:16:51.809387    4352 command_runner.go:130] > [  +0.208959] systemd-fstab-generator[1250]: Ignoring "noauto" option for root device
	I0501 04:16:51.809421    4352 command_runner.go:130] > [  +0.295566] systemd-fstab-generator[1265]: Ignoring "noauto" option for root device
	I0501 04:16:51.809421    4352 command_runner.go:130] > [  +0.942002] systemd-fstab-generator[1376]: Ignoring "noauto" option for root device
	I0501 04:16:51.809421    4352 command_runner.go:130] > [  +0.104482] kauditd_printk_skb: 205 callbacks suppressed
	I0501 04:16:51.809473    4352 command_runner.go:130] > [  +4.196160] systemd-fstab-generator[1518]: Ignoring "noauto" option for root device
	I0501 04:16:51.809473    4352 command_runner.go:130] > [  +1.305789] kauditd_printk_skb: 44 callbacks suppressed
	I0501 04:16:51.809511    4352 command_runner.go:130] > [  +5.930267] kauditd_printk_skb: 30 callbacks suppressed
	I0501 04:16:51.809511    4352 command_runner.go:130] > [  +4.234940] systemd-fstab-generator[2337]: Ignoring "noauto" option for root device
	I0501 04:16:51.809511    4352 command_runner.go:130] > [  +7.700271] kauditd_printk_skb: 70 callbacks suppressed
	I0501 04:16:51.812577    4352 logs.go:123] Gathering logs for describe nodes ...
	I0501 04:16:51.813154    4352 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0501 04:16:52.084833    4352 command_runner.go:130] > Name:               multinode-289800
	I0501 04:16:52.084833    4352 command_runner.go:130] > Roles:              control-plane
	I0501 04:16:52.084833    4352 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0501 04:16:52.084833    4352 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0501 04:16:52.084833    4352 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0501 04:16:52.084833    4352 command_runner.go:130] >                     kubernetes.io/hostname=multinode-289800
	I0501 04:16:52.084833    4352 command_runner.go:130] >                     kubernetes.io/os=linux
	I0501 04:16:52.084833    4352 command_runner.go:130] >                     minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	I0501 04:16:52.084833    4352 command_runner.go:130] >                     minikube.k8s.io/name=multinode-289800
	I0501 04:16:52.084833    4352 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0501 04:16:52.084833    4352 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_05_01T03_52_17_0700
	I0501 04:16:52.084833    4352 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0
	I0501 04:16:52.084833    4352 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0501 04:16:52.084833    4352 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0501 04:16:52.084833    4352 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0501 04:16:52.084833    4352 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0501 04:16:52.084833    4352 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0501 04:16:52.084833    4352 command_runner.go:130] > CreationTimestamp:  Wed, 01 May 2024 03:52:12 +0000
	I0501 04:16:52.084833    4352 command_runner.go:130] > Taints:             <none>
	I0501 04:16:52.084833    4352 command_runner.go:130] > Unschedulable:      false
	I0501 04:16:52.084833    4352 command_runner.go:130] > Lease:
	I0501 04:16:52.084833    4352 command_runner.go:130] >   HolderIdentity:  multinode-289800
	I0501 04:16:52.084833    4352 command_runner.go:130] >   AcquireTime:     <unset>
	I0501 04:16:52.084833    4352 command_runner.go:130] >   RenewTime:       Wed, 01 May 2024 04:16:43 +0000
	I0501 04:16:52.084833    4352 command_runner.go:130] > Conditions:
	I0501 04:16:52.084833    4352 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0501 04:16:52.084833    4352 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0501 04:16:52.084833    4352 command_runner.go:130] >   MemoryPressure   False   Wed, 01 May 2024 04:16:16 +0000   Wed, 01 May 2024 03:52:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0501 04:16:52.084833    4352 command_runner.go:130] >   DiskPressure     False   Wed, 01 May 2024 04:16:16 +0000   Wed, 01 May 2024 03:52:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0501 04:16:52.084833    4352 command_runner.go:130] >   PIDPressure      False   Wed, 01 May 2024 04:16:16 +0000   Wed, 01 May 2024 03:52:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0501 04:16:52.084833    4352 command_runner.go:130] >   Ready            True    Wed, 01 May 2024 04:16:16 +0000   Wed, 01 May 2024 04:16:16 +0000   KubeletReady                 kubelet is posting ready status
	I0501 04:16:52.084833    4352 command_runner.go:130] > Addresses:
	I0501 04:16:52.084833    4352 command_runner.go:130] >   InternalIP:  172.28.209.199
	I0501 04:16:52.084833    4352 command_runner.go:130] >   Hostname:    multinode-289800
	I0501 04:16:52.084833    4352 command_runner.go:130] > Capacity:
	I0501 04:16:52.084833    4352 command_runner.go:130] >   cpu:                2
	I0501 04:16:52.084833    4352 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0501 04:16:52.084833    4352 command_runner.go:130] >   hugepages-2Mi:      0
	I0501 04:16:52.084833    4352 command_runner.go:130] >   memory:             2164264Ki
	I0501 04:16:52.084833    4352 command_runner.go:130] >   pods:               110
	I0501 04:16:52.084833    4352 command_runner.go:130] > Allocatable:
	I0501 04:16:52.084833    4352 command_runner.go:130] >   cpu:                2
	I0501 04:16:52.084833    4352 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0501 04:16:52.084833    4352 command_runner.go:130] >   hugepages-2Mi:      0
	I0501 04:16:52.084833    4352 command_runner.go:130] >   memory:             2164264Ki
	I0501 04:16:52.085420    4352 command_runner.go:130] >   pods:               110
	I0501 04:16:52.085420    4352 command_runner.go:130] > System Info:
	I0501 04:16:52.085420    4352 command_runner.go:130] >   Machine ID:                 f135d6c1a75448b6b1c169fdf59297ca
	I0501 04:16:52.085420    4352 command_runner.go:130] >   System UUID:                3951d3b5-ddd4-174a-8cfe-7f86ac2b780b
	I0501 04:16:52.085474    4352 command_runner.go:130] >   Boot ID:                    e7d6b770-0c88-4d74-8b75-d55dec0d45be
	I0501 04:16:52.085474    4352 command_runner.go:130] >   Kernel Version:             5.10.207
	I0501 04:16:52.085474    4352 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0501 04:16:52.085474    4352 command_runner.go:130] >   Operating System:           linux
	I0501 04:16:52.085474    4352 command_runner.go:130] >   Architecture:               amd64
	I0501 04:16:52.085474    4352 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0501 04:16:52.085474    4352 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0501 04:16:52.085543    4352 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0501 04:16:52.085543    4352 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0501 04:16:52.085543    4352 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0501 04:16:52.085581    4352 command_runner.go:130] > Non-terminated Pods:          (10 in total)
	I0501 04:16:52.085581    4352 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0501 04:16:52.085636    4352 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0501 04:16:52.085636    4352 command_runner.go:130] >   default                     busybox-fc5497c4f-cc6mk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0501 04:16:52.085670    4352 command_runner.go:130] >   kube-system                 coredns-7db6d8ff4d-8w9hq                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	I0501 04:16:52.085702    4352 command_runner.go:130] >   kube-system                 coredns-7db6d8ff4d-x9zrw                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	I0501 04:16:52.085702    4352 command_runner.go:130] >   kube-system                 etcd-multinode-289800                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         70s
	I0501 04:16:52.085702    4352 command_runner.go:130] >   kube-system                 kindnet-vcxkr                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	I0501 04:16:52.085702    4352 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-289800             250m (12%)    0 (0%)      0 (0%)           0 (0%)         70s
	I0501 04:16:52.085702    4352 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-289800    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I0501 04:16:52.085702    4352 command_runner.go:130] >   kube-system                 kube-proxy-bp9zx                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I0501 04:16:52.085702    4352 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-289800             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I0501 04:16:52.085702    4352 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I0501 04:16:52.085702    4352 command_runner.go:130] > Allocated resources:
	I0501 04:16:52.085702    4352 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0501 04:16:52.085702    4352 command_runner.go:130] >   Resource           Requests     Limits
	I0501 04:16:52.085702    4352 command_runner.go:130] >   --------           --------     ------
	I0501 04:16:52.085702    4352 command_runner.go:130] >   cpu                950m (47%)   100m (5%)
	I0501 04:16:52.085702    4352 command_runner.go:130] >   memory             290Mi (13%)  390Mi (18%)
	I0501 04:16:52.085702    4352 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0501 04:16:52.085702    4352 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0501 04:16:52.085702    4352 command_runner.go:130] > Events:
	I0501 04:16:52.085702    4352 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0501 04:16:52.085702    4352 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0501 04:16:52.085702    4352 command_runner.go:130] >   Normal  Starting                 24m                kube-proxy       
	I0501 04:16:52.085702    4352 command_runner.go:130] >   Normal  Starting                 66s                kube-proxy       
	I0501 04:16:52.085702    4352 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m                kubelet          Node multinode-289800 status is now: NodeHasSufficientMemory
	I0501 04:16:52.085702    4352 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m                kubelet          Node multinode-289800 status is now: NodeHasSufficientMemory
	I0501 04:16:52.085702    4352 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m                kubelet          Node multinode-289800 status is now: NodeHasNoDiskPressure
	I0501 04:16:52.085702    4352 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m                kubelet          Node multinode-289800 status is now: NodeHasSufficientPID
	I0501 04:16:52.085702    4352 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0501 04:16:52.085702    4352 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I0501 04:16:52.085702    4352 command_runner.go:130] >   Normal  RegisteredNode           24m                node-controller  Node multinode-289800 event: Registered Node multinode-289800 in Controller
	I0501 04:16:52.085702    4352 command_runner.go:130] >   Normal  NodeReady                24m                kubelet          Node multinode-289800 status is now: NodeReady
	I0501 04:16:52.086256    4352 command_runner.go:130] >   Normal  Starting                 76s                kubelet          Starting kubelet.
	I0501 04:16:52.086305    4352 command_runner.go:130] >   Normal  NodeHasSufficientMemory  75s (x8 over 76s)  kubelet          Node multinode-289800 status is now: NodeHasSufficientMemory
	I0501 04:16:52.086305    4352 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    75s (x8 over 76s)  kubelet          Node multinode-289800 status is now: NodeHasNoDiskPressure
	I0501 04:16:52.086305    4352 command_runner.go:130] >   Normal  NodeHasSufficientPID     75s (x7 over 76s)  kubelet          Node multinode-289800 status is now: NodeHasSufficientPID
	I0501 04:16:52.086305    4352 command_runner.go:130] >   Normal  NodeAllocatableEnforced  75s                kubelet          Updated Node Allocatable limit across pods
	I0501 04:16:52.086305    4352 command_runner.go:130] >   Normal  RegisteredNode           57s                node-controller  Node multinode-289800 event: Registered Node multinode-289800 in Controller
	I0501 04:16:52.086305    4352 command_runner.go:130] > Name:               multinode-289800-m02
	I0501 04:16:52.086398    4352 command_runner.go:130] > Roles:              <none>
	I0501 04:16:52.086398    4352 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0501 04:16:52.086398    4352 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0501 04:16:52.086398    4352 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0501 04:16:52.086443    4352 command_runner.go:130] >                     kubernetes.io/hostname=multinode-289800-m02
	I0501 04:16:52.086443    4352 command_runner.go:130] >                     kubernetes.io/os=linux
	I0501 04:16:52.086443    4352 command_runner.go:130] >                     minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	I0501 04:16:52.086443    4352 command_runner.go:130] >                     minikube.k8s.io/name=multinode-289800
	I0501 04:16:52.086502    4352 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0501 04:16:52.086502    4352 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_05_01T03_55_27_0700
	I0501 04:16:52.086502    4352 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0
	I0501 04:16:52.086545    4352 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0501 04:16:52.086586    4352 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0501 04:16:52.086586    4352 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0501 04:16:52.086628    4352 command_runner.go:130] > CreationTimestamp:  Wed, 01 May 2024 03:55:27 +0000
	I0501 04:16:52.086628    4352 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0501 04:16:52.086628    4352 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0501 04:16:52.086701    4352 command_runner.go:130] > Unschedulable:      false
	I0501 04:16:52.086701    4352 command_runner.go:130] > Lease:
	I0501 04:16:52.086701    4352 command_runner.go:130] >   HolderIdentity:  multinode-289800-m02
	I0501 04:16:52.086701    4352 command_runner.go:130] >   AcquireTime:     <unset>
	I0501 04:16:52.086701    4352 command_runner.go:130] >   RenewTime:       Wed, 01 May 2024 04:12:29 +0000
	I0501 04:16:52.086701    4352 command_runner.go:130] > Conditions:
	I0501 04:16:52.086748    4352 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0501 04:16:52.086785    4352 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0501 04:16:52.086817    4352 command_runner.go:130] >   MemoryPressure   Unknown   Wed, 01 May 2024 04:11:48 +0000   Wed, 01 May 2024 04:16:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0501 04:16:52.086817    4352 command_runner.go:130] >   DiskPressure     Unknown   Wed, 01 May 2024 04:11:48 +0000   Wed, 01 May 2024 04:16:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0501 04:16:52.086817    4352 command_runner.go:130] >   PIDPressure      Unknown   Wed, 01 May 2024 04:11:48 +0000   Wed, 01 May 2024 04:16:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0501 04:16:52.086881    4352 command_runner.go:130] >   Ready            Unknown   Wed, 01 May 2024 04:11:48 +0000   Wed, 01 May 2024 04:16:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0501 04:16:52.086881    4352 command_runner.go:130] > Addresses:
	I0501 04:16:52.086881    4352 command_runner.go:130] >   InternalIP:  172.28.219.162
	I0501 04:16:52.086881    4352 command_runner.go:130] >   Hostname:    multinode-289800-m02
	I0501 04:16:52.086923    4352 command_runner.go:130] > Capacity:
	I0501 04:16:52.086923    4352 command_runner.go:130] >   cpu:                2
	I0501 04:16:52.086923    4352 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0501 04:16:52.086923    4352 command_runner.go:130] >   hugepages-2Mi:      0
	I0501 04:16:52.086923    4352 command_runner.go:130] >   memory:             2164264Ki
	I0501 04:16:52.086973    4352 command_runner.go:130] >   pods:               110
	I0501 04:16:52.086973    4352 command_runner.go:130] > Allocatable:
	I0501 04:16:52.086973    4352 command_runner.go:130] >   cpu:                2
	I0501 04:16:52.086973    4352 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0501 04:16:52.087016    4352 command_runner.go:130] >   hugepages-2Mi:      0
	I0501 04:16:52.087016    4352 command_runner.go:130] >   memory:             2164264Ki
	I0501 04:16:52.087016    4352 command_runner.go:130] >   pods:               110
	I0501 04:16:52.087016    4352 command_runner.go:130] > System Info:
	I0501 04:16:52.087016    4352 command_runner.go:130] >   Machine ID:                 076f7b95819747b9b94c7306ec3a1144
	I0501 04:16:52.087016    4352 command_runner.go:130] >   System UUID:                a38b9d92-b32b-ca41-91ed-de4d374d0e70
	I0501 04:16:52.087016    4352 command_runner.go:130] >   Boot ID:                    c2ea27f4-2800-46b2-ab1f-c82bf0989c34
	I0501 04:16:52.087016    4352 command_runner.go:130] >   Kernel Version:             5.10.207
	I0501 04:16:52.087016    4352 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0501 04:16:52.087016    4352 command_runner.go:130] >   Operating System:           linux
	I0501 04:16:52.087016    4352 command_runner.go:130] >   Architecture:               amd64
	I0501 04:16:52.087016    4352 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0501 04:16:52.087016    4352 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0501 04:16:52.087016    4352 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0501 04:16:52.087551    4352 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0501 04:16:52.087551    4352 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0501 04:16:52.087551    4352 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0501 04:16:52.087597    4352 command_runner.go:130] >   Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0501 04:16:52.087597    4352 command_runner.go:130] >   ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	I0501 04:16:52.087671    4352 command_runner.go:130] >   default                     busybox-fc5497c4f-tbxxx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0501 04:16:52.087671    4352 command_runner.go:130] >   kube-system                 kindnet-gzz7p              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	I0501 04:16:52.087708    4352 command_runner.go:130] >   kube-system                 kube-proxy-rlzp8           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0501 04:16:52.087708    4352 command_runner.go:130] > Allocated resources:
	I0501 04:16:52.087750    4352 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0501 04:16:52.087750    4352 command_runner.go:130] >   Resource           Requests   Limits
	I0501 04:16:52.087750    4352 command_runner.go:130] >   --------           --------   ------
	I0501 04:16:52.087786    4352 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0501 04:16:52.087786    4352 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0501 04:16:52.087786    4352 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0501 04:16:52.087827    4352 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0501 04:16:52.087827    4352 command_runner.go:130] > Events:
	I0501 04:16:52.087827    4352 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0501 04:16:52.087827    4352 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0501 04:16:52.087827    4352 command_runner.go:130] >   Normal  Starting                 21m                kube-proxy       
	I0501 04:16:52.087883    4352 command_runner.go:130] >   Normal  NodeHasSufficientMemory  21m (x2 over 21m)  kubelet          Node multinode-289800-m02 status is now: NodeHasSufficientMemory
	I0501 04:16:52.087883    4352 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    21m (x2 over 21m)  kubelet          Node multinode-289800-m02 status is now: NodeHasNoDiskPressure
	I0501 04:16:52.087931    4352 command_runner.go:130] >   Normal  NodeHasSufficientPID     21m (x2 over 21m)  kubelet          Node multinode-289800-m02 status is now: NodeHasSufficientPID
	I0501 04:16:52.087931    4352 command_runner.go:130] >   Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	I0501 04:16:52.087931    4352 command_runner.go:130] >   Normal  RegisteredNode           21m                node-controller  Node multinode-289800-m02 event: Registered Node multinode-289800-m02 in Controller
	I0501 04:16:52.087988    4352 command_runner.go:130] >   Normal  NodeReady                21m                kubelet          Node multinode-289800-m02 status is now: NodeReady
	I0501 04:16:52.087988    4352 command_runner.go:130] >   Normal  RegisteredNode           57s                node-controller  Node multinode-289800-m02 event: Registered Node multinode-289800-m02 in Controller
	I0501 04:16:52.088030    4352 command_runner.go:130] >   Normal  NodeNotReady             17s                node-controller  Node multinode-289800-m02 status is now: NodeNotReady
	I0501 04:16:52.088030    4352 command_runner.go:130] > Name:               multinode-289800-m03
	I0501 04:16:52.088030    4352 command_runner.go:130] > Roles:              <none>
	I0501 04:16:52.088084    4352 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0501 04:16:52.088084    4352 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0501 04:16:52.088084    4352 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0501 04:16:52.088141    4352 command_runner.go:130] >                     kubernetes.io/hostname=multinode-289800-m03
	I0501 04:16:52.088141    4352 command_runner.go:130] >                     kubernetes.io/os=linux
	I0501 04:16:52.088141    4352 command_runner.go:130] >                     minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	I0501 04:16:52.088141    4352 command_runner.go:130] >                     minikube.k8s.io/name=multinode-289800
	I0501 04:16:52.088193    4352 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0501 04:16:52.088193    4352 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_05_01T04_11_04_0700
	I0501 04:16:52.088234    4352 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0
	I0501 04:16:52.088234    4352 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0501 04:16:52.088274    4352 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0501 04:16:52.088274    4352 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0501 04:16:52.088274    4352 command_runner.go:130] > CreationTimestamp:  Wed, 01 May 2024 04:11:04 +0000
	I0501 04:16:52.088314    4352 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0501 04:16:52.088314    4352 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0501 04:16:52.088314    4352 command_runner.go:130] > Unschedulable:      false
	I0501 04:16:52.088314    4352 command_runner.go:130] > Lease:
	I0501 04:16:52.088365    4352 command_runner.go:130] >   HolderIdentity:  multinode-289800-m03
	I0501 04:16:52.088365    4352 command_runner.go:130] >   AcquireTime:     <unset>
	I0501 04:16:52.088365    4352 command_runner.go:130] >   RenewTime:       Wed, 01 May 2024 04:12:05 +0000
	I0501 04:16:52.088365    4352 command_runner.go:130] > Conditions:
	I0501 04:16:52.088406    4352 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0501 04:16:52.088406    4352 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0501 04:16:52.088446    4352 command_runner.go:130] >   MemoryPressure   Unknown   Wed, 01 May 2024 04:11:11 +0000   Wed, 01 May 2024 04:12:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0501 04:16:52.088486    4352 command_runner.go:130] >   DiskPressure     Unknown   Wed, 01 May 2024 04:11:11 +0000   Wed, 01 May 2024 04:12:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0501 04:16:52.088486    4352 command_runner.go:130] >   PIDPressure      Unknown   Wed, 01 May 2024 04:11:11 +0000   Wed, 01 May 2024 04:12:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0501 04:16:52.088486    4352 command_runner.go:130] >   Ready            Unknown   Wed, 01 May 2024 04:11:11 +0000   Wed, 01 May 2024 04:12:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0501 04:16:52.088486    4352 command_runner.go:130] > Addresses:
	I0501 04:16:52.088538    4352 command_runner.go:130] >   InternalIP:  172.28.223.145
	I0501 04:16:52.088538    4352 command_runner.go:130] >   Hostname:    multinode-289800-m03
	I0501 04:16:52.088538    4352 command_runner.go:130] > Capacity:
	I0501 04:16:52.088579    4352 command_runner.go:130] >   cpu:                2
	I0501 04:16:52.088579    4352 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0501 04:16:52.088579    4352 command_runner.go:130] >   hugepages-2Mi:      0
	I0501 04:16:52.088579    4352 command_runner.go:130] >   memory:             2164264Ki
	I0501 04:16:52.088579    4352 command_runner.go:130] >   pods:               110
	I0501 04:16:52.088620    4352 command_runner.go:130] > Allocatable:
	I0501 04:16:52.088620    4352 command_runner.go:130] >   cpu:                2
	I0501 04:16:52.088620    4352 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0501 04:16:52.088661    4352 command_runner.go:130] >   hugepages-2Mi:      0
	I0501 04:16:52.088661    4352 command_runner.go:130] >   memory:             2164264Ki
	I0501 04:16:52.088661    4352 command_runner.go:130] >   pods:               110
	I0501 04:16:52.088712    4352 command_runner.go:130] > System Info:
	I0501 04:16:52.088712    4352 command_runner.go:130] >   Machine ID:                 7516764892cf41608a001e00e0cc7bb8
	I0501 04:16:52.088712    4352 command_runner.go:130] >   System UUID:                dc77ee49-027d-ec48-b8b1-154ba9e0a06a
	I0501 04:16:52.088753    4352 command_runner.go:130] >   Boot ID:                    bc9f9fd7-7b85-42f6-abac-952a5e1b37b8
	I0501 04:16:52.088753    4352 command_runner.go:130] >   Kernel Version:             5.10.207
	I0501 04:16:52.088793    4352 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0501 04:16:52.088793    4352 command_runner.go:130] >   Operating System:           linux
	I0501 04:16:52.088793    4352 command_runner.go:130] >   Architecture:               amd64
	I0501 04:16:52.088833    4352 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0501 04:16:52.088833    4352 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0501 04:16:52.088833    4352 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0501 04:16:52.088833    4352 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0501 04:16:52.088833    4352 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0501 04:16:52.088902    4352 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0501 04:16:52.088944    4352 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0501 04:16:52.088944    4352 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0501 04:16:52.088986    4352 command_runner.go:130] >   kube-system                 kindnet-4m5vg       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I0501 04:16:52.088986    4352 command_runner.go:130] >   kube-system                 kube-proxy-g8mbm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I0501 04:16:52.088986    4352 command_runner.go:130] > Allocated resources:
	I0501 04:16:52.089028    4352 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0501 04:16:52.089028    4352 command_runner.go:130] >   Resource           Requests   Limits
	I0501 04:16:52.089028    4352 command_runner.go:130] >   --------           --------   ------
	I0501 04:16:52.089028    4352 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0501 04:16:52.089028    4352 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0501 04:16:52.089081    4352 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0501 04:16:52.089081    4352 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0501 04:16:52.089081    4352 command_runner.go:130] > Events:
	I0501 04:16:52.089081    4352 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0501 04:16:52.089159    4352 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0501 04:16:52.089193    4352 command_runner.go:130] >   Normal  Starting                 16m                    kube-proxy       
	I0501 04:16:52.089193    4352 command_runner.go:130] >   Normal  Starting                 5m44s                  kube-proxy       
	I0501 04:16:52.089252    4352 command_runner.go:130] >   Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	I0501 04:16:52.089252    4352 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x2 over 16m)      kubelet          Node multinode-289800-m03 status is now: NodeHasSufficientMemory
	I0501 04:16:52.089285    4352 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x2 over 16m)      kubelet          Node multinode-289800-m03 status is now: NodeHasNoDiskPressure
	I0501 04:16:52.089315    4352 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x2 over 16m)      kubelet          Node multinode-289800-m03 status is now: NodeHasSufficientPID
	I0501 04:16:52.089315    4352 command_runner.go:130] >   Normal  NodeReady                16m                    kubelet          Node multinode-289800-m03 status is now: NodeReady
	I0501 04:16:52.089366    4352 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m48s (x2 over 5m48s)  kubelet          Node multinode-289800-m03 status is now: NodeHasSufficientMemory
	I0501 04:16:52.089366    4352 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m48s (x2 over 5m48s)  kubelet          Node multinode-289800-m03 status is now: NodeHasNoDiskPressure
	I0501 04:16:52.089406    4352 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m48s (x2 over 5m48s)  kubelet          Node multinode-289800-m03 status is now: NodeHasSufficientPID
	I0501 04:16:52.089406    4352 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m48s                  kubelet          Updated Node Allocatable limit across pods
	I0501 04:16:52.089457    4352 command_runner.go:130] >   Normal  RegisteredNode           5m43s                  node-controller  Node multinode-289800-m03 event: Registered Node multinode-289800-m03 in Controller
	I0501 04:16:52.089457    4352 command_runner.go:130] >   Normal  NodeReady                5m41s                  kubelet          Node multinode-289800-m03 status is now: NodeReady
	I0501 04:16:52.089497    4352 command_runner.go:130] >   Normal  NodeNotReady             4m3s                   node-controller  Node multinode-289800-m03 status is now: NodeNotReady
	I0501 04:16:52.089497    4352 command_runner.go:130] >   Normal  RegisteredNode           57s                    node-controller  Node multinode-289800-m03 event: Registered Node multinode-289800-m03 in Controller
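The three node dumps above come from the single "kubectl describe nodes" invocation that opens this block, run over SSH inside the guest against /var/lib/minikube/kubeconfig. Note that for multinode-289800-m02 and -m03 every condition reads Unknown with "Kubelet stopped posting node status.", which is what the node-controller's NodeNotReady events at the end of each dump reflect. A minimal sketch of re-running that same check by hand, shelling out the way minikube's command runner does (kubectl on PATH and the kubeconfig path from the log line are assumptions here):

    // describe_nodes.go: re-run the node dump gathered above.
    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    )

    func main() {
    	// Kubeconfig path copied from the ssh_runner line in the log;
    	// substitute your own when running outside the minikube guest.
    	out, err := exec.Command("kubectl", "describe", "nodes",
    		"--kubeconfig", "/var/lib/minikube/kubeconfig").CombinedOutput()
    	if err != nil {
    		log.Fatalf("kubectl describe nodes: %v\n%s", err, out)
    	}
    	fmt.Printf("%s", out)
    }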
	I0501 04:16:52.099507    4352 logs.go:123] Gathering logs for coredns [3e8d5ff9a9e4] ...
	I0501 04:16:52.099507    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8d5ff9a9e4"
	I0501 04:16:52.144980    4352 command_runner.go:130] > .:53
	I0501 04:16:52.145028    4352 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 93f4fe825695a41d5751760d66496ca9593f0bf8b9ec4239a6fbc33c30b6f070cfe5a066c5977052ab0ea80abbafa6083758e385108cb741ae48dc4b780ff0f9
	I0501 04:16:52.145160    4352 command_runner.go:130] > CoreDNS-1.11.1
	I0501 04:16:52.145160    4352 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0501 04:16:52.145160    4352 command_runner.go:130] > [INFO] 127.0.0.1:47823 - 12804 "HINFO IN 6026210510891441927.5093937837002421400. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.138242746s
	I0501 04:16:52.145160    4352 command_runner.go:130] > [INFO] 10.244.0.4:41822 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.208275106s
	I0501 04:16:52.145160    4352 command_runner.go:130] > [INFO] 10.244.0.4:42126 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.044846324s
	I0501 04:16:52.145254    4352 command_runner.go:130] > [INFO] 10.244.1.2:55497 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000133701s
	I0501 04:16:52.145254    4352 command_runner.go:130] > [INFO] 10.244.1.2:47095 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000068901s
	I0501 04:16:52.145254    4352 command_runner.go:130] > [INFO] 10.244.0.4:34122 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000644805s
	I0501 04:16:52.145254    4352 command_runner.go:130] > [INFO] 10.244.0.4:46878 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000252202s
	I0501 04:16:52.145254    4352 command_runner.go:130] > [INFO] 10.244.0.4:40098 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000136701s
	I0501 04:16:52.145254    4352 command_runner.go:130] > [INFO] 10.244.0.4:35873 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.03321874s
	I0501 04:16:52.145254    4352 command_runner.go:130] > [INFO] 10.244.1.2:36243 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.016690721s
	I0501 04:16:52.145254    4352 command_runner.go:130] > [INFO] 10.244.1.2:38582 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000648s
	I0501 04:16:52.145408    4352 command_runner.go:130] > [INFO] 10.244.1.2:43903 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000106801s
	I0501 04:16:52.145408    4352 command_runner.go:130] > [INFO] 10.244.1.2:34736 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000102201s
	I0501 04:16:52.145408    4352 command_runner.go:130] > [INFO] 10.244.0.4:54471 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000213002s
	I0501 04:16:52.145503    4352 command_runner.go:130] > [INFO] 10.244.0.4:34585 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000266702s
	I0501 04:16:52.145503    4352 command_runner.go:130] > [INFO] 10.244.1.2:55135 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142801s
	I0501 04:16:52.145503    4352 command_runner.go:130] > [INFO] 10.244.1.2:53626 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000732s
	I0501 04:16:52.145619    4352 command_runner.go:130] > [INFO] 10.244.0.4:57975 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000425703s
	I0501 04:16:52.145619    4352 command_runner.go:130] > [INFO] 10.244.0.4:51644 - 5 "PTR IN 1.208.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000121401s
	I0501 04:16:52.145619    4352 command_runner.go:130] > [INFO] 10.244.1.2:42930 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000246601s
	I0501 04:16:52.145619    4352 command_runner.go:130] > [INFO] 10.244.1.2:59495 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000199302s
	I0501 04:16:52.145720    4352 command_runner.go:130] > [INFO] 10.244.1.2:34672 - 5 "PTR IN 1.208.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000155401s
	I0501 04:16:52.145720    4352 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0501 04:16:52.145720    4352 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
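The coredns log above ends with a SIGTERM and a 5s lameduck shutdown, consistent with 3e8d5ff9a9e4 being one of the Exited coredns containers in the container status listed below. A minimal sketch of tailing a container log the same way the gatherer's "docker logs --tail 400" call does (the container ID is taken from the log and is an assumption; substitute your own):

    // tail_container_log.go: stream the last 400 lines of a container's log.
    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    func main() {
    	// Container ID copied from the log above; replace with your own.
    	cmd := exec.Command("docker", "logs", "--tail", "400", "3e8d5ff9a9e4")
    	cmd.Stdout = os.Stdout
    	cmd.Stderr = os.Stderr
    	if err := cmd.Run(); err != nil {
    		log.Fatalf("docker logs: %v", err)
    	}
    }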
	I0501 04:16:52.147441    4352 logs.go:123] Gathering logs for container status ...
	I0501 04:16:52.147441    4352 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 04:16:52.225239    4352 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0501 04:16:52.225301    4352 command_runner.go:130] > 1efd236274eb6       8c811b4aec35f                                                                                         4 seconds ago        Running             busybox                   1                   b85f507755ab5       busybox-fc5497c4f-cc6mk
	I0501 04:16:52.225301    4352 command_runner.go:130] > b8a9b405d76be       cbb01a7bd410d                                                                                         4 seconds ago        Running             coredns                   1                   2c1e1e1d13f30       coredns-7db6d8ff4d-8w9hq
	I0501 04:16:52.225301    4352 command_runner.go:130] > 8a0208aeafcfe       cbb01a7bd410d                                                                                         4 seconds ago        Running             coredns                   1                   ba9a40d190b00       coredns-7db6d8ff4d-x9zrw
	I0501 04:16:52.225301    4352 command_runner.go:130] > 239a5dfd3ae52       6e38f40d628db                                                                                         23 seconds ago       Running             storage-provisioner       2                   9055d30512df3       storage-provisioner
	I0501 04:16:52.225894    4352 command_runner.go:130] > b7cae3f6b88bc       4950bb10b3f87                                                                                         About a minute ago   Running             kindnet-cni               1                   f79e484da66a1       kindnet-vcxkr
	I0501 04:16:52.225894    4352 command_runner.go:130] > 01deddefba52a       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   9055d30512df3       storage-provisioner
	I0501 04:16:52.225894    4352 command_runner.go:130] > 3efcc92f817ee       a0bf559e280cf                                                                                         About a minute ago   Running             kube-proxy                1                   65bff4b6a8ae0       kube-proxy-bp9zx
	I0501 04:16:52.226000    4352 command_runner.go:130] > 34892fdb68983       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   6e076eed49263       etcd-multinode-289800
	I0501 04:16:52.226081    4352 command_runner.go:130] > 18cd30f3ad28f       c42f13656d0b2                                                                                         About a minute ago   Running             kube-apiserver            0                   51e331e75da77       kube-apiserver-multinode-289800
	I0501 04:16:52.226081    4352 command_runner.go:130] > 66a1b89e6733f       c7aad43836fa5                                                                                         About a minute ago   Running             kube-controller-manager   1                   3fd53aa8d8f5d       kube-controller-manager-multinode-289800
	I0501 04:16:52.226081    4352 command_runner.go:130] > eaf69fce5ee36       259c8277fcbbc                                                                                         About a minute ago   Running             kube-scheduler            1                   a8e27176eab83       kube-scheduler-multinode-289800
	I0501 04:16:52.226081    4352 command_runner.go:130] > 237d3dab2c4e1       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago       Exited              busybox                   0                   79bf9ebb58e36       busybox-fc5497c4f-cc6mk
	I0501 04:16:52.226081    4352 command_runner.go:130] > 15c4496e3a9f0       cbb01a7bd410d                                                                                         24 minutes ago       Exited              coredns                   0                   baf9e690eb533       coredns-7db6d8ff4d-x9zrw
	I0501 04:16:52.226081    4352 command_runner.go:130] > 3e8d5ff9a9e4a       cbb01a7bd410d                                                                                         24 minutes ago       Exited              coredns                   0                   9d509d032dc60       coredns-7db6d8ff4d-8w9hq
	I0501 04:16:52.226081    4352 command_runner.go:130] > 6d5f881ef3987       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              24 minutes ago       Exited              kindnet-cni               0                   4df6ba73bcf68       kindnet-vcxkr
	I0501 04:16:52.226081    4352 command_runner.go:130] > 502684407b0cf       a0bf559e280cf                                                                                         24 minutes ago       Exited              kube-proxy                0                   79bb6a06ed527       kube-proxy-bp9zx
	I0501 04:16:52.226081    4352 command_runner.go:130] > 4b62556f40bec       c7aad43836fa5                                                                                         24 minutes ago       Exited              kube-controller-manager   0                   f72a1c5b5cdd6       kube-controller-manager-multinode-289800
	I0501 04:16:52.226081    4352 command_runner.go:130] > 06f1f84bfde17       259c8277fcbbc                                                                                         24 minutes ago       Exited              kube-scheduler            0                   479b3ec741bef       kube-scheduler-multinode-289800
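The container-status command above prefers crictl and falls back to plain "docker ps -a": the inner "which crictl || echo crictl" keeps the command substitution from aborting the shell, and the outer "||" runs docker only when the crictl attempt fails. A minimal sketch of the same try-then-fall-back logic in Go (assuming sudo and at least one of the two CLIs is present):

    // container_status.go: try crictl first, fall back to docker,
    // mirroring the gathered shell one-liner above.
    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    )

    func main() {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
    	if err != nil {
    		// crictl missing or failing: fall back to plain docker.
    		out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
    		if err != nil {
    			log.Fatalf("crictl and docker both failed: %v\n%s", err, out)
    		}
    	}
    	fmt.Printf("%s", out)
    }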
	I0501 04:16:52.233427    4352 logs.go:123] Gathering logs for kubelet ...
	I0501 04:16:52.234031    4352 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 04:16:52.273162    4352 command_runner.go:130] > May 01 04:15:32 multinode-289800 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0501 04:16:52.273678    4352 command_runner.go:130] > May 01 04:15:32 multinode-289800 kubelet[1383]: I0501 04:15:32.875075    1383 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0501 04:16:52.273732    4352 command_runner.go:130] > May 01 04:15:32 multinode-289800 kubelet[1383]: I0501 04:15:32.875223    1383 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:16:52.273732    4352 command_runner.go:130] > May 01 04:15:32 multinode-289800 kubelet[1383]: I0501 04:15:32.876800    1383 server.go:927] "Client rotation is on, will bootstrap in background"
	I0501 04:16:52.273789    4352 command_runner.go:130] > May 01 04:15:32 multinode-289800 kubelet[1383]: E0501 04:15:32.877636    1383 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0501 04:16:52.273826    4352 command_runner.go:130] > May 01 04:15:32 multinode-289800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0501 04:16:52.273850    4352 command_runner.go:130] > May 01 04:15:32 multinode-289800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0501 04:16:52.273850    4352 command_runner.go:130] > May 01 04:15:33 multinode-289800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0501 04:16:52.273910    4352 command_runner.go:130] > May 01 04:15:33 multinode-289800 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0501 04:16:52.273910    4352 command_runner.go:130] > May 01 04:15:33 multinode-289800 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0501 04:16:52.273954    4352 command_runner.go:130] > May 01 04:15:33 multinode-289800 kubelet[1424]: I0501 04:15:33.593311    1424 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0501 04:16:52.273954    4352 command_runner.go:130] > May 01 04:15:33 multinode-289800 kubelet[1424]: I0501 04:15:33.595065    1424 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:16:52.274008    4352 command_runner.go:130] > May 01 04:15:33 multinode-289800 kubelet[1424]: I0501 04:15:33.597316    1424 server.go:927] "Client rotation is on, will bootstrap in background"
	I0501 04:16:52.274008    4352 command_runner.go:130] > May 01 04:15:33 multinode-289800 kubelet[1424]: E0501 04:15:33.597441    1424 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0501 04:16:52.274050    4352 command_runner.go:130] > May 01 04:15:33 multinode-289800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0501 04:16:52.274050    4352 command_runner.go:130] > May 01 04:15:33 multinode-289800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0501 04:16:52.274050    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
	I0501 04:16:52.274097    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0501 04:16:52.274138    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0501 04:16:52.274138    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 kubelet[1461]: I0501 04:15:34.327211    1461 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0501 04:16:52.274184    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 kubelet[1461]: I0501 04:15:34.327674    1461 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:16:52.274184    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 kubelet[1461]: I0501 04:15:34.328505    1461 server.go:927] "Client rotation is on, will bootstrap in background"
	I0501 04:16:52.274226    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 kubelet[1461]: E0501 04:15:34.328669    1461 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0501 04:16:52.274226    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0501 04:16:52.274281    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0501 04:16:52.274281    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0501 04:16:52.274322    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0501 04:16:52.274322    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.796836    1525 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0501 04:16:52.274376    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.797219    1525 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:16:52.274432    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.797640    1525 server.go:927] "Client rotation is on, will bootstrap in background"
	I0501 04:16:52.274432    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.799493    1525 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0501 04:16:52.274485    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.812278    1525 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0501 04:16:52.274543    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.846443    1525 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0501 04:16:52.274543    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.846668    1525 server.go:810] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0501 04:16:52.274543    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.847577    1525 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0501 04:16:52.274543    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.847671    1525 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-289800","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0501 04:16:52.274543    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.848600    1525 topology_manager.go:138] "Creating topology manager with none policy"
	I0501 04:16:52.274543    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.848674    1525 container_manager_linux.go:301] "Creating device plugin manager"
	I0501 04:16:52.274543    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.849347    1525 state_mem.go:36] "Initialized new in-memory state store"
	I0501 04:16:52.274543    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.851250    1525 kubelet.go:400] "Attempting to sync node with API server"
	I0501 04:16:52.274543    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.851388    1525 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0501 04:16:52.274543    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.851480    1525 kubelet.go:312] "Adding apiserver pod source"
	I0501 04:16:52.274543    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.852014    1525 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0501 04:16:52.274543    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.863109    1525 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="docker" version="26.0.2" apiVersion="v1"
	I0501 04:16:52.274543    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.868847    1525 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0501 04:16:52.274543    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: W0501 04:15:36.869729    1525 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0501 04:16:52.274543    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: W0501 04:15:36.870640    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-289800&limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:52.274543    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: E0501 04:15:36.871055    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-289800&limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:52.274543    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: W0501 04:15:36.869620    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:52.274543    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: E0501 04:15:36.872992    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:52.274543    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.872208    1525 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0501 04:16:52.274543    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.874268    1525 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0501 04:16:52.274543    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.872162    1525 server.go:1264] "Started kubelet"
	I0501 04:16:52.274543    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.876600    1525 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0501 04:16:52.274543    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.878390    1525 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
	I0501 04:16:52.274543    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.882899    1525 server.go:455] "Adding debug handlers to kubelet server"
	I0501 04:16:52.274543    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: E0501 04:15:36.888275    1525 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.28.209.199:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-289800.17cb4242948ce646  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-289800,UID:multinode-289800,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-289800,},FirstTimestamp:2024-05-01 04:15:36.872142406 +0000 UTC m=+0.158641226,LastTimestamp:2024-05-01 04:15:36.872142406 +0000 UTC m=+0.158641226,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-289800,}"
	I0501 04:16:52.275619    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.894478    1525 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0501 04:16:52.275619    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: E0501 04:15:36.899264    1525 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-289800?timeout=10s\": dial tcp 172.28.209.199:8443: connect: connection refused" interval="200ms"
	I0501 04:16:52.275619    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.900556    1525 factory.go:221] Registration of the systemd container factory successfully
	I0501 04:16:52.275619    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.900703    1525 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0501 04:16:52.275736    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.900931    1525 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0501 04:16:52.275736    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.909390    1525 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0501 04:16:52.275736    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: W0501 04:15:36.922744    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:52.275736    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: E0501 04:15:36.923300    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:52.275736    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.961054    1525 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0501 04:16:52.275736    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.961177    1525 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0501 04:16:52.275896    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.961311    1525 state_mem.go:36] "Initialized new in-memory state store"
	I0501 04:16:52.275896    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.962539    1525 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0501 04:16:52.275896    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.962613    1525 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0501 04:16:52.275896    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.962649    1525 policy_none.go:49] "None policy: Start"
	I0501 04:16:52.275984    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.965264    1525 reconciler.go:26] "Reconciler: start to sync state"
	I0501 04:16:52.275984    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.981258    1525 memory_manager.go:170] "Starting memorymanager" policy="None"
	I0501 04:16:52.275984    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.991286    1525 state_mem.go:35] "Initializing new in-memory state store"
	I0501 04:16:52.275984    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.994410    1525 state_mem.go:75] "Updated machine memory state"
	I0501 04:16:52.275984    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.001037    1525 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0501 04:16:52.276063    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.005977    1525 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0501 04:16:52.276063    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.012301    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-289800"
	I0501 04:16:52.276063    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.018582    1525 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0501 04:16:52.276148    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.020477    1525 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0501 04:16:52.276148    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.020620    1525 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.28.209.199:8443: connect: connection refused" node="multinode-289800"
	I0501 04:16:52.276148    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.021548    1525 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-289800\" not found"
	I0501 04:16:52.276148    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.022495    1525 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0501 04:16:52.276231    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.022690    1525 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0501 04:16:52.276231    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.022715    1525 kubelet.go:2337] "Starting kubelet main sync loop"
	I0501 04:16:52.276231    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.022919    1525 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
	I0501 04:16:52.276313    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: W0501 04:15:37.028696    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:52.276395    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.028755    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:52.276395    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.045316    1525 iptables.go:577] "Could not set up iptables canary" err=<
	I0501 04:16:52.276460    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0501 04:16:52.276497    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0501 04:16:52.276497    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0501 04:16:52.276497    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0501 04:16:52.276567    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.102048    1525 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-289800?timeout=10s\": dial tcp 172.28.209.199:8443: connect: connection refused" interval="400ms"
	I0501 04:16:52.276567    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.124062    1525 topology_manager.go:215] "Topology Admit Handler" podUID="44d7830a7c97b8c7e460c0508d02be4e" podNamespace="kube-system" podName="kube-scheduler-multinode-289800"
	I0501 04:16:52.276567    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.125237    1525 topology_manager.go:215] "Topology Admit Handler" podUID="8b70cd8d31103a1cfca45e9856766786" podNamespace="kube-system" podName="kube-apiserver-multinode-289800"
	I0501 04:16:52.276651    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.126693    1525 topology_manager.go:215] "Topology Admit Handler" podUID="a17001fd2508d58fea9b1ae465b65254" podNamespace="kube-system" podName="kube-controller-manager-multinode-289800"
	I0501 04:16:52.276651    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.129279    1525 topology_manager.go:215] "Topology Admit Handler" podUID="b12e9024402f49cfac7440d6a2eaf42d" podNamespace="kube-system" podName="etcd-multinode-289800"
	I0501 04:16:52.276651    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.132159    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="479b3ec741befe4b1eddeb02949bcd198e18fa7dc4c196283e811e273e4edcbd"
	I0501 04:16:52.276769    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.132205    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d509d032dc607c6f771d62e39b125d9ec4ef121fdbac0798c929fe3f1662c88"
	I0501 04:16:52.276769    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.132217    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4df6ba73bcf683d21156e67827524b826f94059250b12cf08abd23da8345923a"
	I0501 04:16:52.276804    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.132236    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a338ea43bd9b03a0a56c5b614e36fd54cdd707fb4c2f5819a814e4ffd9bdcb65"
	I0501 04:16:52.276804    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.139102    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f72a1c5b5cdd65332e27f08445a684fc2d2f586ab1b8a2fb2c5c0dfc02b71165"
	I0501 04:16:52.276876    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.158602    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="976a9ff433ccbe7ec8d65d4f2797a7885430504d883ab65524674ad5ab989737"
	I0501 04:16:52.276876    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.174190    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79bb6a06ed527b42fe74673579e4a788915c66cd3717c52a344c73e0b7d12b34"
	I0501 04:16:52.276876    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.191042    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79bf9ebb58e36ddfba4654e8de212598f75bb256849f4fa384c80d54954f68f5"
	I0501 04:16:52.276976    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.208222    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="baf9e690eb533d1d1d65dee3905f907946c145ab490fd4e62c3d724a0ba12193"
	I0501 04:16:52.277038    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214646    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a17001fd2508d58fea9b1ae465b65254-ca-certs\") pod \"kube-controller-manager-multinode-289800\" (UID: \"a17001fd2508d58fea9b1ae465b65254\") " pod="kube-system/kube-controller-manager-multinode-289800"
	I0501 04:16:52.277038    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214710    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a17001fd2508d58fea9b1ae465b65254-k8s-certs\") pod \"kube-controller-manager-multinode-289800\" (UID: \"a17001fd2508d58fea9b1ae465b65254\") " pod="kube-system/kube-controller-manager-multinode-289800"
	I0501 04:16:52.277038    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214752    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a17001fd2508d58fea9b1ae465b65254-kubeconfig\") pod \"kube-controller-manager-multinode-289800\" (UID: \"a17001fd2508d58fea9b1ae465b65254\") " pod="kube-system/kube-controller-manager-multinode-289800"
	I0501 04:16:52.277038    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214812    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8b70cd8d31103a1cfca45e9856766786-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-289800\" (UID: \"8b70cd8d31103a1cfca45e9856766786\") " pod="kube-system/kube-apiserver-multinode-289800"
	I0501 04:16:52.277038    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214855    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/b12e9024402f49cfac7440d6a2eaf42d-etcd-data\") pod \"etcd-multinode-289800\" (UID: \"b12e9024402f49cfac7440d6a2eaf42d\") " pod="kube-system/etcd-multinode-289800"
	I0501 04:16:52.277038    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214875    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/44d7830a7c97b8c7e460c0508d02be4e-kubeconfig\") pod \"kube-scheduler-multinode-289800\" (UID: \"44d7830a7c97b8c7e460c0508d02be4e\") " pod="kube-system/kube-scheduler-multinode-289800"
	I0501 04:16:52.277038    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214899    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8b70cd8d31103a1cfca45e9856766786-ca-certs\") pod \"kube-apiserver-multinode-289800\" (UID: \"8b70cd8d31103a1cfca45e9856766786\") " pod="kube-system/kube-apiserver-multinode-289800"
	I0501 04:16:52.277038    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214925    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8b70cd8d31103a1cfca45e9856766786-k8s-certs\") pod \"kube-apiserver-multinode-289800\" (UID: \"8b70cd8d31103a1cfca45e9856766786\") " pod="kube-system/kube-apiserver-multinode-289800"
	I0501 04:16:52.277038    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214950    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a17001fd2508d58fea9b1ae465b65254-flexvolume-dir\") pod \"kube-controller-manager-multinode-289800\" (UID: \"a17001fd2508d58fea9b1ae465b65254\") " pod="kube-system/kube-controller-manager-multinode-289800"
	I0501 04:16:52.277038    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214973    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a17001fd2508d58fea9b1ae465b65254-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-289800\" (UID: \"a17001fd2508d58fea9b1ae465b65254\") " pod="kube-system/kube-controller-manager-multinode-289800"
	I0501 04:16:52.277038    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214994    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/b12e9024402f49cfac7440d6a2eaf42d-etcd-certs\") pod \"etcd-multinode-289800\" (UID: \"b12e9024402f49cfac7440d6a2eaf42d\") " pod="kube-system/etcd-multinode-289800"
	I0501 04:16:52.277038    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.222614    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-289800"
	I0501 04:16:52.277038    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.223837    1525 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.28.209.199:8443: connect: connection refused" node="multinode-289800"
	I0501 04:16:52.277038    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.227891    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9971ef577f2f8634ce17f0dd1b9640fcf2695833e8dc85607abd2a82571746b8"
	I0501 04:16:52.277038    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.504248    1525 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-289800?timeout=10s\": dial tcp 172.28.209.199:8443: connect: connection refused" interval="800ms"
	I0501 04:16:52.277038    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.625269    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-289800"
	I0501 04:16:52.277621    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.625998    1525 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.28.209.199:8443: connect: connection refused" node="multinode-289800"
	I0501 04:16:52.277621    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: W0501 04:15:37.852634    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:52.277621    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.852740    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:52.277621    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: W0501 04:15:38.063749    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:52.277746    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: E0501 04:15:38.063859    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:52.277820    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: I0501 04:15:38.260487    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e076eed49263cec5b0b06bbaa425cab2bf4a4b0a05e6dfa37993b20dff5ed93"
	I0501 04:16:52.277862    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: E0501 04:15:38.306204    1525 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-289800?timeout=10s\": dial tcp 172.28.209.199:8443: connect: connection refused" interval="1.6s"
	I0501 04:16:52.277862    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: W0501 04:15:38.357883    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-289800&limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:52.277936    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: E0501 04:15:38.357983    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-289800&limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:52.277976    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: W0501 04:15:38.424248    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:52.277976    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: E0501 04:15:38.424377    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:52.278049    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: I0501 04:15:38.428960    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-289800"
	I0501 04:16:52.278049    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: E0501 04:15:38.431040    1525 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.28.209.199:8443: connect: connection refused" node="multinode-289800"
	I0501 04:16:52.278137    4352 command_runner.go:130] > May 01 04:15:40 multinode-289800 kubelet[1525]: I0501 04:15:40.032371    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-289800"
	I0501 04:16:52.278137    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.639150    1525 kubelet_node_status.go:112] "Node was previously registered" node="multinode-289800"
	I0501 04:16:52.278137    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.640030    1525 kubelet_node_status.go:76] "Successfully registered node" node="multinode-289800"
	I0501 04:16:52.278217    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.642970    1525 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0501 04:16:52.278217    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.644297    1525 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0501 04:16:52.278298    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.646032    1525 setters.go:580] "Node became not ready" node="multinode-289800" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-05-01T04:15:42Z","lastTransitionTime":"2024-05-01T04:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0501 04:16:52.278298    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.869832    1525 apiserver.go:52] "Watching apiserver"
	I0501 04:16:52.278298    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.875356    1525 topology_manager.go:215] "Topology Admit Handler" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-8w9hq"
	I0501 04:16:52.278380    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.875613    1525 topology_manager.go:215] "Topology Admit Handler" podUID="aba82e50-b8f8-40b4-b08a-6d045314d6b6" podNamespace="kube-system" podName="kube-proxy-bp9zx"
	I0501 04:16:52.278380    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.875753    1525 topology_manager.go:215] "Topology Admit Handler" podUID="0b91b14d-bed3-4889-b193-db53daccd395" podNamespace="kube-system" podName="coredns-7db6d8ff4d-x9zrw"
	I0501 04:16:52.278488    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.875936    1525 topology_manager.go:215] "Topology Admit Handler" podUID="72ef61d4-4437-40da-86e7-4d7eb386b6de" podNamespace="kube-system" podName="kindnet-vcxkr"
	I0501 04:16:52.278488    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.876061    1525 topology_manager.go:215] "Topology Admit Handler" podUID="b8d2a827-d9a6-419a-a076-c7695a16a2b5" podNamespace="kube-system" podName="storage-provisioner"
	I0501 04:16:52.278575    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.876192    1525 topology_manager.go:215] "Topology Admit Handler" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f" podNamespace="default" podName="busybox-fc5497c4f-cc6mk"
	I0501 04:16:52.278575    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.876527    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:52.278656    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.877384    1525 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-289800" podUID="96a8cf0b-45bc-4636-9264-a0da579b5fa8"
	I0501 04:16:52.278656    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.878678    1525 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-289800" podUID="a1b99f2b-8aed-4037-956a-13bde4551a72"
	I0501 04:16:52.278656    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.879595    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:52.278736    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.884364    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:52.278736    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.910944    1525 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0501 04:16:52.278814    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.938877    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/72ef61d4-4437-40da-86e7-4d7eb386b6de-xtables-lock\") pod \"kindnet-vcxkr\" (UID: \"72ef61d4-4437-40da-86e7-4d7eb386b6de\") " pod="kube-system/kindnet-vcxkr"
	I0501 04:16:52.278814    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.939029    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b8d2a827-d9a6-419a-a076-c7695a16a2b5-tmp\") pod \"storage-provisioner\" (UID: \"b8d2a827-d9a6-419a-a076-c7695a16a2b5\") " pod="kube-system/storage-provisioner"
	I0501 04:16:52.278892    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.939149    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aba82e50-b8f8-40b4-b08a-6d045314d6b6-xtables-lock\") pod \"kube-proxy-bp9zx\" (UID: \"aba82e50-b8f8-40b4-b08a-6d045314d6b6\") " pod="kube-system/kube-proxy-bp9zx"
	I0501 04:16:52.278892    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.939242    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/72ef61d4-4437-40da-86e7-4d7eb386b6de-cni-cfg\") pod \"kindnet-vcxkr\" (UID: \"72ef61d4-4437-40da-86e7-4d7eb386b6de\") " pod="kube-system/kindnet-vcxkr"
	I0501 04:16:52.278972    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.939318    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/72ef61d4-4437-40da-86e7-4d7eb386b6de-lib-modules\") pod \"kindnet-vcxkr\" (UID: \"72ef61d4-4437-40da-86e7-4d7eb386b6de\") " pod="kube-system/kindnet-vcxkr"
	I0501 04:16:52.278972    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.939427    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aba82e50-b8f8-40b4-b08a-6d045314d6b6-lib-modules\") pod \"kube-proxy-bp9zx\" (UID: \"aba82e50-b8f8-40b4-b08a-6d045314d6b6\") " pod="kube-system/kube-proxy-bp9zx"
	I0501 04:16:52.279130    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.940207    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:16:52.279208    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.940401    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume podName:e3a349e9-97d8-4bba-8eac-deff1948600a nodeName:}" failed. No retries permitted until 2024-05-01 04:15:43.440364296 +0000 UTC m=+6.726863016 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume") pod "coredns-7db6d8ff4d-8w9hq" (UID: "e3a349e9-97d8-4bba-8eac-deff1948600a") : object "kube-system"/"coredns" not registered
	I0501 04:16:52.279208    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.940680    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:16:52.279289    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.940822    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume podName:0b91b14d-bed3-4889-b193-db53daccd395 nodeName:}" failed. No retries permitted until 2024-05-01 04:15:43.440808324 +0000 UTC m=+6.727307144 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume") pod "coredns-7db6d8ff4d-x9zrw" (UID: "0b91b14d-bed3-4889-b193-db53daccd395") : object "kube-system"/"coredns" not registered
	I0501 04:16:52.279289    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.948736    1525 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-289800"
	I0501 04:16:52.279367    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.958916    1525 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-289800"
	I0501 04:16:52.279367    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.975690    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:52.279489    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.975737    1525 projected.go:200] Error preparing data for projected volume kube-api-access-4r64v for pod default/busybox-fc5497c4f-cc6mk: object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:52.279489    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.975832    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v podName:7f61e6ee-cf9a-4903-ba51-2a3b6804717f nodeName:}" failed. No retries permitted until 2024-05-01 04:15:43.475811436 +0000 UTC m=+6.762310156 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-4r64v" (UniqueName: "kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v") pod "busybox-fc5497c4f-cc6mk" (UID: "7f61e6ee-cf9a-4903-ba51-2a3b6804717f") : object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:52.279567    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: I0501 04:15:43.052812    1525 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c17e9f88f256f5527a6565eb2da75f63" path="/var/lib/kubelet/pods/c17e9f88f256f5527a6565eb2da75f63/volumes"
	I0501 04:16:52.279646    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: I0501 04:15:43.054400    1525 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc7b6f2a7c826774b66af910f598e965" path="/var/lib/kubelet/pods/fc7b6f2a7c826774b66af910f598e965/volumes"
	I0501 04:16:52.279646    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: I0501 04:15:43.170146    1525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-289800" podStartSLOduration=1.170112215 podStartE2EDuration="1.170112215s" podCreationTimestamp="2024-05-01 04:15:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-01 04:15:43.140058816 +0000 UTC m=+6.426557536" watchObservedRunningTime="2024-05-01 04:15:43.170112215 +0000 UTC m=+6.456610935"
	I0501 04:16:52.279728    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: I0501 04:15:43.170304    1525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-289800" podStartSLOduration=1.170298327 podStartE2EDuration="1.170298327s" podCreationTimestamp="2024-05-01 04:15:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-01 04:15:43.16893474 +0000 UTC m=+6.455433460" watchObservedRunningTime="2024-05-01 04:15:43.170298327 +0000 UTC m=+6.456797147"
	I0501 04:16:52.279728    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: E0501 04:15:43.444132    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:16:52.279886    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: E0501 04:15:43.444229    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume podName:e3a349e9-97d8-4bba-8eac-deff1948600a nodeName:}" failed. No retries permitted until 2024-05-01 04:15:44.444209637 +0000 UTC m=+7.730708457 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume") pod "coredns-7db6d8ff4d-8w9hq" (UID: "e3a349e9-97d8-4bba-8eac-deff1948600a") : object "kube-system"/"coredns" not registered
	I0501 04:16:52.279945    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: E0501 04:15:43.444591    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:16:52.279945    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: E0501 04:15:43.444633    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume podName:0b91b14d-bed3-4889-b193-db53daccd395 nodeName:}" failed. No retries permitted until 2024-05-01 04:15:44.444622763 +0000 UTC m=+7.731121483 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume") pod "coredns-7db6d8ff4d-x9zrw" (UID: "0b91b14d-bed3-4889-b193-db53daccd395") : object "kube-system"/"coredns" not registered
	I0501 04:16:52.279945    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: E0501 04:15:43.544921    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:52.279945    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: E0501 04:15:43.545047    1525 projected.go:200] Error preparing data for projected volume kube-api-access-4r64v for pod default/busybox-fc5497c4f-cc6mk: object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:52.279945    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: E0501 04:15:43.545141    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v podName:7f61e6ee-cf9a-4903-ba51-2a3b6804717f nodeName:}" failed. No retries permitted until 2024-05-01 04:15:44.545110913 +0000 UTC m=+7.831609633 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-4r64v" (UniqueName: "kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v") pod "busybox-fc5497c4f-cc6mk" (UID: "7f61e6ee-cf9a-4903-ba51-2a3b6804717f") : object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:52.279945    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: I0501 04:15:44.039213    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9055d30512df38a5bce19ed5afcfdc450a7bd87a1eb169342c8bc7a42e81666f"
	I0501 04:16:52.279945    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: I0501 04:15:44.378804    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65bff4b6a8ae020fee0da9e1a818c4bac4d9a43a831eb7b5550b254c1f181ec7"
	I0501 04:16:52.279945    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: E0501 04:15:44.401946    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:52.279945    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: I0501 04:15:44.402229    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f79e484da66a15667f79326d8bae0a570ba551fd2e02926fd663a292f6b15752"
	I0501 04:16:52.279945    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: I0501 04:15:44.402476    1525 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-289800" podUID="96a8cf0b-45bc-4636-9264-a0da579b5fa8"
	I0501 04:16:52.279945    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: I0501 04:15:44.403391    1525 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-289800" podUID="a1b99f2b-8aed-4037-956a-13bde4551a72"
	I0501 04:16:52.281601    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: E0501 04:15:44.454688    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:16:52.281601    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: E0501 04:15:44.454983    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume podName:0b91b14d-bed3-4889-b193-db53daccd395 nodeName:}" failed. No retries permitted until 2024-05-01 04:15:46.454902809 +0000 UTC m=+9.741401629 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume") pod "coredns-7db6d8ff4d-x9zrw" (UID: "0b91b14d-bed3-4889-b193-db53daccd395") : object "kube-system"/"coredns" not registered
	I0501 04:16:52.281601    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: E0501 04:15:44.455515    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:16:52.281601    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: E0501 04:15:44.455560    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume podName:e3a349e9-97d8-4bba-8eac-deff1948600a nodeName:}" failed. No retries permitted until 2024-05-01 04:15:46.45554895 +0000 UTC m=+9.742047670 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume") pod "coredns-7db6d8ff4d-8w9hq" (UID: "e3a349e9-97d8-4bba-8eac-deff1948600a") : object "kube-system"/"coredns" not registered
	I0501 04:16:52.283204    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: E0501 04:15:44.555732    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:52.283414    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: E0501 04:15:44.555836    1525 projected.go:200] Error preparing data for projected volume kube-api-access-4r64v for pod default/busybox-fc5497c4f-cc6mk: object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:52.283414    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: E0501 04:15:44.555920    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v podName:7f61e6ee-cf9a-4903-ba51-2a3b6804717f nodeName:}" failed. No retries permitted until 2024-05-01 04:15:46.55587479 +0000 UTC m=+9.842373510 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-4r64v" (UniqueName: "kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v") pod "busybox-fc5497c4f-cc6mk" (UID: "7f61e6ee-cf9a-4903-ba51-2a3b6804717f") : object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:52.283414    4352 command_runner.go:130] > May 01 04:15:45 multinode-289800 kubelet[1525]: E0501 04:15:45.028227    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:52.283414    4352 command_runner.go:130] > May 01 04:15:45 multinode-289800 kubelet[1525]: E0501 04:15:45.028491    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:52.283414    4352 command_runner.go:130] > May 01 04:15:46 multinode-289800 kubelet[1525]: E0501 04:15:46.023829    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:52.283414    4352 command_runner.go:130] > May 01 04:15:46 multinode-289800 kubelet[1525]: E0501 04:15:46.486637    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:16:52.283414    4352 command_runner.go:130] > May 01 04:15:46 multinode-289800 kubelet[1525]: E0501 04:15:46.486963    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume podName:e3a349e9-97d8-4bba-8eac-deff1948600a nodeName:}" failed. No retries permitted until 2024-05-01 04:15:50.486942526 +0000 UTC m=+13.773441346 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume") pod "coredns-7db6d8ff4d-8w9hq" (UID: "e3a349e9-97d8-4bba-8eac-deff1948600a") : object "kube-system"/"coredns" not registered
	I0501 04:16:52.283414    4352 command_runner.go:130] > May 01 04:15:46 multinode-289800 kubelet[1525]: E0501 04:15:46.488686    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:16:52.283414    4352 command_runner.go:130] > May 01 04:15:46 multinode-289800 kubelet[1525]: E0501 04:15:46.489077    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume podName:0b91b14d-bed3-4889-b193-db53daccd395 nodeName:}" failed. No retries permitted until 2024-05-01 04:15:50.488847647 +0000 UTC m=+13.775346467 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume") pod "coredns-7db6d8ff4d-x9zrw" (UID: "0b91b14d-bed3-4889-b193-db53daccd395") : object "kube-system"/"coredns" not registered
	I0501 04:16:52.283414    4352 command_runner.go:130] > May 01 04:15:46 multinode-289800 kubelet[1525]: E0501 04:15:46.587833    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:52.283414    4352 command_runner.go:130] > May 01 04:15:46 multinode-289800 kubelet[1525]: E0501 04:15:46.587977    1525 projected.go:200] Error preparing data for projected volume kube-api-access-4r64v for pod default/busybox-fc5497c4f-cc6mk: object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:52.283414    4352 command_runner.go:130] > May 01 04:15:46 multinode-289800 kubelet[1525]: E0501 04:15:46.588185    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v podName:7f61e6ee-cf9a-4903-ba51-2a3b6804717f nodeName:}" failed. No retries permitted until 2024-05-01 04:15:50.588160623 +0000 UTC m=+13.874659443 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-4r64v" (UniqueName: "kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v") pod "busybox-fc5497c4f-cc6mk" (UID: "7f61e6ee-cf9a-4903-ba51-2a3b6804717f") : object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:52.284010    4352 command_runner.go:130] > May 01 04:15:47 multinode-289800 kubelet[1525]: E0501 04:15:47.027084    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:52.284104    4352 command_runner.go:130] > May 01 04:15:47 multinode-289800 kubelet[1525]: E0501 04:15:47.028397    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:52.284104    4352 command_runner.go:130] > May 01 04:15:48 multinode-289800 kubelet[1525]: E0501 04:15:48.022969    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:52.284104    4352 command_runner.go:130] > May 01 04:15:49 multinode-289800 kubelet[1525]: E0501 04:15:49.024347    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:52.284104    4352 command_runner.go:130] > May 01 04:15:49 multinode-289800 kubelet[1525]: E0501 04:15:49.025248    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:52.284104    4352 command_runner.go:130] > May 01 04:15:50 multinode-289800 kubelet[1525]: E0501 04:15:50.024175    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:52.284104    4352 command_runner.go:130] > May 01 04:15:50 multinode-289800 kubelet[1525]: E0501 04:15:50.523387    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:16:52.284104    4352 command_runner.go:130] > May 01 04:15:50 multinode-289800 kubelet[1525]: E0501 04:15:50.523508    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume podName:0b91b14d-bed3-4889-b193-db53daccd395 nodeName:}" failed. No retries permitted until 2024-05-01 04:15:58.523488538 +0000 UTC m=+21.809987358 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume") pod "coredns-7db6d8ff4d-x9zrw" (UID: "0b91b14d-bed3-4889-b193-db53daccd395") : object "kube-system"/"coredns" not registered
	I0501 04:16:52.284104    4352 command_runner.go:130] > May 01 04:15:50 multinode-289800 kubelet[1525]: E0501 04:15:50.524104    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:16:52.284104    4352 command_runner.go:130] > May 01 04:15:50 multinode-289800 kubelet[1525]: E0501 04:15:50.524150    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume podName:e3a349e9-97d8-4bba-8eac-deff1948600a nodeName:}" failed. No retries permitted until 2024-05-01 04:15:58.524137716 +0000 UTC m=+21.810636436 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume") pod "coredns-7db6d8ff4d-8w9hq" (UID: "e3a349e9-97d8-4bba-8eac-deff1948600a") : object "kube-system"/"coredns" not registered
	I0501 04:16:52.284785    4352 command_runner.go:130] > May 01 04:15:50 multinode-289800 kubelet[1525]: E0501 04:15:50.624897    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:52.284913    4352 command_runner.go:130] > May 01 04:15:50 multinode-289800 kubelet[1525]: E0501 04:15:50.625357    1525 projected.go:200] Error preparing data for projected volume kube-api-access-4r64v for pod default/busybox-fc5497c4f-cc6mk: object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:52.284955    4352 command_runner.go:130] > May 01 04:15:50 multinode-289800 kubelet[1525]: E0501 04:15:50.625742    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v podName:7f61e6ee-cf9a-4903-ba51-2a3b6804717f nodeName:}" failed. No retries permitted until 2024-05-01 04:15:58.625719971 +0000 UTC m=+21.912218691 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-4r64v" (UniqueName: "kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v") pod "busybox-fc5497c4f-cc6mk" (UID: "7f61e6ee-cf9a-4903-ba51-2a3b6804717f") : object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:52.284955    4352 command_runner.go:130] > May 01 04:15:51 multinode-289800 kubelet[1525]: E0501 04:15:51.024464    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:52.284955    4352 command_runner.go:130] > May 01 04:15:51 multinode-289800 kubelet[1525]: E0501 04:15:51.024959    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:52.284955    4352 command_runner.go:130] > May 01 04:15:52 multinode-289800 kubelet[1525]: E0501 04:15:52.024016    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:52.284955    4352 command_runner.go:130] > May 01 04:15:53 multinode-289800 kubelet[1525]: E0501 04:15:53.023669    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:52.284955    4352 command_runner.go:130] > May 01 04:15:53 multinode-289800 kubelet[1525]: E0501 04:15:53.024381    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:52.284955    4352 command_runner.go:130] > May 01 04:15:54 multinode-289800 kubelet[1525]: E0501 04:15:54.023529    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:52.284955    4352 command_runner.go:130] > May 01 04:15:55 multinode-289800 kubelet[1525]: E0501 04:15:55.023399    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:52.284955    4352 command_runner.go:130] > May 01 04:15:55 multinode-289800 kubelet[1525]: E0501 04:15:55.024039    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:52.284955    4352 command_runner.go:130] > May 01 04:15:56 multinode-289800 kubelet[1525]: E0501 04:15:56.023961    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:52.284955    4352 command_runner.go:130] > May 01 04:15:57 multinode-289800 kubelet[1525]: E0501 04:15:57.024583    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:52.285533    4352 command_runner.go:130] > May 01 04:15:57 multinode-289800 kubelet[1525]: E0501 04:15:57.025562    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:52.285533    4352 command_runner.go:130] > May 01 04:15:58 multinode-289800 kubelet[1525]: E0501 04:15:58.024494    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:52.285687    4352 command_runner.go:130] > May 01 04:15:58 multinode-289800 kubelet[1525]: E0501 04:15:58.606520    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:16:52.285687    4352 command_runner.go:130] > May 01 04:15:58 multinode-289800 kubelet[1525]: E0501 04:15:58.606584    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume podName:e3a349e9-97d8-4bba-8eac-deff1948600a nodeName:}" failed. No retries permitted until 2024-05-01 04:16:14.606569125 +0000 UTC m=+37.893067945 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume") pod "coredns-7db6d8ff4d-8w9hq" (UID: "e3a349e9-97d8-4bba-8eac-deff1948600a") : object "kube-system"/"coredns" not registered
	I0501 04:16:52.285687    4352 command_runner.go:130] > May 01 04:15:58 multinode-289800 kubelet[1525]: E0501 04:15:58.607052    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:16:52.285687    4352 command_runner.go:130] > May 01 04:15:58 multinode-289800 kubelet[1525]: E0501 04:15:58.607095    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume podName:0b91b14d-bed3-4889-b193-db53daccd395 nodeName:}" failed. No retries permitted until 2024-05-01 04:16:14.607084827 +0000 UTC m=+37.893583547 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume") pod "coredns-7db6d8ff4d-x9zrw" (UID: "0b91b14d-bed3-4889-b193-db53daccd395") : object "kube-system"/"coredns" not registered
	I0501 04:16:52.285687    4352 command_runner.go:130] > May 01 04:15:58 multinode-289800 kubelet[1525]: E0501 04:15:58.707959    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:52.285687    4352 command_runner.go:130] > May 01 04:15:58 multinode-289800 kubelet[1525]: E0501 04:15:58.708171    1525 projected.go:200] Error preparing data for projected volume kube-api-access-4r64v for pod default/busybox-fc5497c4f-cc6mk: object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:52.286243    4352 command_runner.go:130] > May 01 04:15:58 multinode-289800 kubelet[1525]: E0501 04:15:58.708240    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v podName:7f61e6ee-cf9a-4903-ba51-2a3b6804717f nodeName:}" failed. No retries permitted until 2024-05-01 04:16:14.708221599 +0000 UTC m=+37.994720419 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-4r64v" (UniqueName: "kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v") pod "busybox-fc5497c4f-cc6mk" (UID: "7f61e6ee-cf9a-4903-ba51-2a3b6804717f") : object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:52.286348    4352 command_runner.go:130] > May 01 04:15:59 multinode-289800 kubelet[1525]: E0501 04:15:59.024158    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:52.286348    4352 command_runner.go:130] > May 01 04:15:59 multinode-289800 kubelet[1525]: E0501 04:15:59.025055    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:52.286348    4352 command_runner.go:130] > May 01 04:16:00 multinode-289800 kubelet[1525]: E0501 04:16:00.023216    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:52.286348    4352 command_runner.go:130] > May 01 04:16:01 multinode-289800 kubelet[1525]: E0501 04:16:01.024905    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:52.286348    4352 command_runner.go:130] > May 01 04:16:01 multinode-289800 kubelet[1525]: E0501 04:16:01.025585    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:52.286348    4352 command_runner.go:130] > May 01 04:16:02 multinode-289800 kubelet[1525]: E0501 04:16:02.024143    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:52.286348    4352 command_runner.go:130] > May 01 04:16:03 multinode-289800 kubelet[1525]: E0501 04:16:03.023409    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:52.286348    4352 command_runner.go:130] > May 01 04:16:03 multinode-289800 kubelet[1525]: E0501 04:16:03.024062    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:52.286348    4352 command_runner.go:130] > May 01 04:16:04 multinode-289800 kubelet[1525]: E0501 04:16:04.023182    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:52.286348    4352 command_runner.go:130] > May 01 04:16:05 multinode-289800 kubelet[1525]: E0501 04:16:05.028055    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:52.286348    4352 command_runner.go:130] > May 01 04:16:05 multinode-289800 kubelet[1525]: E0501 04:16:05.029254    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:52.286348    4352 command_runner.go:130] > May 01 04:16:06 multinode-289800 kubelet[1525]: E0501 04:16:06.024522    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:52.286937    4352 command_runner.go:130] > May 01 04:16:07 multinode-289800 kubelet[1525]: E0501 04:16:07.024384    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:52.286937    4352 command_runner.go:130] > May 01 04:16:07 multinode-289800 kubelet[1525]: E0501 04:16:07.025431    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:52.286937    4352 command_runner.go:130] > May 01 04:16:08 multinode-289800 kubelet[1525]: E0501 04:16:08.024168    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:52.286937    4352 command_runner.go:130] > May 01 04:16:09 multinode-289800 kubelet[1525]: E0501 04:16:09.024117    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:52.286937    4352 command_runner.go:130] > May 01 04:16:09 multinode-289800 kubelet[1525]: E0501 04:16:09.025560    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:52.286937    4352 command_runner.go:130] > May 01 04:16:10 multinode-289800 kubelet[1525]: E0501 04:16:10.023881    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:52.286937    4352 command_runner.go:130] > May 01 04:16:11 multinode-289800 kubelet[1525]: E0501 04:16:11.023619    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:52.286937    4352 command_runner.go:130] > May 01 04:16:11 multinode-289800 kubelet[1525]: E0501 04:16:11.024277    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:52.286937    4352 command_runner.go:130] > May 01 04:16:12 multinode-289800 kubelet[1525]: E0501 04:16:12.024236    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:52.287472    4352 command_runner.go:130] > May 01 04:16:13 multinode-289800 kubelet[1525]: E0501 04:16:13.023153    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:52.287625    4352 command_runner.go:130] > May 01 04:16:13 multinode-289800 kubelet[1525]: E0501 04:16:13.023926    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:52.287625    4352 command_runner.go:130] > May 01 04:16:14 multinode-289800 kubelet[1525]: E0501 04:16:14.023335    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:52.287625    4352 command_runner.go:130] > May 01 04:16:14 multinode-289800 kubelet[1525]: E0501 04:16:14.657138    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:16:52.287625    4352 command_runner.go:130] > May 01 04:16:14 multinode-289800 kubelet[1525]: E0501 04:16:14.657461    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume podName:e3a349e9-97d8-4bba-8eac-deff1948600a nodeName:}" failed. No retries permitted until 2024-05-01 04:16:46.657440103 +0000 UTC m=+69.943938823 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume") pod "coredns-7db6d8ff4d-8w9hq" (UID: "e3a349e9-97d8-4bba-8eac-deff1948600a") : object "kube-system"/"coredns" not registered
	I0501 04:16:52.287625    4352 command_runner.go:130] > May 01 04:16:14 multinode-289800 kubelet[1525]: E0501 04:16:14.657218    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:16:52.287625    4352 command_runner.go:130] > May 01 04:16:14 multinode-289800 kubelet[1525]: E0501 04:16:14.657858    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume podName:0b91b14d-bed3-4889-b193-db53daccd395 nodeName:}" failed. No retries permitted until 2024-05-01 04:16:46.65783162 +0000 UTC m=+69.944330440 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume") pod "coredns-7db6d8ff4d-x9zrw" (UID: "0b91b14d-bed3-4889-b193-db53daccd395") : object "kube-system"/"coredns" not registered
	I0501 04:16:52.287625    4352 command_runner.go:130] > May 01 04:16:14 multinode-289800 kubelet[1525]: E0501 04:16:14.758303    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:52.287625    4352 command_runner.go:130] > May 01 04:16:14 multinode-289800 kubelet[1525]: E0501 04:16:14.758421    1525 projected.go:200] Error preparing data for projected volume kube-api-access-4r64v for pod default/busybox-fc5497c4f-cc6mk: object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:52.287625    4352 command_runner.go:130] > May 01 04:16:14 multinode-289800 kubelet[1525]: E0501 04:16:14.758487    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v podName:7f61e6ee-cf9a-4903-ba51-2a3b6804717f nodeName:}" failed. No retries permitted until 2024-05-01 04:16:46.758469083 +0000 UTC m=+70.044967903 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-4r64v" (UniqueName: "kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v") pod "busybox-fc5497c4f-cc6mk" (UID: "7f61e6ee-cf9a-4903-ba51-2a3b6804717f") : object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:52.287625    4352 command_runner.go:130] > May 01 04:16:15 multinode-289800 kubelet[1525]: E0501 04:16:15.023369    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:52.287625    4352 command_runner.go:130] > May 01 04:16:15 multinode-289800 kubelet[1525]: E0501 04:16:15.024797    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:52.287625    4352 command_runner.go:130] > May 01 04:16:15 multinode-289800 kubelet[1525]: I0501 04:16:15.886834    1525 scope.go:117] "RemoveContainer" containerID="ee2238f98e350e8d80528b60fc5b614ce6048d8b34af2034a9947e26d8e6beab"
	I0501 04:16:52.287625    4352 command_runner.go:130] > May 01 04:16:15 multinode-289800 kubelet[1525]: I0501 04:16:15.887225    1525 scope.go:117] "RemoveContainer" containerID="01deddefba52af094ad6ca083f5c2bee336ace05959dec5d34b15a9ba6cc2539"
	I0501 04:16:52.287625    4352 command_runner.go:130] > May 01 04:16:15 multinode-289800 kubelet[1525]: E0501 04:16:15.887510    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b8d2a827-d9a6-419a-a076-c7695a16a2b5)\"" pod="kube-system/storage-provisioner" podUID="b8d2a827-d9a6-419a-a076-c7695a16a2b5"
	I0501 04:16:52.287625    4352 command_runner.go:130] > May 01 04:16:16 multinode-289800 kubelet[1525]: E0501 04:16:16.024360    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:52.287625    4352 command_runner.go:130] > May 01 04:16:16 multinode-289800 kubelet[1525]: I0501 04:16:16.618138    1525 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
	I0501 04:16:52.288525    4352 command_runner.go:130] > May 01 04:16:29 multinode-289800 kubelet[1525]: I0501 04:16:29.024408    1525 scope.go:117] "RemoveContainer" containerID="01deddefba52af094ad6ca083f5c2bee336ace05959dec5d34b15a9ba6cc2539"
	I0501 04:16:52.288525    4352 command_runner.go:130] > May 01 04:16:37 multinode-289800 kubelet[1525]: I0501 04:16:37.040204    1525 scope.go:117] "RemoveContainer" containerID="3244d1ee5ab428faf09a962609f2c940c36a998727a01b873d382eb5ee600ca3"
	I0501 04:16:52.288525    4352 command_runner.go:130] > May 01 04:16:37 multinode-289800 kubelet[1525]: E0501 04:16:37.057362    1525 iptables.go:577] "Could not set up iptables canary" err=<
	I0501 04:16:52.288525    4352 command_runner.go:130] > May 01 04:16:37 multinode-289800 kubelet[1525]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0501 04:16:52.288525    4352 command_runner.go:130] > May 01 04:16:37 multinode-289800 kubelet[1525]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0501 04:16:52.288525    4352 command_runner.go:130] > May 01 04:16:37 multinode-289800 kubelet[1525]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0501 04:16:52.288525    4352 command_runner.go:130] > May 01 04:16:37 multinode-289800 kubelet[1525]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0501 04:16:52.288525    4352 command_runner.go:130] > May 01 04:16:37 multinode-289800 kubelet[1525]: I0501 04:16:37.089866    1525 scope.go:117] "RemoveContainer" containerID="bbbe9bf276852c1e75b7b472a87e95dcf9a0871f6273a4c312d445eb91dfe06d"
	I0501 04:16:52.288525    4352 command_runner.go:130] > May 01 04:16:37 multinode-289800 kubelet[1525]: E0501 04:16:37.204127    1525 kuberuntime_manager.go:1450] "PodSandboxStatus of sandbox for pod" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 976a9ff433ccbe7ec8d65d4f2797a7885430504d883ab65524674ad5ab989737" podSandboxID="976a9ff433ccbe7ec8d65d4f2797a7885430504d883ab65524674ad5ab989737" pod="kube-system/kube-apiserver-multinode-289800"
	I0501 04:16:52.288525    4352 command_runner.go:130] > May 01 04:16:37 multinode-289800 kubelet[1525]: E0501 04:16:37.204257    1525 generic.go:453] "PLEG: Write status" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 976a9ff433ccbe7ec8d65d4f2797a7885430504d883ab65524674ad5ab989737" pod="kube-system/kube-apiserver-multinode-289800"
	I0501 04:16:52.288525    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 kubelet[1525]: I0501 04:16:47.967198    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c1e1e1d13f303dcd2ce93f0a883ff4415e684c864a3974a393b2aaba3328348"
	I0501 04:16:52.288525    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 kubelet[1525]: I0501 04:16:48.001452    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba9a40d190b009b916e22db66996ed829a6cc973db25f55dae89d747629a546b"
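The retry intervals recorded in the kubelet log above (durationBeforeRetry 8s at 04:15:50, 16s at 04:15:58, 32s at 04:16:14) show the kubelet doubling its wait after each failed MountVolume.SetUp while the "coredns" and "kube-root-ca.crt" objects are not yet registered. A minimal sketch of that doubling policy in Go, assuming an illustrative 4s seed and 2m cap (the kubelet's real constants live in nestedpendingoperations and may differ):

	package main

	import (
		"fmt"
		"time"
	)

	// nextBackoff doubles the current wait, clamped to max.
	func nextBackoff(cur, max time.Duration) time.Duration {
		if next := cur * 2; next < max {
			return next
		}
		return max
	}

	func main() {
		wait := 4 * time.Second
		for i := 0; i < 5; i++ {
			wait = nextBackoff(wait, 2*time.Minute)
			fmt.Println("retry in", wait) // 8s, 16s, 32s, 1m4s, 2m0s
		}
	}

Once the node reports ready ("Fast updating node status as it just became ready" at 04:16:16), the pending mounts resolve and the backoff chain stops.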
	I0501 04:16:52.349088    4352 logs.go:123] Gathering logs for kube-scheduler [eaf69fce5ee3] ...
	I0501 04:16:52.349088    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaf69fce5ee3"
	I0501 04:16:52.379701    4352 command_runner.go:130] ! I0501 04:15:39.300694       1 serving.go:380] Generated self-signed cert in-memory
	I0501 04:16:52.380642    4352 command_runner.go:130] ! W0501 04:15:42.419811       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0501 04:16:52.380693    4352 command_runner.go:130] ! W0501 04:15:42.419988       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0501 04:16:52.380730    4352 command_runner.go:130] ! W0501 04:15:42.420417       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0501 04:16:52.380780    4352 command_runner.go:130] ! W0501 04:15:42.420580       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0501 04:16:52.380780    4352 command_runner.go:130] ! I0501 04:15:42.513199       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0501 04:16:52.380855    4352 command_runner.go:130] ! I0501 04:15:42.513509       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:16:52.380855    4352 command_runner.go:130] ! I0501 04:15:42.517575       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0501 04:16:52.380855    4352 command_runner.go:130] ! I0501 04:15:42.517756       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0501 04:16:52.380855    4352 command_runner.go:130] ! I0501 04:15:42.519360       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0501 04:16:52.380855    4352 command_runner.go:130] ! I0501 04:15:42.519606       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0501 04:16:52.380855    4352 command_runner.go:130] ! I0501 04:15:42.619527       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
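The scheduler log above ends with the standard client-go informer handshake: each "Waiting for caches to sync" line blocks the component until its informer's local cache is primed, and the matching "Caches are synced" line marks the point where event processing may begin. A minimal sketch of that pattern against client-go (the in-cluster config and pod informer are illustrative stand-ins, not what the scheduler itself watches):

	package main

	import (
		"fmt"
		"time"

		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/cache"
	)

	func main() {
		cfg, err := rest.InClusterConfig() // assumes running inside a pod
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		factory := informers.NewSharedInformerFactory(client, 30*time.Second)
		pods := factory.Core().V1().Pods().Informer()

		stop := make(chan struct{})
		defer close(stop)
		factory.Start(stop)

		// Block until the local cache mirrors the API server --
		// the "Waiting for caches to sync" phase in the log.
		if !cache.WaitForCacheSync(stop, pods.HasSynced) {
			panic("cache never synced")
		}
		fmt.Println("caches are synced; safe to process events")
	}

The same handshake recurs throughout the kube-controller-manager log that follows, once per started controller.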
	I0501 04:16:52.382997    4352 logs.go:123] Gathering logs for kube-controller-manager [4b62556f40be] ...
	I0501 04:16:52.382997    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b62556f40be"
	I0501 04:16:52.419922    4352 command_runner.go:130] ! I0501 03:52:09.899238       1 serving.go:380] Generated self-signed cert in-memory
	I0501 04:16:52.419922    4352 command_runner.go:130] ! I0501 03:52:10.399398       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0501 04:16:52.420177    4352 command_runner.go:130] ! I0501 03:52:10.399463       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:16:52.420211    4352 command_runner.go:130] ! I0501 03:52:10.408364       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0501 04:16:52.420211    4352 command_runner.go:130] ! I0501 03:52:10.409326       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0501 04:16:52.420211    4352 command_runner.go:130] ! I0501 03:52:10.409600       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0501 04:16:52.420211    4352 command_runner.go:130] ! I0501 03:52:10.409803       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0501 04:16:52.420211    4352 command_runner.go:130] ! I0501 03:52:15.177592       1 controllermanager.go:759] "Started controller" controller="serviceaccount-token-controller"
	I0501 04:16:52.420211    4352 command_runner.go:130] ! I0501 03:52:15.177638       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0501 04:16:52.420211    4352 command_runner.go:130] ! I0501 03:52:15.223373       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0501 04:16:52.420211    4352 command_runner.go:130] ! I0501 03:52:15.223482       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0501 04:16:52.420211    4352 command_runner.go:130] ! I0501 03:52:15.224504       1 controllermanager.go:759] "Started controller" controller="endpointslice-controller"
	I0501 04:16:52.420211    4352 command_runner.go:130] ! I0501 03:52:15.255847       1 controllermanager.go:759] "Started controller" controller="endpointslice-mirroring-controller"
	I0501 04:16:52.420211    4352 command_runner.go:130] ! I0501 03:52:15.268264       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0501 04:16:52.420374    4352 command_runner.go:130] ! I0501 03:52:15.268388       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0501 04:16:52.420374    4352 command_runner.go:130] ! I0501 03:52:15.282022       1 shared_informer.go:320] Caches are synced for tokens
	I0501 04:16:52.420374    4352 command_runner.go:130] ! I0501 03:52:15.318646       1 controllermanager.go:759] "Started controller" controller="ttl-controller"
	I0501 04:16:52.420420    4352 command_runner.go:130] ! I0501 03:52:15.318861       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0501 04:16:52.420480    4352 command_runner.go:130] ! I0501 03:52:15.319086       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0501 04:16:52.420480    4352 command_runner.go:130] ! I0501 03:52:15.319104       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0501 04:16:52.420480    4352 command_runner.go:130] ! I0501 03:52:15.319092       1 controllermanager.go:737] "Warning: skipping controller" controller="node-route-controller"
	I0501 04:16:52.420523    4352 command_runner.go:130] ! I0501 03:52:15.340327       1 controllermanager.go:759] "Started controller" controller="ttl-after-finished-controller"
	I0501 04:16:52.420571    4352 command_runner.go:130] ! I0501 03:52:15.340404       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0501 04:16:52.420571    4352 command_runner.go:130] ! I0501 03:52:15.340939       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0501 04:16:52.420607    4352 command_runner.go:130] ! I0501 03:52:15.388809       1 controllermanager.go:759] "Started controller" controller="namespace-controller"
	I0501 04:16:52.420607    4352 command_runner.go:130] ! I0501 03:52:15.389274       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0501 04:16:52.420661    4352 command_runner.go:130] ! I0501 03:52:15.389544       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0501 04:16:52.420661    4352 command_runner.go:130] ! I0501 03:52:15.409254       1 controllermanager.go:759] "Started controller" controller="disruption-controller"
	I0501 04:16:52.420695    4352 command_runner.go:130] ! I0501 03:52:15.409799       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0501 04:16:52.420695    4352 command_runner.go:130] ! I0501 03:52:15.410052       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0501 04:16:52.420695    4352 command_runner.go:130] ! I0501 03:52:15.410231       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0501 04:16:52.420727    4352 command_runner.go:130] ! I0501 03:52:15.430420       1 controllermanager.go:759] "Started controller" controller="token-cleaner-controller"
	I0501 04:16:52.422595    4352 command_runner.go:130] ! I0501 03:52:15.432551       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0501 04:16:52.422595    4352 command_runner.go:130] ! I0501 03:52:15.432922       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0501 04:16:52.422595    4352 command_runner.go:130] ! I0501 03:52:15.433117       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0501 04:16:52.422595    4352 command_runner.go:130] ! E0501 03:52:15.460293       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0501 04:16:52.422595    4352 command_runner.go:130] ! I0501 03:52:15.460569       1 controllermanager.go:737] "Warning: skipping controller" controller="service-lb-controller"
	I0501 04:16:52.422595    4352 command_runner.go:130] ! I0501 03:52:15.483810       1 controllermanager.go:759] "Started controller" controller="persistentvolume-binder-controller"
	I0501 04:16:52.422595    4352 command_runner.go:130] ! I0501 03:52:15.484552       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0501 04:16:52.422595    4352 command_runner.go:130] ! I0501 03:52:15.487659       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0501 04:16:52.422595    4352 command_runner.go:130] ! I0501 03:52:15.507112       1 controllermanager.go:759] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0501 04:16:52.422595    4352 command_runner.go:130] ! I0501 03:52:15.507311       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0501 04:16:52.422595    4352 command_runner.go:130] ! I0501 03:52:15.507323       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0501 04:16:52.422595    4352 command_runner.go:130] ! I0501 03:52:15.547225       1 controllermanager.go:759] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0501 04:16:52.422595    4352 command_runner.go:130] ! I0501 03:52:15.547300       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0501 04:16:52.422595    4352 command_runner.go:130] ! I0501 03:52:15.547313       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0501 04:16:52.422595    4352 command_runner.go:130] ! I0501 03:52:15.547413       1 controllermanager.go:737] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0501 04:16:52.422595    4352 command_runner.go:130] ! I0501 03:52:15.652954       1 controllermanager.go:759] "Started controller" controller="pod-garbage-collector-controller"
	I0501 04:16:52.422595    4352 command_runner.go:130] ! I0501 03:52:15.653222       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0501 04:16:52.422595    4352 command_runner.go:130] ! I0501 03:52:15.653240       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0501 04:16:52.422595    4352 command_runner.go:130] ! I0501 03:52:15.940199       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0501 04:16:52.422595    4352 command_runner.go:130] ! I0501 03:52:15.940364       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0501 04:16:52.422595    4352 command_runner.go:130] ! I0501 03:52:15.940714       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0501 04:16:52.422595    4352 command_runner.go:130] ! I0501 03:52:15.940771       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0501 04:16:52.422595    4352 command_runner.go:130] ! I0501 03:52:15.940787       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0501 04:16:52.422595    4352 command_runner.go:130] ! I0501 03:52:15.941029       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0501 04:16:52.422595    4352 command_runner.go:130] ! I0501 03:52:15.941118       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0501 04:16:52.422595    4352 command_runner.go:130] ! I0501 03:52:15.941275       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0501 04:16:52.422595    4352 command_runner.go:130] ! I0501 03:52:15.941300       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0501 04:16:52.422595    4352 command_runner.go:130] ! I0501 03:52:15.941320       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0501 04:16:52.422595    4352 command_runner.go:130] ! I0501 03:52:15.941344       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0501 04:16:52.423257    4352 command_runner.go:130] ! I0501 03:52:15.941368       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:15.941386       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:15.941421       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:15.941561       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:15.941606       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:15.941627       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:15.941813       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:15.942150       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:15.942270       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:15.942319       1 controllermanager.go:759] "Started controller" controller="resourcequota-controller"
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:15.942400       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:15.942767       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:15.942791       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:16.183841       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:16.184178       1 controllermanager.go:759] "Started controller" controller="garbage-collector-controller"
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:16.187151       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:16.187185       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:16.436175       1 controllermanager.go:759] "Started controller" controller="replicaset-controller"
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:16.436331       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:16.436346       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:16.586198       1 controllermanager.go:759] "Started controller" controller="statefulset-controller"
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:16.586602       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:16.586642       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:16.736534       1 controllermanager.go:759] "Started controller" controller="clusterrole-aggregation-controller"
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:16.736573       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:16.736609       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:16.736694       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:16.736706       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:16.891482       1 controllermanager.go:759] "Started controller" controller="serviceaccount-controller"
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:16.891648       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:16.891663       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0501 04:16:52.423866    4352 command_runner.go:130] ! I0501 03:52:17.047956       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:17.050852       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:17.050877       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:17.050942       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:17.050952       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:17.051046       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:17.051073       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:17.051107       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:17.051130       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:17.051145       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:17.051309       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:17.051548       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:17.051654       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:17.186932       1 controllermanager.go:759] "Started controller" controller="bootstrap-signer-controller"
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:17.187092       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:27.350786       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:27.351166       1 controllermanager.go:759] "Started controller" controller="node-ipam-controller"
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:27.352026       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:27.353715       1 shared_informer.go:313] Waiting for caches to sync for node
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:27.368884       1 controllermanager.go:759] "Started controller" controller="ephemeral-volume-controller"
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:27.369241       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:27.369602       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:27.424182       1 controllermanager.go:759] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:27.424472       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:27.436663       1 controllermanager.go:759] "Started controller" controller="replicationcontroller-controller"
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:27.437080       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:27.437177       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:27.448635       1 controllermanager.go:759] "Started controller" controller="job-controller"
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:27.449170       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:27.449409       1 shared_informer.go:313] Waiting for caches to sync for job
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:27.475565       1 controllermanager.go:759] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:27.476051       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0501 04:16:52.424476    4352 command_runner.go:130] ! I0501 03:52:27.476166       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.479486       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.479596       1 controllermanager.go:759] "Started controller" controller="node-lifecycle-controller"
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.479975       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.480750       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.480823       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0501 04:16:52.424591    4352 command_runner.go:130] ! E0501 03:52:27.482546       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.483210       1 controllermanager.go:737] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.495640       1 controllermanager.go:759] "Started controller" controller="persistentvolume-expander-controller"
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.495973       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.496212       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.512223       1 controllermanager.go:759] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.512895       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.513075       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.514982       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.515311       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.515499       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.526940       1 controllermanager.go:759] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.527318       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.527351       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.647646       1 controllermanager.go:759] "Started controller" controller="cronjob-controller"
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.647752       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.647825       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.647836       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.692531       1 controllermanager.go:759] "Started controller" controller="taint-eviction-controller"
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.692762       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.693221       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.693310       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.846904       1 controllermanager.go:759] "Started controller" controller="endpoints-controller"
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.847065       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.847083       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.996304       1 controllermanager.go:759] "Started controller" controller="daemonset-controller"
	I0501 04:16:52.425167    4352 command_runner.go:130] ! I0501 03:52:27.996661       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0501 04:16:52.425167    4352 command_runner.go:130] ! I0501 03:52:27.996720       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0501 04:16:52.425167    4352 command_runner.go:130] ! I0501 03:52:28.149439       1 controllermanager.go:759] "Started controller" controller="deployment-controller"
	I0501 04:16:52.425331    4352 command_runner.go:130] ! I0501 03:52:28.149690       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0501 04:16:52.425375    4352 command_runner.go:130] ! I0501 03:52:28.149796       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0501 04:16:52.425505    4352 command_runner.go:130] ! I0501 03:52:28.194448       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0501 04:16:52.425505    4352 command_runner.go:130] ! I0501 03:52:28.194582       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0501 04:16:52.425659    4352 command_runner.go:130] ! I0501 03:52:28.346263       1 controllermanager.go:759] "Started controller" controller="persistentvolume-protection-controller"
	I0501 04:16:52.425794    4352 command_runner.go:130] ! I0501 03:52:28.351074       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.351267       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.389327       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.399508       1 shared_informer.go:320] Caches are synced for expand
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.401911       1 shared_informer.go:320] Caches are synced for namespace
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.402772       1 shared_informer.go:320] Caches are synced for service account
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.414043       1 shared_informer.go:320] Caches are synced for crt configmap
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.415874       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.427291       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.436570       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.437221       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.437315       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.440984       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.447483       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.447500       1 shared_informer.go:320] Caches are synced for endpoint
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.448218       1 shared_informer.go:320] Caches are synced for cronjob
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.451115       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.451167       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.451224       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.451346       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.451726       1 shared_informer.go:320] Caches are synced for deployment
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.451933       1 shared_informer.go:320] Caches are synced for job
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.451734       1 shared_informer.go:320] Caches are synced for PV protection
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.470928       1 shared_informer.go:320] Caches are synced for ephemeral
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.476835       1 shared_informer.go:320] Caches are synced for HPA
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.486851       1 shared_informer.go:320] Caches are synced for stateful set
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.487294       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.507418       1 shared_informer.go:320] Caches are synced for PVC protection
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.510921       1 shared_informer.go:320] Caches are synced for disruption
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.537591       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.575135       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0501 04:16:52.426587    4352 command_runner.go:130] ! I0501 03:52:28.595083       1 shared_informer.go:320] Caches are synced for resource quota
	I0501 04:16:52.426587    4352 command_runner.go:130] ! I0501 03:52:28.609954       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-289800\" does not exist"
	I0501 04:16:52.426587    4352 command_runner.go:130] ! I0501 03:52:28.621070       1 shared_informer.go:320] Caches are synced for TTL
	I0501 04:16:52.426587    4352 command_runner.go:130] ! I0501 03:52:28.625042       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0501 04:16:52.426587    4352 command_runner.go:130] ! I0501 03:52:28.628085       1 shared_informer.go:320] Caches are synced for attach detach
	I0501 04:16:52.426587    4352 command_runner.go:130] ! I0501 03:52:28.643871       1 shared_informer.go:320] Caches are synced for resource quota
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:28.653497       1 shared_informer.go:320] Caches are synced for GC
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:28.654871       1 shared_informer.go:320] Caches are synced for node
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:28.654996       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:28.655710       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:28.655972       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:28.656192       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:28.675109       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-289800" podCIDRs=["10.244.0.0/24"]
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:28.682120       1 shared_informer.go:320] Caches are synced for taint
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:28.682644       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:28.682782       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-289800"
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:28.682855       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:28.688787       1 shared_informer.go:320] Caches are synced for persistent volume
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:28.693874       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:28.697526       1 shared_informer.go:320] Caches are synced for daemon sets
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:29.088696       1 shared_informer.go:320] Caches are synced for garbage collector
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:29.088746       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:29.139257       1 shared_informer.go:320] Caches are synced for garbage collector
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:29.739066       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="528.452632ms"
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:29.796611       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="57.235573ms"
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:29.797135       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="429.196µs"
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:29.797745       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="61.4µs"
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:39.341653       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="93.1µs"
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:39.358462       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="77.3µs"
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:39.377150       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="79.9µs"
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:39.403208       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="64.2µs"
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:41.593793       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="63.7µs"
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:41.686793       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="54.969221ms"
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:41.713891       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="26.932914ms"
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:41.714840       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="58.4µs"
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:43.686562       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0501 04:16:52.427241    4352 command_runner.go:130] ! I0501 03:55:27.159233       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-289800-m02\" does not exist"
	I0501 04:16:52.427349    4352 command_runner.go:130] ! I0501 03:55:27.216693       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-289800-m02" podCIDRs=["10.244.1.0/24"]
	I0501 04:16:52.427349    4352 command_runner.go:130] ! I0501 03:55:28.718620       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-289800-m02"
	I0501 04:16:52.427349    4352 command_runner.go:130] ! I0501 03:55:50.611680       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:52.427349    4352 command_runner.go:130] ! I0501 03:56:17.356814       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="84.46504ms"
	I0501 04:16:52.427349    4352 command_runner.go:130] ! I0501 03:56:17.371366       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.143719ms"
	I0501 04:16:52.427349    4352 command_runner.go:130] ! I0501 03:56:17.372124       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="142.3µs"
	I0501 04:16:52.427349    4352 command_runner.go:130] ! I0501 03:56:17.379164       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.7µs"
	I0501 04:16:52.427349    4352 command_runner.go:130] ! I0501 03:56:19.725403       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.097702ms"
	I0501 04:16:52.427349    4352 command_runner.go:130] ! I0501 03:56:19.728196       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="2.611719ms"
	I0501 04:16:52.427349    4352 command_runner.go:130] ! I0501 03:56:19.839218       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.233167ms"
	I0501 04:16:52.427349    4352 command_runner.go:130] ! I0501 03:56:19.839355       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.1µs"
	I0501 04:16:52.427349    4352 command_runner.go:130] ! I0501 04:00:13.644614       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-289800-m03\" does not exist"
	I0501 04:16:52.427349    4352 command_runner.go:130] ! I0501 04:00:13.644755       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:52.427349    4352 command_runner.go:130] ! I0501 04:00:13.661934       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-289800-m03" podCIDRs=["10.244.2.0/24"]
	I0501 04:16:52.427349    4352 command_runner.go:130] ! I0501 04:00:13.802230       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-289800-m03"
	I0501 04:16:52.427349    4352 command_runner.go:130] ! I0501 04:00:36.640421       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:52.427349    4352 command_runner.go:130] ! I0501 04:08:13.948279       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:52.427349    4352 command_runner.go:130] ! I0501 04:10:57.898286       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:52.427349    4352 command_runner.go:130] ! I0501 04:11:04.117706       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:52.427349    4352 command_runner.go:130] ! I0501 04:11:04.120427       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-289800-m03\" does not exist"
	I0501 04:16:52.427349    4352 command_runner.go:130] ! I0501 04:11:04.128942       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-289800-m03" podCIDRs=["10.244.3.0/24"]
	I0501 04:16:52.427349    4352 command_runner.go:130] ! I0501 04:11:11.358226       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:52.427349    4352 command_runner.go:130] ! I0501 04:12:49.097072       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:54.971275    4352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 04:16:55.002948    4352 command_runner.go:130] > 1873
	I0501 04:16:55.004048    4352 api_server.go:72] duration metric: took 1m7.1057338s to wait for apiserver process to appear ...
	I0501 04:16:55.004146    4352 api_server.go:88] waiting for apiserver healthz status ...
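At this point the harness has confirmed the kube-apiserver process (pid 1873) and moves on to polling the apiserver's healthz status. A minimal Go sketch of that kind of readiness poll follows; it is illustrative only, not minikube's actual implementation: the endpoint URL (the node IP seen later in the kube-proxy log plus minikube's default apiserver port 8443) and the relaxed TLS verification are assumptions.

    // Poll an apiserver /healthz endpoint until it reports "ok" or a
    // deadline passes. Host, port, and InsecureSkipVerify are assumed
    // for illustration.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://172.28.209.152:8443/healthz") // assumed endpoint
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK && string(body) == "ok" {
                    fmt.Println("apiserver healthy")
                    return
                }
            }
            time.Sleep(time.Second)
        }
        fmt.Println("timed out waiting for apiserver healthz")
    }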
	I0501 04:16:55.014570    4352 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0501 04:16:55.045902    4352 command_runner.go:130] > 18cd30f3ad28
	I0501 04:16:55.045902    4352 logs.go:276] 1 containers: [18cd30f3ad28]
	I0501 04:16:55.059307    4352 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0501 04:16:55.087490    4352 command_runner.go:130] > 34892fdb6898
	I0501 04:16:55.088578    4352 logs.go:276] 1 containers: [34892fdb6898]
	I0501 04:16:55.100098    4352 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0501 04:16:55.125435    4352 command_runner.go:130] > b8a9b405d76b
	I0501 04:16:55.125435    4352 command_runner.go:130] > 8a0208aeafcf
	I0501 04:16:55.125435    4352 command_runner.go:130] > 15c4496e3a9f
	I0501 04:16:55.125435    4352 command_runner.go:130] > 3e8d5ff9a9e4
	I0501 04:16:55.125534    4352 logs.go:276] 4 containers: [b8a9b405d76b 8a0208aeafcf 15c4496e3a9f 3e8d5ff9a9e4]
	I0501 04:16:55.136812    4352 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0501 04:16:55.161323    4352 command_runner.go:130] > eaf69fce5ee3
	I0501 04:16:55.161323    4352 command_runner.go:130] > 06f1f84bfde1
	I0501 04:16:55.161323    4352 logs.go:276] 2 containers: [eaf69fce5ee3 06f1f84bfde1]
	I0501 04:16:55.171247    4352 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0501 04:16:55.209491    4352 command_runner.go:130] > 3efcc92f817e
	I0501 04:16:55.209538    4352 command_runner.go:130] > 502684407b0c
	I0501 04:16:55.209538    4352 logs.go:276] 2 containers: [3efcc92f817e 502684407b0c]
	I0501 04:16:55.221292    4352 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0501 04:16:55.245849    4352 command_runner.go:130] > 66a1b89e6733
	I0501 04:16:55.245849    4352 command_runner.go:130] > 4b62556f40be
	I0501 04:16:55.247168    4352 logs.go:276] 2 containers: [66a1b89e6733 4b62556f40be]
	I0501 04:16:55.260218    4352 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0501 04:16:55.288049    4352 command_runner.go:130] > b7cae3f6b88b
	I0501 04:16:55.288155    4352 command_runner.go:130] > 6d5f881ef398
	I0501 04:16:55.288155    4352 logs.go:276] 2 containers: [b7cae3f6b88b 6d5f881ef398]
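The seven docker ps runs above enumerate container IDs per control-plane component by name filter before log gathering begins. A minimal Go sketch of the same enumeration pattern, using only the flags visible verbatim in the log (--filter name=..., --format {{.ID}}); the exec-based loop is an assumption for illustration, not minikube's code.

    // List container IDs for each kube component the way the harness
    // does above: one `docker ps -a` per component name filter.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "k8s_kube-apiserver", "k8s_etcd", "k8s_coredns",
            "k8s_kube-scheduler", "k8s_kube-proxy",
            "k8s_kube-controller-manager", "k8s_kindnet",
        }
        for _, name := range components {
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name="+name, "--format", "{{.ID}}").Output()
            if err != nil {
                fmt.Println(name, "error:", err)
                continue
            }
            // One ID per output line, as in the log above.
            fmt.Println(name, "containers:", strings.Fields(string(out)))
        }
    }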
	I0501 04:16:55.288236    4352 logs.go:123] Gathering logs for kube-proxy [502684407b0c] ...
	I0501 04:16:55.288236    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 502684407b0c"
	I0501 04:16:55.318931    4352 command_runner.go:130] ! I0501 03:52:31.254714       1 server_linux.go:69] "Using iptables proxy"
	I0501 04:16:55.318931    4352 command_runner.go:130] ! I0501 03:52:31.309383       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.28.209.152"]
	I0501 04:16:55.318931    4352 command_runner.go:130] ! I0501 03:52:31.368810       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0501 04:16:55.318931    4352 command_runner.go:130] ! I0501 03:52:31.368955       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0501 04:16:55.318931    4352 command_runner.go:130] ! I0501 03:52:31.368982       1 server_linux.go:165] "Using iptables Proxier"
	I0501 04:16:55.318931    4352 command_runner.go:130] ! I0501 03:52:31.375383       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0501 04:16:55.318931    4352 command_runner.go:130] ! I0501 03:52:31.376367       1 server.go:872] "Version info" version="v1.30.0"
	I0501 04:16:55.318931    4352 command_runner.go:130] ! I0501 03:52:31.376406       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:16:55.318931    4352 command_runner.go:130] ! I0501 03:52:31.379637       1 config.go:192] "Starting service config controller"
	I0501 04:16:55.318931    4352 command_runner.go:130] ! I0501 03:52:31.380342       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0501 04:16:55.318931    4352 command_runner.go:130] ! I0501 03:52:31.380587       1 config.go:101] "Starting endpoint slice config controller"
	I0501 04:16:55.318931    4352 command_runner.go:130] ! I0501 03:52:31.380650       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0501 04:16:55.318931    4352 command_runner.go:130] ! I0501 03:52:31.383140       1 config.go:319] "Starting node config controller"
	I0501 04:16:55.318931    4352 command_runner.go:130] ! I0501 03:52:31.383173       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0501 04:16:55.318931    4352 command_runner.go:130] ! I0501 03:52:31.480698       1 shared_informer.go:320] Caches are synced for service config
	I0501 04:16:55.318931    4352 command_runner.go:130] ! I0501 03:52:31.481316       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0501 04:16:55.318931    4352 command_runner.go:130] ! I0501 03:52:31.483428       1 shared_informer.go:320] Caches are synced for node config
	I0501 04:16:55.322427    4352 logs.go:123] Gathering logs for Docker ...
	I0501 04:16:55.322519    4352 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0501 04:16:55.359690    4352 command_runner.go:130] > May 01 04:14:08 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0501 04:16:55.359783    4352 command_runner.go:130] > May 01 04:14:08 minikube cri-dockerd[222]: time="2024-05-01T04:14:08Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0501 04:16:55.359783    4352 command_runner.go:130] > May 01 04:14:08 minikube cri-dockerd[222]: time="2024-05-01T04:14:08Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0501 04:16:55.359783    4352 command_runner.go:130] > May 01 04:14:08 minikube cri-dockerd[222]: time="2024-05-01T04:14:08Z" level=info msg="Start docker client with request timeout 0s"
	I0501 04:16:55.359783    4352 command_runner.go:130] > May 01 04:14:08 minikube cri-dockerd[222]: time="2024-05-01T04:14:08Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0501 04:16:55.359876    4352 command_runner.go:130] > May 01 04:14:09 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0501 04:16:55.359876    4352 command_runner.go:130] > May 01 04:14:09 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0501 04:16:55.359876    4352 command_runner.go:130] > May 01 04:14:09 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0501 04:16:55.359944    4352 command_runner.go:130] > May 01 04:14:11 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0501 04:16:55.359944    4352 command_runner.go:130] > May 01 04:14:11 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0501 04:16:55.359984    4352 command_runner.go:130] > May 01 04:14:11 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0501 04:16:55.359984    4352 command_runner.go:130] > May 01 04:14:11 minikube cri-dockerd[414]: time="2024-05-01T04:14:11Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0501 04:16:55.360023    4352 command_runner.go:130] > May 01 04:14:11 minikube cri-dockerd[414]: time="2024-05-01T04:14:11Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0501 04:16:55.360023    4352 command_runner.go:130] > May 01 04:14:11 minikube cri-dockerd[414]: time="2024-05-01T04:14:11Z" level=info msg="Start docker client with request timeout 0s"
	I0501 04:16:55.360023    4352 command_runner.go:130] > May 01 04:14:11 minikube cri-dockerd[414]: time="2024-05-01T04:14:11Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0501 04:16:55.360023    4352 command_runner.go:130] > May 01 04:14:11 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0501 04:16:55.360023    4352 command_runner.go:130] > May 01 04:14:11 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0501 04:16:55.360023    4352 command_runner.go:130] > May 01 04:14:11 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0501 04:16:55.360023    4352 command_runner.go:130] > May 01 04:14:13 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0501 04:16:55.360023    4352 command_runner.go:130] > May 01 04:14:13 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0501 04:16:55.360023    4352 command_runner.go:130] > May 01 04:14:13 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0501 04:16:55.360023    4352 command_runner.go:130] > May 01 04:14:13 minikube cri-dockerd[423]: time="2024-05-01T04:14:13Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0501 04:16:55.360023    4352 command_runner.go:130] > May 01 04:14:13 minikube cri-dockerd[423]: time="2024-05-01T04:14:13Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0501 04:16:55.360023    4352 command_runner.go:130] > May 01 04:14:13 minikube cri-dockerd[423]: time="2024-05-01T04:14:13Z" level=info msg="Start docker client with request timeout 0s"
	I0501 04:16:55.360023    4352 command_runner.go:130] > May 01 04:14:13 minikube cri-dockerd[423]: time="2024-05-01T04:14:13Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0501 04:16:55.360023    4352 command_runner.go:130] > May 01 04:14:13 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0501 04:16:55.360023    4352 command_runner.go:130] > May 01 04:14:13 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0501 04:16:55.360023    4352 command_runner.go:130] > May 01 04:14:13 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0501 04:16:55.360023    4352 command_runner.go:130] > May 01 04:14:16 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0501 04:16:55.360023    4352 command_runner.go:130] > May 01 04:14:16 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0501 04:16:55.360023    4352 command_runner.go:130] > May 01 04:14:16 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0501 04:16:55.360023    4352 command_runner.go:130] > May 01 04:14:16 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0501 04:16:55.360023    4352 command_runner.go:130] > May 01 04:14:16 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0501 04:16:55.360023    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 systemd[1]: Starting Docker Application Container Engine...
	I0501 04:16:55.360356    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[651]: time="2024-05-01T04:14:59.653438562Z" level=info msg="Starting up"
	I0501 04:16:55.360356    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[651]: time="2024-05-01T04:14:59.657791992Z" level=info msg="containerd not running, starting managed containerd"
	I0501 04:16:55.360356    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[651]: time="2024-05-01T04:14:59.663198880Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=657
	I0501 04:16:55.360356    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.702542137Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0501 04:16:55.360356    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.732549261Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0501 04:16:55.360465    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.732711054Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0501 04:16:55.360465    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.732864148Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0501 04:16:55.360465    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.732947945Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0501 04:16:55.360465    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.734019203Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0501 04:16:55.360562    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.734463486Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0501 04:16:55.360599    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.735002764Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0501 04:16:55.360599    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.735178358Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0501 04:16:55.360599    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.735234755Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0501 04:16:55.360599    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.735254555Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0501 04:16:55.360673    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.735695937Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0501 04:16:55.360673    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.736590002Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0501 04:16:55.360755    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.739236298Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0501 04:16:55.360755    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.739286896Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0501 04:16:55.360871    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.739479489Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0501 04:16:55.360871    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.739575785Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0501 04:16:55.360948    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.740111064Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0501 04:16:55.360948    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.740186861Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0501 04:16:55.360948    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.740203361Z" level=info msg="metadata content store policy set" policy=shared
	I0501 04:16:55.360948    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.747848861Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0501 04:16:55.360948    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.747973456Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0501 04:16:55.360948    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748003155Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0501 04:16:55.361041    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748021254Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0501 04:16:55.361041    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748087351Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0501 04:16:55.361041    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748176348Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0501 04:16:55.361041    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748553033Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0501 04:16:55.361041    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748726426Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0501 04:16:55.361146    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748830822Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0501 04:16:55.361146    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748853521Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0501 04:16:55.361146    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748872121Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0501 04:16:55.361146    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748887020Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0501 04:16:55.361236    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748901420Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0501 04:16:55.361236    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748916819Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0501 04:16:55.361400    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748932318Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0501 04:16:55.361400    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748946618Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0501 04:16:55.361400    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748960717Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0501 04:16:55.361490    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748974817Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0501 04:16:55.361510    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748996916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.361510    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749013215Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.361510    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749071613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.361510    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749094412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.361589    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749109411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.361589    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749127511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.361589    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749141410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.361673    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749156310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.361673    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749171209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.361673    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749188008Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.361673    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749210407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.361755    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749227507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.361755    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749241106Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.361755    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749261705Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0501 04:16:55.361755    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749287004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.361836    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749377501Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.361836    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749401900Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0501 04:16:55.361836    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749458198Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0501 04:16:55.361836    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749553894Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0501 04:16:55.361836    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749626691Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0501 04:16:55.361945    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749759886Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0501 04:16:55.362035    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749839283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.362035    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749953278Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0501 04:16:55.362114    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749974077Z" level=info msg="NRI interface is disabled by configuration."
	I0501 04:16:55.362130    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.750421860Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0501 04:16:55.362130    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.750811045Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0501 04:16:55.362130    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.751024636Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0501 04:16:55.362130    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.751103833Z" level=info msg="containerd successfully booted in 0.052926s"
	I0501 04:16:55.362209    4352 command_runner.go:130] > May 01 04:15:00 multinode-289800 dockerd[651]: time="2024-05-01T04:15:00.725111442Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0501 04:16:55.362209    4352 command_runner.go:130] > May 01 04:15:00 multinode-289800 dockerd[651]: time="2024-05-01T04:15:00.993003995Z" level=info msg="Loading containers: start."
	I0501 04:16:55.362209    4352 command_runner.go:130] > May 01 04:15:01 multinode-289800 dockerd[651]: time="2024-05-01T04:15:01.418709237Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0501 04:16:55.362209    4352 command_runner.go:130] > May 01 04:15:01 multinode-289800 dockerd[651]: time="2024-05-01T04:15:01.511990518Z" level=info msg="Loading containers: done."
	I0501 04:16:55.362293    4352 command_runner.go:130] > May 01 04:15:01 multinode-289800 dockerd[651]: time="2024-05-01T04:15:01.539659513Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0501 04:16:55.362293    4352 command_runner.go:130] > May 01 04:15:01 multinode-289800 dockerd[651]: time="2024-05-01T04:15:01.540534438Z" level=info msg="Daemon has completed initialization"
	I0501 04:16:55.362293    4352 command_runner.go:130] > May 01 04:15:01 multinode-289800 dockerd[651]: time="2024-05-01T04:15:01.598935417Z" level=info msg="API listen on [::]:2376"
	I0501 04:16:55.362293    4352 command_runner.go:130] > May 01 04:15:01 multinode-289800 systemd[1]: Started Docker Application Container Engine.
	I0501 04:16:55.362293    4352 command_runner.go:130] > May 01 04:15:01 multinode-289800 dockerd[651]: time="2024-05-01T04:15:01.599463032Z" level=info msg="API listen on /var/run/docker.sock"
	I0501 04:16:55.362378    4352 command_runner.go:130] > May 01 04:15:27 multinode-289800 dockerd[651]: time="2024-05-01T04:15:27.764446334Z" level=info msg="Processing signal 'terminated'"
	I0501 04:16:55.362378    4352 command_runner.go:130] > May 01 04:15:27 multinode-289800 systemd[1]: Stopping Docker Application Container Engine...
	I0501 04:16:55.362417    4352 command_runner.go:130] > May 01 04:15:27 multinode-289800 dockerd[651]: time="2024-05-01T04:15:27.766325752Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0501 04:16:55.362442    4352 command_runner.go:130] > May 01 04:15:27 multinode-289800 dockerd[651]: time="2024-05-01T04:15:27.766547266Z" level=info msg="Daemon shutdown complete"
	I0501 04:16:55.362442    4352 command_runner.go:130] > May 01 04:15:27 multinode-289800 dockerd[651]: time="2024-05-01T04:15:27.766599570Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0501 04:16:55.362442    4352 command_runner.go:130] > May 01 04:15:27 multinode-289800 dockerd[651]: time="2024-05-01T04:15:27.766627071Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0501 04:16:55.362442    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 systemd[1]: docker.service: Deactivated successfully.
	I0501 04:16:55.362520    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 systemd[1]: Stopped Docker Application Container Engine.
	I0501 04:16:55.362520    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 systemd[1]: Starting Docker Application Container Engine...
	I0501 04:16:55.362520    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:28.848356633Z" level=info msg="Starting up"
	I0501 04:16:55.362520    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:28.852105170Z" level=info msg="containerd not running, starting managed containerd"
	I0501 04:16:55.362520    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:28.856097222Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1051
	I0501 04:16:55.362604    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.886653253Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0501 04:16:55.362604    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.918280652Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0501 04:16:55.362604    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.918435561Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0501 04:16:55.362604    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.918674977Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0501 04:16:55.362701    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.918835587Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0501 04:16:55.362701    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.918914392Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0501 04:16:55.362701    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.919007298Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0501 04:16:55.362782    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.919224411Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0501 04:16:55.362782    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.919342019Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0501 04:16:55.362782    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.919363920Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0501 04:16:55.362860    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.919374921Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0501 04:16:55.362860    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.919401422Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0501 04:16:55.362860    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.919522430Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0501 04:16:55.362940    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.922355909Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0501 04:16:55.362940    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.922472116Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0501 04:16:55.362940    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.922606725Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0501 04:16:55.363018    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.922701131Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0501 04:16:55.363018    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.922740333Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0501 04:16:55.363018    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.922844740Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0501 04:16:55.363097    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.922863441Z" level=info msg="metadata content store policy set" policy=shared
	I0501 04:16:55.363097    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.923199662Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0501 04:16:55.363097    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.923345572Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0501 04:16:55.363097    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.923371973Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0501 04:16:55.363097    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.923387074Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0501 04:16:55.363194    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.923416076Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0501 04:16:55.363194    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.923482380Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0501 04:16:55.363194    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.923717595Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0501 04:16:55.363276    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.923914208Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0501 04:16:55.363276    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924012314Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0501 04:16:55.363276    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924084218Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0501 04:16:55.363276    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924103120Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0501 04:16:55.363358    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924116520Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0501 04:16:55.363358    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924137922Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0501 04:16:55.363358    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924154823Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0501 04:16:55.363440    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924172824Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0501 04:16:55.363440    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924195925Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0501 04:16:55.363440    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924208026Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0501 04:16:55.363520    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924219327Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0501 04:16:55.363520    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924257229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.363520    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924272330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.363520    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924285031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.363520    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924297632Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.363602    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924325534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.363602    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924337534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.363602    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924348235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.363682    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924360536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.363682    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924373137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.363682    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924390538Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.363763    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924403039Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.363763    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924414139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.363763    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924426140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.363857    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924440741Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0501 04:16:55.363857    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924459642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.363857    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924475143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.363857    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924504745Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0501 04:16:55.363857    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924545247Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0501 04:16:55.363857    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924640554Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0501 04:16:55.363857    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924658655Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0501 04:16:55.364031    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924671555Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0501 04:16:55.364031    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924736560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.364120    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924890569Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0501 04:16:55.364120    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924908370Z" level=info msg="NRI interface is disabled by configuration."
	I0501 04:16:55.364210    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.925252392Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0501 04:16:55.364210    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.925540810Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0501 04:16:55.364210    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.925606615Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0501 04:16:55.364210    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.925720522Z" level=info msg="containerd successfully booted in 0.040328s"
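	The dockerd[1045]/dockerd[1051] pairing above is dockerd launching its managed containerd child and containerd coming up on the private socket /var/run/docker/containerd/containerd.sock. A minimal Go sketch of the matching liveness check against the engine itself (assuming the default /var/run/docker.sock and its /_ping health endpoint, neither of which is named in this log):

	    package main

	    import (
	        "context"
	        "fmt"
	        "io"
	        "net"
	        "net/http"
	        "time"
	    )

	    func main() {
	        // Dial the engine's unix socket directly; the host part of the
	        // URL is ignored once DialContext pins the socket path.
	        sock := "/var/run/docker.sock" // assumption: default engine socket
	        client := &http.Client{
	            Transport: &http.Transport{
	                DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
	                    return (&net.Dialer{}).DialContext(ctx, "unix", sock)
	                },
	            },
	            Timeout: 5 * time.Second,
	        }
	        resp, err := client.Get("http://docker/_ping") // engine health endpoint
	        if err != nil {
	            fmt.Println("daemon not reachable:", err)
	            return
	        }
	        defer resp.Body.Close()
	        body, _ := io.ReadAll(resp.Body)
	        fmt.Printf("ping: %s (%s)\n", resp.Status, body) // expect "OK"
	    }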
	I0501 04:16:55.364210    4352 command_runner.go:130] > May 01 04:15:29 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:29.902259635Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0501 04:16:55.364293    4352 command_runner.go:130] > May 01 04:15:29 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:29.938734241Z" level=info msg="Loading containers: start."
	I0501 04:16:55.364293    4352 command_runner.go:130] > May 01 04:15:30 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:30.252276255Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0501 04:16:55.364293    4352 command_runner.go:130] > May 01 04:15:30 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:30.346319398Z" level=info msg="Loading containers: done."
	I0501 04:16:55.364382    4352 command_runner.go:130] > May 01 04:15:30 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:30.374198460Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0501 04:16:55.364382    4352 command_runner.go:130] > May 01 04:15:30 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:30.374439776Z" level=info msg="Daemon has completed initialization"
	I0501 04:16:55.364382    4352 command_runner.go:130] > May 01 04:15:30 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:30.424572544Z" level=info msg="API listen on [::]:2376"
	I0501 04:16:55.364382    4352 command_runner.go:130] > May 01 04:15:30 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:30.424740154Z" level=info msg="API listen on /var/run/docker.sock"
	I0501 04:16:55.364382    4352 command_runner.go:130] > May 01 04:15:30 multinode-289800 systemd[1]: Started Docker Application Container Engine.
	I0501 04:16:55.364382    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0501 04:16:55.364470    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0501 04:16:55.364470    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0501 04:16:55.364470    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Start docker client with request timeout 0s"
	I0501 04:16:55.364470    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0501 04:16:55.364470    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Loaded network plugin cni"
	I0501 04:16:55.364579    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0501 04:16:55.364579    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0501 04:16:55.364579    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0501 04:16:55.364579    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0501 04:16:55.364716    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Start cri-dockerd grpc backend"
	I0501 04:16:55.364716    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 systemd[1]: Started CRI Interface for Docker Application Container Engine.
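	At this point cri-dockerd has loaded the cni network plugin, set cgroupDriver to cgroupfs, and started its gRPC backend, so the kubelet can speak CRI to Docker. Assuming the default cri-dockerd listener at unix:///var/run/cri-dockerd.sock and the k8s.io/cri-api v1 client (both assumptions; the socket path is not printed above), a quick version probe could look like:

	    package main

	    import (
	        "context"
	        "fmt"
	        "log"
	        "time"

	        "google.golang.org/grpc"
	        "google.golang.org/grpc/credentials/insecure"
	        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	    )

	    func main() {
	        // Assumed default cri-dockerd endpoint; adjust if configured otherwise.
	        conn, err := grpc.Dial("unix:///var/run/cri-dockerd.sock",
	            grpc.WithTransportCredentials(insecure.NewCredentials()))
	        if err != nil {
	            log.Fatal(err)
	        }
	        defer conn.Close()
	        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	        defer cancel()
	        resp, err := runtimeapi.NewRuntimeServiceClient(conn).
	            Version(ctx, &runtimeapi.VersionRequest{})
	        if err != nil {
	            log.Fatal(err)
	        }
	        fmt.Println(resp.RuntimeName, resp.RuntimeVersion) // e.g. "docker 26.0.2"
	    }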
	I0501 04:16:55.364716    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:37Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-8w9hq_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"9d509d032dc607c6f771d62e39b125d9ec4ef121fdbac0798c929fe3f1662c88\""
	I0501 04:16:55.364803    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:37Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-fc5497c4f-cc6mk_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"79bf9ebb58e36ddfba4654e8de212598f75bb256849f4fa384c80d54954f68f5\""
	I0501 04:16:55.364803    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:37Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-x9zrw_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"baf9e690eb533d1d1d65dee3905f907946c145ab490fd4e62c3d724a0ba12193\""
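	The three "CNI failed to retrieve network namespace" messages are the status hook running against containers that exited before this restart, so their network namespaces no longer exist; the kubelet recreates the sandboxes immediately afterwards (see the resolv.conf rewrites below), so a single burst at boot is usually harmless. A hypothetical triage helper, sketched in Go, counts how often each container ID recurs in a journal dump piped on stdin — persistent recurrence, rather than one burst, would point at a real CNI problem:

	    package main

	    import (
	        "bufio"
	        "fmt"
	        "os"
	        "regexp"
	    )

	    func main() {
	        // Matches the container ID in lines like the three above; the
	        // optional backslash copes with the escaped quotes in echoed logs.
	        re := regexp.MustCompile(`terminated container \\?"([0-9a-f]{12,64})`)
	        counts := map[string]int{}
	        sc := bufio.NewScanner(os.Stdin)
	        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines are long
	        for sc.Scan() {
	            if m := re.FindStringSubmatch(sc.Text()); m != nil {
	                counts[m[1]]++
	            }
	        }
	        for id, n := range counts {
	            fmt.Printf("%s seen %d time(s)\n", id[:12], n)
	        }
	    }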
	I0501 04:16:55.364892    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:37.812954162Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:55.364928    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:37.813140474Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:55.364928    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:37.813251281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.364928    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:37.813750813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.364928    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:37.908552604Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:55.364928    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:37.908932028Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:55.364928    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:37.908977330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.364928    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:37.909354354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.364928    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a8e27176eab83655d3f2a52c63326669ef8c796c68155930f53f421789d826f1/resolv.conf as [nameserver 172.28.208.1]"
	I0501 04:16:55.364928    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.022633513Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:55.365153    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.022720619Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:55.365153    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.022735220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.365153    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.024008700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.365153    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.032046108Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:55.365271    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.032104212Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:55.365295    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.032117713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.365295    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.032205718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.365295    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3fd53aa8d8f5d6402b604adf1c8c8ae2b5a8c80b90e94152f45e7cb16a71fe46/resolv.conf as [nameserver 172.28.208.1]"
	I0501 04:16:55.365295    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/51e331e75da779107616d5efa0d497152d9c85407f1c172c9ae536bcc2b22bad/resolv.conf as [nameserver 172.28.208.1]"
	I0501 04:16:55.365295    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6e076eed49263cec5b0b06bbaa425cab2bf4a4b0a05e6dfa37993b20dff5ed93/resolv.conf as [nameserver 172.28.208.1]"
	I0501 04:16:55.365295    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.361204210Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:55.365295    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.366294631Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:55.365295    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.366382437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.365295    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.366929671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.365295    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.427356590Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:55.365295    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.427966129Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:55.365295    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.428178542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.365295    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.428971092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.365295    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.563334483Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:55.365295    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.563717708Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:55.365295    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.568278296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.365295    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.568462908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.365295    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.619028803Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:55.365295    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.619423228Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:55.365295    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.619676644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.365295    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.620258481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.365853    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:42Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
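	This line is the kubelet relaying the node's Spec.PodCIDR to the runtime via the CRI UpdateRuntimeConfig call: this node is handed 10.244.0.0/24, while the 10.244.1.x addresses in the coredns logs further down belong to the second node's /24. A containment check with the standard net/netip package (pod IPs taken from the coredns entries later in this dump):

	    package main

	    import (
	        "fmt"
	        "net/netip"
	    )

	    func main() {
	        // PodCidr handed to cri-dockerd for this node, per the log line above.
	        cidr := netip.MustParsePrefix("10.244.0.0/24")
	        // Sample pod IPs seen in the coredns query logs below.
	        for _, s := range []string{"10.244.0.4", "10.244.1.2"} {
	            ip := netip.MustParseAddr(s)
	            fmt.Printf("%s in %s: %v\n", ip, cidr, cidr.Contains(ip))
	        }
	    }

	As expected, 10.244.0.4 is local to this node and 10.244.1.2 is not.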
	I0501 04:16:55.365853    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.647452681Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:55.365853    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.648388440Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:55.365853    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.648417242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.365853    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.648703160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.365853    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.650660084Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:55.365853    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.650945902Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:55.365853    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.652733715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.366123    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.653556567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.366123    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.703188303Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:55.366123    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.703325612Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:55.366123    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.703348713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.366123    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.704951615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.366123    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/65bff4b6a8ae020fee0da9e1a818c4bac4d9a43a831eb7b5550b254c1f181ec7/resolv.conf as [nameserver 172.28.208.1]"
	I0501 04:16:55.366123    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9055d30512df38a5bce19ed5afcfdc450a7bd87a1eb169342c8bc7a42e81666f/resolv.conf as [nameserver 172.28.208.1]"
	I0501 04:16:55.366123    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.160153282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:55.366123    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.160628512Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:55.366123    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.160751120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.366123    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.161166246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.366123    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f79e484da66a15667f79326d8bae0a570ba551fd2e02926fd663a292f6b15752/resolv.conf as [nameserver 172.28.208.1]"
	I0501 04:16:55.366123    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.303671652Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:55.366123    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.303759357Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:55.366123    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.304597710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.366123    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.304856126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.366123    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.623383256Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:55.366123    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.623630372Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:55.366123    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.623719877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.366123    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.624154405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.366123    4352 command_runner.go:130] > May 01 04:16:15 multinode-289800 dockerd[1045]: time="2024-05-01T04:16:15.086534690Z" level=info msg="ignoring event" container=01deddefba52af094ad6ca083f5c2bee336ace05959dec5d34b15a9ba6cc2539 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0501 04:16:55.366712    4352 command_runner.go:130] > May 01 04:16:15 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:15.087315924Z" level=info msg="shim disconnected" id=01deddefba52af094ad6ca083f5c2bee336ace05959dec5d34b15a9ba6cc2539 namespace=moby
	I0501 04:16:55.366712    4352 command_runner.go:130] > May 01 04:16:15 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:15.087789544Z" level=warning msg="cleaning up after shim disconnected" id=01deddefba52af094ad6ca083f5c2bee336ace05959dec5d34b15a9ba6cc2539 namespace=moby
	I0501 04:16:55.366712    4352 command_runner.go:130] > May 01 04:16:15 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:15.089400515Z" level=info msg="cleaning up dead shim" namespace=moby
	I0501 04:16:55.366712    4352 command_runner.go:130] > May 01 04:16:29 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:29.233206077Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:55.366830    4352 command_runner.go:130] > May 01 04:16:29 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:29.233350185Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:55.366865    4352 command_runner.go:130] > May 01 04:16:29 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:29.233373086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.366865    4352 command_runner.go:130] > May 01 04:16:29 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:29.235465402Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.366865    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.458837761Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:55.366947    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.459864323Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:55.366984    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.464281891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.366984    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.464897329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.366984    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.543149980Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:55.366984    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.543283788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:55.366984    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.543320690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.366984    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.543548404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.366984    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.598181021Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:55.366984    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.598854262Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:55.366984    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.599065375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.366984    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.600816581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.366984    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:16:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ba9a40d190b009b916e22db66996ed829a6cc973db25f55dae89d747629a546b/resolv.conf as [nameserver 172.28.208.1]"
	I0501 04:16:55.366984    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:16:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2c1e1e1d13f303dcd2ce93f0a883ff4415e684c864a3974a393b2aaba3328348/resolv.conf as [nameserver 172.28.208.1]"
	I0501 04:16:55.366984    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:16:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b85f507755ab5fd65a5328f5567d969dd5f974c01ee4c5d8e38f03dc6ec900a2/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0501 04:16:55.366984    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.282921443Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:55.366984    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.283150129Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:55.366984    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.283743193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.366984    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.291296831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.366984    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.360201124Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:55.366984    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.360588900Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:55.366984    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.360677995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.366984    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.361100969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.366984    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.575166498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:55.366984    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.575320589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:55.367571    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.575446381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.367571    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.576248232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.367571    4352 command_runner.go:130] > May 01 04:16:51 multinode-289800 dockerd[1045]: 2024/05/01 04:16:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:16:55.367571    4352 command_runner.go:130] > May 01 04:16:51 multinode-289800 dockerd[1045]: 2024/05/01 04:16:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:16:55.367716    4352 command_runner.go:130] > May 01 04:16:51 multinode-289800 dockerd[1045]: 2024/05/01 04:16:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:16:55.367780    4352 command_runner.go:130] > May 01 04:16:51 multinode-289800 dockerd[1045]: 2024/05/01 04:16:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:16:55.367805    4352 command_runner.go:130] > May 01 04:16:51 multinode-289800 dockerd[1045]: 2024/05/01 04:16:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:16:55.367805    4352 command_runner.go:130] > May 01 04:16:51 multinode-289800 dockerd[1045]: 2024/05/01 04:16:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:16:55.367851    4352 command_runner.go:130] > May 01 04:16:51 multinode-289800 dockerd[1045]: 2024/05/01 04:16:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:16:55.367893    4352 command_runner.go:130] > May 01 04:16:51 multinode-289800 dockerd[1045]: 2024/05/01 04:16:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:16:55.367893    4352 command_runner.go:130] > May 01 04:16:51 multinode-289800 dockerd[1045]: 2024/05/01 04:16:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:16:55.368063    4352 command_runner.go:130] > May 01 04:16:51 multinode-289800 dockerd[1045]: 2024/05/01 04:16:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:16:55.368063    4352 command_runner.go:130] > May 01 04:16:51 multinode-289800 dockerd[1045]: 2024/05/01 04:16:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:16:55.368063    4352 command_runner.go:130] > May 01 04:16:52 multinode-289800 dockerd[1045]: 2024/05/01 04:16:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:16:55.368063    4352 command_runner.go:130] > May 01 04:16:52 multinode-289800 dockerd[1045]: 2024/05/01 04:16:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:16:55.368063    4352 command_runner.go:130] > May 01 04:16:52 multinode-289800 dockerd[1045]: 2024/05/01 04:16:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:16:55.368063    4352 command_runner.go:130] > May 01 04:16:55 multinode-289800 dockerd[1045]: 2024/05/01 04:16:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
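	The run of "superfluous response.WriteHeader call" lines is Go's net/http server inside dockerd warning that a handler — here wrapped by the otelhttp instrumentation — set the response status more than once; it is noisy but harmless. The warning is reproducible with nothing but the standard library:

	    package main

	    import (
	        "log"
	        "net/http"
	    )

	    func main() {
	        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
	            w.WriteHeader(http.StatusOK)
	            // The second call is ignored, and the server logs the same
	            // "http: superfluous response.WriteHeader call" seen in dockerd.
	            w.WriteHeader(http.StatusTeapot)
	        })
	        log.Fatal(http.ListenAndServe("127.0.0.1:8080", nil))
	    }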
	I0501 04:16:55.406201    4352 logs.go:123] Gathering logs for coredns [3e8d5ff9a9e4] ...
	I0501 04:16:55.406201    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8d5ff9a9e4"
	I0501 04:16:55.441447    4352 command_runner.go:130] > .:53
	I0501 04:16:55.441447    4352 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 93f4fe825695a41d5751760d66496ca9593f0bf8b9ec4239a6fbc33c30b6f070cfe5a066c5977052ab0ea80abbafa6083758e385108cb741ae48dc4b780ff0f9
	I0501 04:16:55.441447    4352 command_runner.go:130] > CoreDNS-1.11.1
	I0501 04:16:55.441447    4352 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0501 04:16:55.441658    4352 command_runner.go:130] > [INFO] 127.0.0.1:47823 - 12804 "HINFO IN 6026210510891441927.5093937837002421400. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.138242746s
	I0501 04:16:55.441658    4352 command_runner.go:130] > [INFO] 10.244.0.4:41822 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.208275106s
	I0501 04:16:55.441658    4352 command_runner.go:130] > [INFO] 10.244.0.4:42126 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.044846324s
	I0501 04:16:55.441658    4352 command_runner.go:130] > [INFO] 10.244.1.2:55497 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000133701s
	I0501 04:16:55.441658    4352 command_runner.go:130] > [INFO] 10.244.1.2:47095 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000068901s
	I0501 04:16:55.441730    4352 command_runner.go:130] > [INFO] 10.244.0.4:34122 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000644805s
	I0501 04:16:55.441730    4352 command_runner.go:130] > [INFO] 10.244.0.4:46878 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000252202s
	I0501 04:16:55.441791    4352 command_runner.go:130] > [INFO] 10.244.0.4:40098 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000136701s
	I0501 04:16:55.441791    4352 command_runner.go:130] > [INFO] 10.244.0.4:35873 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.03321874s
	I0501 04:16:55.441791    4352 command_runner.go:130] > [INFO] 10.244.1.2:36243 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.016690721s
	I0501 04:16:55.441791    4352 command_runner.go:130] > [INFO] 10.244.1.2:38582 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000648s
	I0501 04:16:55.441791    4352 command_runner.go:130] > [INFO] 10.244.1.2:43903 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000106801s
	I0501 04:16:55.441791    4352 command_runner.go:130] > [INFO] 10.244.1.2:34736 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000102201s
	I0501 04:16:55.441880    4352 command_runner.go:130] > [INFO] 10.244.0.4:54471 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000213002s
	I0501 04:16:55.441880    4352 command_runner.go:130] > [INFO] 10.244.0.4:34585 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000266702s
	I0501 04:16:55.441925    4352 command_runner.go:130] > [INFO] 10.244.1.2:55135 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142801s
	I0501 04:16:55.441925    4352 command_runner.go:130] > [INFO] 10.244.1.2:53626 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000732s
	I0501 04:16:55.441968    4352 command_runner.go:130] > [INFO] 10.244.0.4:57975 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000425703s
	I0501 04:16:55.441968    4352 command_runner.go:130] > [INFO] 10.244.0.4:51644 - 5 "PTR IN 1.208.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000121401s
	I0501 04:16:55.441968    4352 command_runner.go:130] > [INFO] 10.244.1.2:42930 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000246601s
	I0501 04:16:55.442011    4352 command_runner.go:130] > [INFO] 10.244.1.2:59495 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000199302s
	I0501 04:16:55.442011    4352 command_runner.go:130] > [INFO] 10.244.1.2:34672 - 5 "PTR IN 1.208.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000155401s
	I0501 04:16:55.442011    4352 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0501 04:16:55.442069    4352 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
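	The query log above shows this coredns replica serving behind the 10.96.0.10 service VIP (the PTR lookups for 10.0.96.10.in-addr.arpa are the reverse of that address) until SIGTERM puts it into its 5s lameduck window. To issue the same in-cluster lookups by hand, a Go resolver can be pinned to that VIP — an assumption that this runs inside a pod, since 10.96.0.10 is only routable in-cluster:

	    package main

	    import (
	        "context"
	        "fmt"
	        "net"
	        "time"
	    )

	    func main() {
	        // Pin the resolver to the cluster DNS service IP instead of the
	        // host's resolv.conf.
	        r := &net.Resolver{
	            PreferGo: true,
	            Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
	                return (&net.Dialer{Timeout: 2 * time.Second}).DialContext(ctx, network, "10.96.0.10:53")
	            },
	        }
	        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	        defer cancel()
	        addrs, err := r.LookupHost(ctx, "kubernetes.default.svc.cluster.local")
	        fmt.Println(addrs, err)
	    }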
	I0501 04:16:55.444176    4352 logs.go:123] Gathering logs for coredns [15c4496e3a9f] ...
	I0501 04:16:55.444211    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15c4496e3a9f"
	I0501 04:16:55.477397    4352 command_runner.go:130] > .:53
	I0501 04:16:55.477478    4352 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 93f4fe825695a41d5751760d66496ca9593f0bf8b9ec4239a6fbc33c30b6f070cfe5a066c5977052ab0ea80abbafa6083758e385108cb741ae48dc4b780ff0f9
	I0501 04:16:55.477478    4352 command_runner.go:130] > CoreDNS-1.11.1
	I0501 04:16:55.477478    4352 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0501 04:16:55.477478    4352 command_runner.go:130] > [INFO] 127.0.0.1:39552 - 50904 "HINFO IN 5304382971668517624.9064195615153089880. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.182051644s
	I0501 04:16:55.477478    4352 command_runner.go:130] > [INFO] 10.244.0.4:36718 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000271601s
	I0501 04:16:55.477478    4352 command_runner.go:130] > [INFO] 10.244.0.4:43708 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.179550625s
	I0501 04:16:55.477478    4352 command_runner.go:130] > [INFO] 10.244.1.2:58483 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144401s
	I0501 04:16:55.477478    4352 command_runner.go:130] > [INFO] 10.244.1.2:60628 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.0000807s
	I0501 04:16:55.477478    4352 command_runner.go:130] > [INFO] 10.244.0.4:37023 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.037009067s
	I0501 04:16:55.477478    4352 command_runner.go:130] > [INFO] 10.244.0.4:35134 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000257602s
	I0501 04:16:55.477478    4352 command_runner.go:130] > [INFO] 10.244.0.4:42831 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000330103s
	I0501 04:16:55.477478    4352 command_runner.go:130] > [INFO] 10.244.0.4:35030 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000223102s
	I0501 04:16:55.477478    4352 command_runner.go:130] > [INFO] 10.244.1.2:54088 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000207601s
	I0501 04:16:55.477478    4352 command_runner.go:130] > [INFO] 10.244.1.2:39978 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000230801s
	I0501 04:16:55.477478    4352 command_runner.go:130] > [INFO] 10.244.1.2:55944 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000162801s
	I0501 04:16:55.477478    4352 command_runner.go:130] > [INFO] 10.244.1.2:53350 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000088901s
	I0501 04:16:55.477478    4352 command_runner.go:130] > [INFO] 10.244.0.4:33705 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000251702s
	I0501 04:16:55.477478    4352 command_runner.go:130] > [INFO] 10.244.0.4:58457 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000202201s
	I0501 04:16:55.477478    4352 command_runner.go:130] > [INFO] 10.244.1.2:55547 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117201s
	I0501 04:16:55.477478    4352 command_runner.go:130] > [INFO] 10.244.1.2:52015 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000146501s
	I0501 04:16:55.477478    4352 command_runner.go:130] > [INFO] 10.244.0.4:59703 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000247901s
	I0501 04:16:55.477478    4352 command_runner.go:130] > [INFO] 10.244.0.4:43545 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000196701s
	I0501 04:16:55.477478    4352 command_runner.go:130] > [INFO] 10.244.1.2:36180 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000726s
	I0501 04:16:55.477478    4352 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0501 04:16:55.477478    4352 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0501 04:16:55.479510    4352 logs.go:123] Gathering logs for kube-scheduler [eaf69fce5ee3] ...
	I0501 04:16:55.479541    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaf69fce5ee3"
	I0501 04:16:55.510830    4352 command_runner.go:130] ! I0501 04:15:39.300694       1 serving.go:380] Generated self-signed cert in-memory
	I0501 04:16:55.511324    4352 command_runner.go:130] ! W0501 04:15:42.419811       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0501 04:16:55.511401    4352 command_runner.go:130] ! W0501 04:15:42.419988       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0501 04:16:55.511401    4352 command_runner.go:130] ! W0501 04:15:42.420417       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0501 04:16:55.511401    4352 command_runner.go:130] ! W0501 04:15:42.420580       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0501 04:16:55.511401    4352 command_runner.go:130] ! I0501 04:15:42.513199       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0501 04:16:55.511401    4352 command_runner.go:130] ! I0501 04:15:42.513509       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:16:55.511401    4352 command_runner.go:130] ! I0501 04:15:42.517575       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0501 04:16:55.511401    4352 command_runner.go:130] ! I0501 04:15:42.517756       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0501 04:16:55.511401    4352 command_runner.go:130] ! I0501 04:15:42.519360       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0501 04:16:55.511401    4352 command_runner.go:130] ! I0501 04:15:42.519606       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0501 04:16:55.511401    4352 command_runner.go:130] ! I0501 04:15:42.619527       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0501 04:16:55.514328    4352 logs.go:123] Gathering logs for kube-scheduler [06f1f84bfde1] ...
	I0501 04:16:55.514328    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f1f84bfde1"
	I0501 04:16:55.550601    4352 command_runner.go:130] ! I0501 03:52:10.476758       1 serving.go:380] Generated self-signed cert in-memory
	I0501 04:16:55.550678    4352 command_runner.go:130] ! W0501 03:52:12.175400       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0501 04:16:55.550678    4352 command_runner.go:130] ! W0501 03:52:12.175551       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0501 04:16:55.550678    4352 command_runner.go:130] ! W0501 03:52:12.175587       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0501 04:16:55.550678    4352 command_runner.go:130] ! W0501 03:52:12.175678       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0501 04:16:55.550678    4352 command_runner.go:130] ! I0501 03:52:12.246151       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0501 04:16:55.550678    4352 command_runner.go:130] ! I0501 03:52:12.246312       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:16:55.550678    4352 command_runner.go:130] ! I0501 03:52:12.251800       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0501 04:16:55.550678    4352 command_runner.go:130] ! I0501 03:52:12.252170       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0501 04:16:55.550678    4352 command_runner.go:130] ! I0501 03:52:12.253709       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0501 04:16:55.550678    4352 command_runner.go:130] ! I0501 03:52:12.254160       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0501 04:16:55.550678    4352 command_runner.go:130] ! W0501 03:52:12.257352       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0501 04:16:55.550678    4352 command_runner.go:130] ! E0501 03:52:12.257411       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0501 04:16:55.550678    4352 command_runner.go:130] ! W0501 03:52:12.261549       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0501 04:16:55.550678    4352 command_runner.go:130] ! E0501 03:52:12.261670       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0501 04:16:55.550678    4352 command_runner.go:130] ! W0501 03:52:12.263856       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0501 04:16:55.550678    4352 command_runner.go:130] ! E0501 03:52:12.263906       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0501 04:16:55.550678    4352 command_runner.go:130] ! W0501 03:52:12.270038       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:55.550678    4352 command_runner.go:130] ! E0501 03:52:12.270597       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:55.550678    4352 command_runner.go:130] ! W0501 03:52:12.271080       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:55.550678    4352 command_runner.go:130] ! E0501 03:52:12.271309       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:55.550678    4352 command_runner.go:130] ! W0501 03:52:12.271808       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0501 04:16:55.551240    4352 command_runner.go:130] ! E0501 03:52:12.272098       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0501 04:16:55.551291    4352 command_runner.go:130] ! W0501 03:52:12.272396       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0501 04:16:55.551291    4352 command_runner.go:130] ! W0501 03:52:12.273177       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0501 04:16:55.551356    4352 command_runner.go:130] ! E0501 03:52:12.273396       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0501 04:16:55.551393    4352 command_runner.go:130] ! W0501 03:52:12.273765       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0501 04:16:55.551455    4352 command_runner.go:130] ! E0501 03:52:12.273964       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0501 04:16:55.551455    4352 command_runner.go:130] ! W0501 03:52:12.274273       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0501 04:16:55.551455    4352 command_runner.go:130] ! E0501 03:52:12.274741       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0501 04:16:55.551455    4352 command_runner.go:130] ! E0501 03:52:12.275083       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0501 04:16:55.551455    4352 command_runner.go:130] ! W0501 03:52:12.275448       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:55.551455    4352 command_runner.go:130] ! E0501 03:52:12.275752       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:55.551455    4352 command_runner.go:130] ! W0501 03:52:12.276841       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0501 04:16:55.551455    4352 command_runner.go:130] ! E0501 03:52:12.278071       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0501 04:16:55.551455    4352 command_runner.go:130] ! W0501 03:52:12.277438       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0501 04:16:55.551455    4352 command_runner.go:130] ! E0501 03:52:12.278555       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0501 04:16:55.551455    4352 command_runner.go:130] ! W0501 03:52:12.279824       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:55.551455    4352 command_runner.go:130] ! E0501 03:52:12.280326       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:55.551455    4352 command_runner.go:130] ! W0501 03:52:12.280272       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0501 04:16:55.551455    4352 command_runner.go:130] ! E0501 03:52:12.280893       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0501 04:16:55.551917    4352 command_runner.go:130] ! W0501 03:52:13.100723       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:55.551969    4352 command_runner.go:130] ! E0501 03:52:13.101238       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:55.551969    4352 command_runner.go:130] ! W0501 03:52:13.102451       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0501 04:16:55.551969    4352 command_runner.go:130] ! E0501 03:52:13.102804       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0501 04:16:55.552081    4352 command_runner.go:130] ! W0501 03:52:13.188414       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0501 04:16:55.552081    4352 command_runner.go:130] ! E0501 03:52:13.188662       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0501 04:16:55.552139    4352 command_runner.go:130] ! W0501 03:52:13.194299       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0501 04:16:55.552139    4352 command_runner.go:130] ! E0501 03:52:13.194526       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0501 04:16:55.552202    4352 command_runner.go:130] ! W0501 03:52:13.234721       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0501 04:16:55.552238    4352 command_runner.go:130] ! E0501 03:52:13.235310       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0501 04:16:55.552238    4352 command_runner.go:130] ! W0501 03:52:13.292208       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0501 04:16:55.552238    4352 command_runner.go:130] ! E0501 03:52:13.292830       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0501 04:16:55.552332    4352 command_runner.go:130] ! W0501 03:52:13.389881       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0501 04:16:55.552370    4352 command_runner.go:130] ! E0501 03:52:13.390057       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0501 04:16:55.552370    4352 command_runner.go:130] ! W0501 03:52:13.433548       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0501 04:16:55.552419    4352 command_runner.go:130] ! E0501 03:52:13.433622       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0501 04:16:55.552456    4352 command_runner.go:130] ! W0501 03:52:13.511617       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:55.552508    4352 command_runner.go:130] ! E0501 03:52:13.511761       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:55.552508    4352 command_runner.go:130] ! W0501 03:52:13.522760       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:55.552542    4352 command_runner.go:130] ! E0501 03:52:13.522812       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:55.552579    4352 command_runner.go:130] ! W0501 03:52:13.723200       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0501 04:16:55.552613    4352 command_runner.go:130] ! E0501 03:52:13.723365       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0501 04:16:55.552668    4352 command_runner.go:130] ! W0501 03:52:13.767195       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0501 04:16:55.552710    4352 command_runner.go:130] ! E0501 03:52:13.767262       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0501 04:16:55.552763    4352 command_runner.go:130] ! W0501 03:52:13.799936       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:55.552805    4352 command_runner.go:130] ! E0501 03:52:13.799967       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:55.552805    4352 command_runner.go:130] ! W0501 03:52:13.840187       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0501 04:16:55.552873    4352 command_runner.go:130] ! E0501 03:52:13.840304       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0501 04:16:55.552909    4352 command_runner.go:130] ! W0501 03:52:13.853401       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0501 04:16:55.552909    4352 command_runner.go:130] ! E0501 03:52:13.853454       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0501 04:16:55.552951    4352 command_runner.go:130] ! I0501 03:52:16.553388       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0501 04:16:55.552951    4352 command_runner.go:130] ! E0501 04:13:09.401188       1 run.go:74] "command failed" err="finished without leader elect"
	I0501 04:16:55.565171    4352 logs.go:123] Gathering logs for kube-proxy [3efcc92f817e] ...
	I0501 04:16:55.565171    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3efcc92f817e"
	I0501 04:16:55.596340    4352 command_runner.go:130] ! I0501 04:15:45.132138       1 server_linux.go:69] "Using iptables proxy"
	I0501 04:16:55.596430    4352 command_runner.go:130] ! I0501 04:15:45.231202       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.28.209.199"]
	I0501 04:16:55.596688    4352 command_runner.go:130] ! I0501 04:15:45.502838       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0501 04:16:55.596719    4352 command_runner.go:130] ! I0501 04:15:45.506945       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0501 04:16:55.596719    4352 command_runner.go:130] ! I0501 04:15:45.506980       1 server_linux.go:165] "Using iptables Proxier"
	I0501 04:16:55.596719    4352 command_runner.go:130] ! I0501 04:15:45.527138       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0501 04:16:55.596719    4352 command_runner.go:130] ! I0501 04:15:45.530735       1 server.go:872] "Version info" version="v1.30.0"
	I0501 04:16:55.596719    4352 command_runner.go:130] ! I0501 04:15:45.530796       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:16:55.596719    4352 command_runner.go:130] ! I0501 04:15:45.533247       1 config.go:192] "Starting service config controller"
	I0501 04:16:55.596719    4352 command_runner.go:130] ! I0501 04:15:45.547850       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0501 04:16:55.596719    4352 command_runner.go:130] ! I0501 04:15:45.533551       1 config.go:101] "Starting endpoint slice config controller"
	I0501 04:16:55.596719    4352 command_runner.go:130] ! I0501 04:15:45.549105       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0501 04:16:55.596719    4352 command_runner.go:130] ! I0501 04:15:45.550003       1 config.go:319] "Starting node config controller"
	I0501 04:16:55.596719    4352 command_runner.go:130] ! I0501 04:15:45.550016       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0501 04:16:55.596719    4352 command_runner.go:130] ! I0501 04:15:45.650245       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0501 04:16:55.596719    4352 command_runner.go:130] ! I0501 04:15:45.650488       1 shared_informer.go:320] Caches are synced for node config
	I0501 04:16:55.596719    4352 command_runner.go:130] ! I0501 04:15:45.650691       1 shared_informer.go:320] Caches are synced for service config
	I0501 04:16:55.599371    4352 logs.go:123] Gathering logs for kube-controller-manager [4b62556f40be] ...
	I0501 04:16:55.599450    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b62556f40be"
	I0501 04:16:55.632589    4352 command_runner.go:130] ! I0501 03:52:09.899238       1 serving.go:380] Generated self-signed cert in-memory
	I0501 04:16:55.632688    4352 command_runner.go:130] ! I0501 03:52:10.399398       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0501 04:16:55.632688    4352 command_runner.go:130] ! I0501 03:52:10.399463       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:16:55.632688    4352 command_runner.go:130] ! I0501 03:52:10.408364       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0501 04:16:55.632688    4352 command_runner.go:130] ! I0501 03:52:10.409326       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0501 04:16:55.632688    4352 command_runner.go:130] ! I0501 03:52:10.409600       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0501 04:16:55.632688    4352 command_runner.go:130] ! I0501 03:52:10.409803       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0501 04:16:55.632688    4352 command_runner.go:130] ! I0501 03:52:15.177592       1 controllermanager.go:759] "Started controller" controller="serviceaccount-token-controller"
	I0501 04:16:55.632688    4352 command_runner.go:130] ! I0501 03:52:15.177638       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0501 04:16:55.632688    4352 command_runner.go:130] ! I0501 03:52:15.223373       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0501 04:16:55.632688    4352 command_runner.go:130] ! I0501 03:52:15.223482       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0501 04:16:55.632688    4352 command_runner.go:130] ! I0501 03:52:15.224504       1 controllermanager.go:759] "Started controller" controller="endpointslice-controller"
	I0501 04:16:55.632688    4352 command_runner.go:130] ! I0501 03:52:15.255847       1 controllermanager.go:759] "Started controller" controller="endpointslice-mirroring-controller"
	I0501 04:16:55.632688    4352 command_runner.go:130] ! I0501 03:52:15.268264       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0501 04:16:55.633373    4352 command_runner.go:130] ! I0501 03:52:15.268388       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0501 04:16:55.633437    4352 command_runner.go:130] ! I0501 03:52:15.282022       1 shared_informer.go:320] Caches are synced for tokens
	I0501 04:16:55.633437    4352 command_runner.go:130] ! I0501 03:52:15.318646       1 controllermanager.go:759] "Started controller" controller="ttl-controller"
	I0501 04:16:55.633437    4352 command_runner.go:130] ! I0501 03:52:15.318861       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0501 04:16:55.633437    4352 command_runner.go:130] ! I0501 03:52:15.319086       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0501 04:16:55.633437    4352 command_runner.go:130] ! I0501 03:52:15.319104       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0501 04:16:55.633437    4352 command_runner.go:130] ! I0501 03:52:15.319092       1 controllermanager.go:737] "Warning: skipping controller" controller="node-route-controller"
	I0501 04:16:55.633437    4352 command_runner.go:130] ! I0501 03:52:15.340327       1 controllermanager.go:759] "Started controller" controller="ttl-after-finished-controller"
	I0501 04:16:55.633437    4352 command_runner.go:130] ! I0501 03:52:15.340404       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0501 04:16:55.633437    4352 command_runner.go:130] ! I0501 03:52:15.340939       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0501 04:16:55.633437    4352 command_runner.go:130] ! I0501 03:52:15.388809       1 controllermanager.go:759] "Started controller" controller="namespace-controller"
	I0501 04:16:55.633437    4352 command_runner.go:130] ! I0501 03:52:15.389274       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0501 04:16:55.633437    4352 command_runner.go:130] ! I0501 03:52:15.389544       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0501 04:16:55.633437    4352 command_runner.go:130] ! I0501 03:52:15.409254       1 controllermanager.go:759] "Started controller" controller="disruption-controller"
	I0501 04:16:55.633437    4352 command_runner.go:130] ! I0501 03:52:15.409799       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0501 04:16:55.633437    4352 command_runner.go:130] ! I0501 03:52:15.410052       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0501 04:16:55.633437    4352 command_runner.go:130] ! I0501 03:52:15.410231       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0501 04:16:55.633437    4352 command_runner.go:130] ! I0501 03:52:15.430420       1 controllermanager.go:759] "Started controller" controller="token-cleaner-controller"
	I0501 04:16:55.633437    4352 command_runner.go:130] ! I0501 03:52:15.432551       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0501 04:16:55.633437    4352 command_runner.go:130] ! I0501 03:52:15.432922       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0501 04:16:55.633437    4352 command_runner.go:130] ! I0501 03:52:15.433117       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0501 04:16:55.633437    4352 command_runner.go:130] ! E0501 03:52:15.460293       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0501 04:16:55.633437    4352 command_runner.go:130] ! I0501 03:52:15.460569       1 controllermanager.go:737] "Warning: skipping controller" controller="service-lb-controller"
	I0501 04:16:55.634091    4352 command_runner.go:130] ! I0501 03:52:15.483810       1 controllermanager.go:759] "Started controller" controller="persistentvolume-binder-controller"
	I0501 04:16:55.634285    4352 command_runner.go:130] ! I0501 03:52:15.484552       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0501 04:16:55.634368    4352 command_runner.go:130] ! I0501 03:52:15.487659       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0501 04:16:55.634477    4352 command_runner.go:130] ! I0501 03:52:15.507112       1 controllermanager.go:759] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0501 04:16:55.634681    4352 command_runner.go:130] ! I0501 03:52:15.507311       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0501 04:16:55.634812    4352 command_runner.go:130] ! I0501 03:52:15.507323       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0501 04:16:55.634901    4352 command_runner.go:130] ! I0501 03:52:15.547225       1 controllermanager.go:759] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0501 04:16:55.634969    4352 command_runner.go:130] ! I0501 03:52:15.547300       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0501 04:16:55.634969    4352 command_runner.go:130] ! I0501 03:52:15.547313       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0501 04:16:55.634969    4352 command_runner.go:130] ! I0501 03:52:15.547413       1 controllermanager.go:737] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0501 04:16:55.634969    4352 command_runner.go:130] ! I0501 03:52:15.652954       1 controllermanager.go:759] "Started controller" controller="pod-garbage-collector-controller"
	I0501 04:16:55.634969    4352 command_runner.go:130] ! I0501 03:52:15.653222       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0501 04:16:55.634969    4352 command_runner.go:130] ! I0501 03:52:15.653240       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0501 04:16:55.634969    4352 command_runner.go:130] ! I0501 03:52:15.940199       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0501 04:16:55.634969    4352 command_runner.go:130] ! I0501 03:52:15.940364       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0501 04:16:55.634969    4352 command_runner.go:130] ! I0501 03:52:15.940714       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0501 04:16:55.634969    4352 command_runner.go:130] ! I0501 03:52:15.940771       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0501 04:16:55.634969    4352 command_runner.go:130] ! I0501 03:52:15.940787       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0501 04:16:55.634969    4352 command_runner.go:130] ! I0501 03:52:15.941029       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0501 04:16:55.635497    4352 command_runner.go:130] ! I0501 03:52:15.941118       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0501 04:16:55.635617    4352 command_runner.go:130] ! I0501 03:52:15.941275       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0501 04:16:55.635781    4352 command_runner.go:130] ! I0501 03:52:15.941300       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0501 04:16:55.635781    4352 command_runner.go:130] ! I0501 03:52:15.941320       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0501 04:16:55.635781    4352 command_runner.go:130] ! I0501 03:52:15.941344       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0501 04:16:55.635781    4352 command_runner.go:130] ! I0501 03:52:15.941368       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0501 04:16:55.635781    4352 command_runner.go:130] ! I0501 03:52:15.941386       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0501 04:16:55.635781    4352 command_runner.go:130] ! I0501 03:52:15.941421       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0501 04:16:55.635781    4352 command_runner.go:130] ! I0501 03:52:15.941561       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0501 04:16:55.635781    4352 command_runner.go:130] ! I0501 03:52:15.941606       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0501 04:16:55.635781    4352 command_runner.go:130] ! I0501 03:52:15.941627       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0501 04:16:55.635781    4352 command_runner.go:130] ! I0501 03:52:15.941813       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0501 04:16:55.635781    4352 command_runner.go:130] ! I0501 03:52:15.942150       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0501 04:16:55.635781    4352 command_runner.go:130] ! I0501 03:52:15.942270       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0501 04:16:55.636433    4352 command_runner.go:130] ! I0501 03:52:15.942319       1 controllermanager.go:759] "Started controller" controller="resourcequota-controller"
	I0501 04:16:55.636549    4352 command_runner.go:130] ! I0501 03:52:15.942400       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0501 04:16:55.636695    4352 command_runner.go:130] ! I0501 03:52:15.942767       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0501 04:16:55.636695    4352 command_runner.go:130] ! I0501 03:52:15.942791       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0501 04:16:55.636695    4352 command_runner.go:130] ! I0501 03:52:16.183841       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0501 04:16:55.636695    4352 command_runner.go:130] ! I0501 03:52:16.184178       1 controllermanager.go:759] "Started controller" controller="garbage-collector-controller"
	I0501 04:16:55.636695    4352 command_runner.go:130] ! I0501 03:52:16.187151       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0501 04:16:55.636695    4352 command_runner.go:130] ! I0501 03:52:16.187185       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0501 04:16:55.636695    4352 command_runner.go:130] ! I0501 03:52:16.436175       1 controllermanager.go:759] "Started controller" controller="replicaset-controller"
	I0501 04:16:55.636695    4352 command_runner.go:130] ! I0501 03:52:16.436331       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0501 04:16:55.636695    4352 command_runner.go:130] ! I0501 03:52:16.436346       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0501 04:16:55.636695    4352 command_runner.go:130] ! I0501 03:52:16.586198       1 controllermanager.go:759] "Started controller" controller="statefulset-controller"
	I0501 04:16:55.636695    4352 command_runner.go:130] ! I0501 03:52:16.586602       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0501 04:16:55.636695    4352 command_runner.go:130] ! I0501 03:52:16.586642       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0501 04:16:55.636695    4352 command_runner.go:130] ! I0501 03:52:16.736534       1 controllermanager.go:759] "Started controller" controller="clusterrole-aggregation-controller"
	I0501 04:16:55.636695    4352 command_runner.go:130] ! I0501 03:52:16.736573       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0501 04:16:55.636695    4352 command_runner.go:130] ! I0501 03:52:16.736609       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0501 04:16:55.636695    4352 command_runner.go:130] ! I0501 03:52:16.736694       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0501 04:16:55.636695    4352 command_runner.go:130] ! I0501 03:52:16.736706       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0501 04:16:55.636695    4352 command_runner.go:130] ! I0501 03:52:16.891482       1 controllermanager.go:759] "Started controller" controller="serviceaccount-controller"
	I0501 04:16:55.636695    4352 command_runner.go:130] ! I0501 03:52:16.891648       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0501 04:16:55.636695    4352 command_runner.go:130] ! I0501 03:52:16.891663       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0501 04:16:55.636695    4352 command_runner.go:130] ! I0501 03:52:17.047956       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0501 04:16:55.636695    4352 command_runner.go:130] ! I0501 03:52:17.050852       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0501 04:16:55.636695    4352 command_runner.go:130] ! I0501 03:52:17.050877       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0501 04:16:55.636695    4352 command_runner.go:130] ! I0501 03:52:17.050942       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0501 04:16:55.636695    4352 command_runner.go:130] ! I0501 03:52:17.050952       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0501 04:16:55.637219    4352 command_runner.go:130] ! I0501 03:52:17.051046       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0501 04:16:55.637219    4352 command_runner.go:130] ! I0501 03:52:17.051073       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0501 04:16:55.637299    4352 command_runner.go:130] ! I0501 03:52:17.051107       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0501 04:16:55.637299    4352 command_runner.go:130] ! I0501 03:52:17.051130       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0501 04:16:55.637299    4352 command_runner.go:130] ! I0501 03:52:17.051145       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0501 04:16:55.637299    4352 command_runner.go:130] ! I0501 03:52:17.051309       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0501 04:16:55.637299    4352 command_runner.go:130] ! I0501 03:52:17.051548       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0501 04:16:55.637299    4352 command_runner.go:130] ! I0501 03:52:17.051654       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0501 04:16:55.637299    4352 command_runner.go:130] ! I0501 03:52:17.186932       1 controllermanager.go:759] "Started controller" controller="bootstrap-signer-controller"
	I0501 04:16:55.637299    4352 command_runner.go:130] ! I0501 03:52:17.187092       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0501 04:16:55.637299    4352 command_runner.go:130] ! I0501 03:52:27.350786       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0501 04:16:55.637299    4352 command_runner.go:130] ! I0501 03:52:27.351166       1 controllermanager.go:759] "Started controller" controller="node-ipam-controller"
	I0501 04:16:55.637299    4352 command_runner.go:130] ! I0501 03:52:27.352026       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0501 04:16:55.637299    4352 command_runner.go:130] ! I0501 03:52:27.353715       1 shared_informer.go:313] Waiting for caches to sync for node
	I0501 04:16:55.637299    4352 command_runner.go:130] ! I0501 03:52:27.368884       1 controllermanager.go:759] "Started controller" controller="ephemeral-volume-controller"
	I0501 04:16:55.637299    4352 command_runner.go:130] ! I0501 03:52:27.369241       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0501 04:16:55.637299    4352 command_runner.go:130] ! I0501 03:52:27.369602       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0501 04:16:55.637299    4352 command_runner.go:130] ! I0501 03:52:27.424182       1 controllermanager.go:759] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0501 04:16:55.637299    4352 command_runner.go:130] ! I0501 03:52:27.424472       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0501 04:16:55.637299    4352 command_runner.go:130] ! I0501 03:52:27.436663       1 controllermanager.go:759] "Started controller" controller="replicationcontroller-controller"
	I0501 04:16:55.637299    4352 command_runner.go:130] ! I0501 03:52:27.437080       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0501 04:16:55.637299    4352 command_runner.go:130] ! I0501 03:52:27.437177       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0501 04:16:55.637299    4352 command_runner.go:130] ! I0501 03:52:27.448635       1 controllermanager.go:759] "Started controller" controller="job-controller"
	I0501 04:16:55.637299    4352 command_runner.go:130] ! I0501 03:52:27.449170       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0501 04:16:55.637299    4352 command_runner.go:130] ! I0501 03:52:27.449409       1 shared_informer.go:313] Waiting for caches to sync for job
	I0501 04:16:55.637299    4352 command_runner.go:130] ! I0501 03:52:27.475565       1 controllermanager.go:759] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0501 04:16:55.637299    4352 command_runner.go:130] ! I0501 03:52:27.476051       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0501 04:16:55.637869    4352 command_runner.go:130] ! I0501 03:52:27.476166       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0501 04:16:55.637913    4352 command_runner.go:130] ! I0501 03:52:27.479486       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0501 04:16:55.637913    4352 command_runner.go:130] ! I0501 03:52:27.479596       1 controllermanager.go:759] "Started controller" controller="node-lifecycle-controller"
	I0501 04:16:55.637913    4352 command_runner.go:130] ! I0501 03:52:27.479975       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0501 04:16:55.637913    4352 command_runner.go:130] ! I0501 03:52:27.480750       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0501 04:16:55.637913    4352 command_runner.go:130] ! I0501 03:52:27.480823       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0501 04:16:55.637913    4352 command_runner.go:130] ! E0501 03:52:27.482546       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0501 04:16:55.637913    4352 command_runner.go:130] ! I0501 03:52:27.483210       1 controllermanager.go:737] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0501 04:16:55.637913    4352 command_runner.go:130] ! I0501 03:52:27.495640       1 controllermanager.go:759] "Started controller" controller="persistentvolume-expander-controller"
	I0501 04:16:55.637913    4352 command_runner.go:130] ! I0501 03:52:27.495973       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0501 04:16:55.638542    4352 command_runner.go:130] ! I0501 03:52:27.496212       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0501 04:16:55.638662    4352 command_runner.go:130] ! I0501 03:52:27.512223       1 controllermanager.go:759] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0501 04:16:55.638662    4352 command_runner.go:130] ! I0501 03:52:27.512895       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0501 04:16:55.638761    4352 command_runner.go:130] ! I0501 03:52:27.513075       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0501 04:16:55.638761    4352 command_runner.go:130] ! I0501 03:52:27.514982       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0501 04:16:55.638761    4352 command_runner.go:130] ! I0501 03:52:27.515311       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0501 04:16:55.638761    4352 command_runner.go:130] ! I0501 03:52:27.515499       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0501 04:16:55.638761    4352 command_runner.go:130] ! I0501 03:52:27.526940       1 controllermanager.go:759] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0501 04:16:55.638761    4352 command_runner.go:130] ! I0501 03:52:27.527318       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0501 04:16:55.638761    4352 command_runner.go:130] ! I0501 03:52:27.527351       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0501 04:16:55.638761    4352 command_runner.go:130] ! I0501 03:52:27.647646       1 controllermanager.go:759] "Started controller" controller="cronjob-controller"
	I0501 04:16:55.638761    4352 command_runner.go:130] ! I0501 03:52:27.647752       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0501 04:16:55.638761    4352 command_runner.go:130] ! I0501 03:52:27.647825       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0501 04:16:55.638761    4352 command_runner.go:130] ! I0501 03:52:27.647836       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0501 04:16:55.638761    4352 command_runner.go:130] ! I0501 03:52:27.692531       1 controllermanager.go:759] "Started controller" controller="taint-eviction-controller"
	I0501 04:16:55.638761    4352 command_runner.go:130] ! I0501 03:52:27.692762       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0501 04:16:55.638761    4352 command_runner.go:130] ! I0501 03:52:27.693221       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0501 04:16:55.638761    4352 command_runner.go:130] ! I0501 03:52:27.693310       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0501 04:16:55.638761    4352 command_runner.go:130] ! I0501 03:52:27.846904       1 controllermanager.go:759] "Started controller" controller="endpoints-controller"
	I0501 04:16:55.638761    4352 command_runner.go:130] ! I0501 03:52:27.847065       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0501 04:16:55.638761    4352 command_runner.go:130] ! I0501 03:52:27.847083       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0501 04:16:55.638761    4352 command_runner.go:130] ! I0501 03:52:27.996304       1 controllermanager.go:759] "Started controller" controller="daemonset-controller"
	I0501 04:16:55.638761    4352 command_runner.go:130] ! I0501 03:52:27.996661       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0501 04:16:55.638761    4352 command_runner.go:130] ! I0501 03:52:27.996720       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0501 04:16:55.638761    4352 command_runner.go:130] ! I0501 03:52:28.149439       1 controllermanager.go:759] "Started controller" controller="deployment-controller"
	I0501 04:16:55.638761    4352 command_runner.go:130] ! I0501 03:52:28.149690       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0501 04:16:55.638761    4352 command_runner.go:130] ! I0501 03:52:28.149796       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0501 04:16:55.638761    4352 command_runner.go:130] ! I0501 03:52:28.194448       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0501 04:16:55.638761    4352 command_runner.go:130] ! I0501 03:52:28.194582       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0501 04:16:55.638761    4352 command_runner.go:130] ! I0501 03:52:28.346263       1 controllermanager.go:759] "Started controller" controller="persistentvolume-protection-controller"
	I0501 04:16:55.638761    4352 command_runner.go:130] ! I0501 03:52:28.351074       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0501 04:16:55.639405    4352 command_runner.go:130] ! I0501 03:52:28.351267       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0501 04:16:55.639405    4352 command_runner.go:130] ! I0501 03:52:28.389327       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0501 04:16:55.639405    4352 command_runner.go:130] ! I0501 03:52:28.399508       1 shared_informer.go:320] Caches are synced for expand
	I0501 04:16:55.639405    4352 command_runner.go:130] ! I0501 03:52:28.401911       1 shared_informer.go:320] Caches are synced for namespace
	I0501 04:16:55.639599    4352 command_runner.go:130] ! I0501 03:52:28.402772       1 shared_informer.go:320] Caches are synced for service account
	I0501 04:16:55.639672    4352 command_runner.go:130] ! I0501 03:52:28.414043       1 shared_informer.go:320] Caches are synced for crt configmap
	I0501 04:16:55.639969    4352 command_runner.go:130] ! I0501 03:52:28.415874       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0501 04:16:55.640124    4352 command_runner.go:130] ! I0501 03:52:28.427291       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0501 04:16:55.640201    4352 command_runner.go:130] ! I0501 03:52:28.436570       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.437221       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.437315       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.440984       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.447483       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.447500       1 shared_informer.go:320] Caches are synced for endpoint
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.448218       1 shared_informer.go:320] Caches are synced for cronjob
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.451115       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.451167       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.451224       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.451346       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.451726       1 shared_informer.go:320] Caches are synced for deployment
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.451933       1 shared_informer.go:320] Caches are synced for job
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.451734       1 shared_informer.go:320] Caches are synced for PV protection
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.470928       1 shared_informer.go:320] Caches are synced for ephemeral
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.476835       1 shared_informer.go:320] Caches are synced for HPA
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.486851       1 shared_informer.go:320] Caches are synced for stateful set
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.487294       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.507418       1 shared_informer.go:320] Caches are synced for PVC protection
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.510921       1 shared_informer.go:320] Caches are synced for disruption
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.537591       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.575135       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.595083       1 shared_informer.go:320] Caches are synced for resource quota
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.609954       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-289800\" does not exist"
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.621070       1 shared_informer.go:320] Caches are synced for TTL
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.625042       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.628085       1 shared_informer.go:320] Caches are synced for attach detach
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.643871       1 shared_informer.go:320] Caches are synced for resource quota
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.653497       1 shared_informer.go:320] Caches are synced for GC
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.654871       1 shared_informer.go:320] Caches are synced for node
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.654996       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.655710       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.655972       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.656192       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0501 04:16:55.640857    4352 command_runner.go:130] ! I0501 03:52:28.675109       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-289800" podCIDRs=["10.244.0.0/24"]
	I0501 04:16:55.640857    4352 command_runner.go:130] ! I0501 03:52:28.682120       1 shared_informer.go:320] Caches are synced for taint
	I0501 04:16:55.640857    4352 command_runner.go:130] ! I0501 03:52:28.682644       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0501 04:16:55.640950    4352 command_runner.go:130] ! I0501 03:52:28.682782       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-289800"
	I0501 04:16:55.640950    4352 command_runner.go:130] ! I0501 03:52:28.682855       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0501 04:16:55.640950    4352 command_runner.go:130] ! I0501 03:52:28.688787       1 shared_informer.go:320] Caches are synced for persistent volume
	I0501 04:16:55.640950    4352 command_runner.go:130] ! I0501 03:52:28.693874       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0501 04:16:55.640950    4352 command_runner.go:130] ! I0501 03:52:28.697526       1 shared_informer.go:320] Caches are synced for daemon sets
	I0501 04:16:55.640950    4352 command_runner.go:130] ! I0501 03:52:29.088696       1 shared_informer.go:320] Caches are synced for garbage collector
	I0501 04:16:55.640950    4352 command_runner.go:130] ! I0501 03:52:29.088746       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0501 04:16:55.640950    4352 command_runner.go:130] ! I0501 03:52:29.139257       1 shared_informer.go:320] Caches are synced for garbage collector
	I0501 04:16:55.640950    4352 command_runner.go:130] ! I0501 03:52:29.739066       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="528.452632ms"
	I0501 04:16:55.640950    4352 command_runner.go:130] ! I0501 03:52:29.796611       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="57.235573ms"
	I0501 04:16:55.640950    4352 command_runner.go:130] ! I0501 03:52:29.797135       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="429.196µs"
	I0501 04:16:55.640950    4352 command_runner.go:130] ! I0501 03:52:29.797745       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="61.4µs"
	I0501 04:16:55.640950    4352 command_runner.go:130] ! I0501 03:52:39.341653       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="93.1µs"
	I0501 04:16:55.640950    4352 command_runner.go:130] ! I0501 03:52:39.358462       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="77.3µs"
	I0501 04:16:55.640950    4352 command_runner.go:130] ! I0501 03:52:39.377150       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="79.9µs"
	I0501 04:16:55.640950    4352 command_runner.go:130] ! I0501 03:52:39.403208       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="64.2µs"
	I0501 04:16:55.640950    4352 command_runner.go:130] ! I0501 03:52:41.593793       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="63.7µs"
	I0501 04:16:55.640950    4352 command_runner.go:130] ! I0501 03:52:41.686793       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="54.969221ms"
	I0501 04:16:55.640950    4352 command_runner.go:130] ! I0501 03:52:41.713891       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="26.932914ms"
	I0501 04:16:55.640950    4352 command_runner.go:130] ! I0501 03:52:41.714840       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="58.4µs"
	I0501 04:16:55.640950    4352 command_runner.go:130] ! I0501 03:52:43.686562       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0501 04:16:55.640950    4352 command_runner.go:130] ! I0501 03:55:27.159233       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-289800-m02\" does not exist"
	I0501 04:16:55.640950    4352 command_runner.go:130] ! I0501 03:55:27.216693       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-289800-m02" podCIDRs=["10.244.1.0/24"]
	I0501 04:16:55.641555    4352 command_runner.go:130] ! I0501 03:55:28.718620       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-289800-m02"
	I0501 04:16:55.641555    4352 command_runner.go:130] ! I0501 03:55:50.611680       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:55.641666    4352 command_runner.go:130] ! I0501 03:56:17.356814       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="84.46504ms"
	I0501 04:16:55.641884    4352 command_runner.go:130] ! I0501 03:56:17.371366       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.143719ms"
	I0501 04:16:55.641911    4352 command_runner.go:130] ! I0501 03:56:17.372124       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="142.3µs"
	I0501 04:16:55.641911    4352 command_runner.go:130] ! I0501 03:56:17.379164       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.7µs"
	I0501 04:16:55.641911    4352 command_runner.go:130] ! I0501 03:56:19.725403       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.097702ms"
	I0501 04:16:55.641911    4352 command_runner.go:130] ! I0501 03:56:19.728196       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="2.611719ms"
	I0501 04:16:55.641911    4352 command_runner.go:130] ! I0501 03:56:19.839218       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.233167ms"
	I0501 04:16:55.641911    4352 command_runner.go:130] ! I0501 03:56:19.839355       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.1µs"
	I0501 04:16:55.641911    4352 command_runner.go:130] ! I0501 04:00:13.644614       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-289800-m03\" does not exist"
	I0501 04:16:55.641911    4352 command_runner.go:130] ! I0501 04:00:13.644755       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:55.641911    4352 command_runner.go:130] ! I0501 04:00:13.661934       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-289800-m03" podCIDRs=["10.244.2.0/24"]
	I0501 04:16:55.641911    4352 command_runner.go:130] ! I0501 04:00:13.802230       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-289800-m03"
	I0501 04:16:55.641911    4352 command_runner.go:130] ! I0501 04:00:36.640421       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:55.641911    4352 command_runner.go:130] ! I0501 04:08:13.948279       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:55.641911    4352 command_runner.go:130] ! I0501 04:10:57.898286       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:55.641911    4352 command_runner.go:130] ! I0501 04:11:04.117706       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:55.641911    4352 command_runner.go:130] ! I0501 04:11:04.120427       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-289800-m03\" does not exist"
	I0501 04:16:55.641911    4352 command_runner.go:130] ! I0501 04:11:04.128942       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-289800-m03" podCIDRs=["10.244.3.0/24"]
	I0501 04:16:55.641911    4352 command_runner.go:130] ! I0501 04:11:11.358226       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:55.641911    4352 command_runner.go:130] ! I0501 04:12:49.097072       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:55.663350    4352 logs.go:123] Gathering logs for kindnet [b7cae3f6b88b] ...
	I0501 04:16:55.663350    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7cae3f6b88b"
	I0501 04:16:55.694512    4352 command_runner.go:130] ! I0501 04:15:45.341459       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0501 04:16:55.695273    4352 command_runner.go:130] ! I0501 04:15:45.342196       1 main.go:107] hostIP = 172.28.209.199
	I0501 04:16:55.695338    4352 command_runner.go:130] ! podIP = 172.28.209.199
	I0501 04:16:55.695338    4352 command_runner.go:130] ! I0501 04:15:45.343348       1 main.go:116] setting mtu 1500 for CNI 
	I0501 04:16:55.695338    4352 command_runner.go:130] ! I0501 04:15:45.343391       1 main.go:146] kindnetd IP family: "ipv4"
	I0501 04:16:55.695373    4352 command_runner.go:130] ! I0501 04:15:45.343412       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0501 04:16:55.695373    4352 command_runner.go:130] ! I0501 04:16:15.765193       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0501 04:16:55.695373    4352 command_runner.go:130] ! I0501 04:16:15.817499       1 main.go:223] Handling node with IPs: map[172.28.209.199:{}]
	I0501 04:16:55.695373    4352 command_runner.go:130] ! I0501 04:16:15.817549       1 main.go:227] handling current node
	I0501 04:16:55.695373    4352 command_runner.go:130] ! I0501 04:16:15.818026       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:55.695373    4352 command_runner.go:130] ! I0501 04:16:15.818042       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:55.695373    4352 command_runner.go:130] ! I0501 04:16:15.818289       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.28.219.162 Flags: [] Table: 0} 
	I0501 04:16:55.695373    4352 command_runner.go:130] ! I0501 04:16:15.818416       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:55.695373    4352 command_runner.go:130] ! I0501 04:16:15.818477       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:55.695373    4352 command_runner.go:130] ! I0501 04:16:15.818548       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.28.223.145 Flags: [] Table: 0} 
	I0501 04:16:55.695373    4352 command_runner.go:130] ! I0501 04:16:25.834949       1 main.go:223] Handling node with IPs: map[172.28.209.199:{}]
	I0501 04:16:55.695373    4352 command_runner.go:130] ! I0501 04:16:25.834995       1 main.go:227] handling current node
	I0501 04:16:55.695373    4352 command_runner.go:130] ! I0501 04:16:25.835008       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:55.695373    4352 command_runner.go:130] ! I0501 04:16:25.835016       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:55.695373    4352 command_runner.go:130] ! I0501 04:16:25.835192       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:55.695373    4352 command_runner.go:130] ! I0501 04:16:25.835220       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:55.695373    4352 command_runner.go:130] ! I0501 04:16:35.845752       1 main.go:223] Handling node with IPs: map[172.28.209.199:{}]
	I0501 04:16:55.695373    4352 command_runner.go:130] ! I0501 04:16:35.845835       1 main.go:227] handling current node
	I0501 04:16:55.695373    4352 command_runner.go:130] ! I0501 04:16:35.845848       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:55.695373    4352 command_runner.go:130] ! I0501 04:16:35.845856       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:55.695373    4352 command_runner.go:130] ! I0501 04:16:35.846322       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:55.695373    4352 command_runner.go:130] ! I0501 04:16:35.846423       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:55.695373    4352 command_runner.go:130] ! I0501 04:16:45.855212       1 main.go:223] Handling node with IPs: map[172.28.209.199:{}]
	I0501 04:16:55.695373    4352 command_runner.go:130] ! I0501 04:16:45.855323       1 main.go:227] handling current node
	I0501 04:16:55.695373    4352 command_runner.go:130] ! I0501 04:16:45.855339       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:55.695373    4352 command_runner.go:130] ! I0501 04:16:45.855347       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:55.695373    4352 command_runner.go:130] ! I0501 04:16:45.856266       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:55.695373    4352 command_runner.go:130] ! I0501 04:16:45.856305       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:55.698228    4352 logs.go:123] Gathering logs for container status ...
	I0501 04:16:55.698490    4352 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 04:16:55.777760    4352 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0501 04:16:55.777760    4352 command_runner.go:130] > 1efd236274eb6       8c811b4aec35f                                                                                         7 seconds ago        Running             busybox                   1                   b85f507755ab5       busybox-fc5497c4f-cc6mk
	I0501 04:16:55.777760    4352 command_runner.go:130] > b8a9b405d76be       cbb01a7bd410d                                                                                         7 seconds ago        Running             coredns                   1                   2c1e1e1d13f30       coredns-7db6d8ff4d-8w9hq
	I0501 04:16:55.777926    4352 command_runner.go:130] > 8a0208aeafcfe       cbb01a7bd410d                                                                                         7 seconds ago        Running             coredns                   1                   ba9a40d190b00       coredns-7db6d8ff4d-x9zrw
	I0501 04:16:55.777987    4352 command_runner.go:130] > 239a5dfd3ae52       6e38f40d628db                                                                                         26 seconds ago       Running             storage-provisioner       2                   9055d30512df3       storage-provisioner
	I0501 04:16:55.777987    4352 command_runner.go:130] > b7cae3f6b88bc       4950bb10b3f87                                                                                         About a minute ago   Running             kindnet-cni               1                   f79e484da66a1       kindnet-vcxkr
	I0501 04:16:55.778146    4352 command_runner.go:130] > 01deddefba52a       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   9055d30512df3       storage-provisioner
	I0501 04:16:55.778146    4352 command_runner.go:130] > 3efcc92f817ee       a0bf559e280cf                                                                                         About a minute ago   Running             kube-proxy                1                   65bff4b6a8ae0       kube-proxy-bp9zx
	I0501 04:16:55.778253    4352 command_runner.go:130] > 34892fdb68983       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   6e076eed49263       etcd-multinode-289800
	I0501 04:16:55.778253    4352 command_runner.go:130] > 18cd30f3ad28f       c42f13656d0b2                                                                                         About a minute ago   Running             kube-apiserver            0                   51e331e75da77       kube-apiserver-multinode-289800
	I0501 04:16:55.778403    4352 command_runner.go:130] > 66a1b89e6733f       c7aad43836fa5                                                                                         About a minute ago   Running             kube-controller-manager   1                   3fd53aa8d8f5d       kube-controller-manager-multinode-289800
	I0501 04:16:55.778403    4352 command_runner.go:130] > eaf69fce5ee36       259c8277fcbbc                                                                                         About a minute ago   Running             kube-scheduler            1                   a8e27176eab83       kube-scheduler-multinode-289800
	I0501 04:16:55.778519    4352 command_runner.go:130] > 237d3dab2c4e1       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago       Exited              busybox                   0                   79bf9ebb58e36       busybox-fc5497c4f-cc6mk
	I0501 04:16:55.778519    4352 command_runner.go:130] > 15c4496e3a9f0       cbb01a7bd410d                                                                                         24 minutes ago       Exited              coredns                   0                   baf9e690eb533       coredns-7db6d8ff4d-x9zrw
	I0501 04:16:55.778519    4352 command_runner.go:130] > 3e8d5ff9a9e4a       cbb01a7bd410d                                                                                         24 minutes ago       Exited              coredns                   0                   9d509d032dc60       coredns-7db6d8ff4d-8w9hq
	I0501 04:16:55.778651    4352 command_runner.go:130] > 6d5f881ef3987       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              24 minutes ago       Exited              kindnet-cni               0                   4df6ba73bcf68       kindnet-vcxkr
	I0501 04:16:55.778651    4352 command_runner.go:130] > 502684407b0cf       a0bf559e280cf                                                                                         24 minutes ago       Exited              kube-proxy                0                   79bb6a06ed527       kube-proxy-bp9zx
	I0501 04:16:55.778766    4352 command_runner.go:130] > 4b62556f40bec       c7aad43836fa5                                                                                         24 minutes ago       Exited              kube-controller-manager   0                   f72a1c5b5cdd6       kube-controller-manager-multinode-289800
	I0501 04:16:55.778880    4352 command_runner.go:130] > 06f1f84bfde17       259c8277fcbbc                                                                                         24 minutes ago       Exited              kube-scheduler            0                   479b3ec741bef       kube-scheduler-multinode-289800
	I0501 04:16:55.783727    4352 logs.go:123] Gathering logs for kubelet ...
	I0501 04:16:55.783967    4352 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 04:16:55.823599    4352 command_runner.go:130] > May 01 04:15:32 multinode-289800 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0501 04:16:55.823639    4352 command_runner.go:130] > May 01 04:15:32 multinode-289800 kubelet[1383]: I0501 04:15:32.875075    1383 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0501 04:16:55.823639    4352 command_runner.go:130] > May 01 04:15:32 multinode-289800 kubelet[1383]: I0501 04:15:32.875223    1383 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:16:55.823639    4352 command_runner.go:130] > May 01 04:15:32 multinode-289800 kubelet[1383]: I0501 04:15:32.876800    1383 server.go:927] "Client rotation is on, will bootstrap in background"
	I0501 04:16:55.823739    4352 command_runner.go:130] > May 01 04:15:32 multinode-289800 kubelet[1383]: E0501 04:15:32.877636    1383 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0501 04:16:55.823739    4352 command_runner.go:130] > May 01 04:15:32 multinode-289800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0501 04:16:55.823739    4352 command_runner.go:130] > May 01 04:15:32 multinode-289800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0501 04:16:55.823739    4352 command_runner.go:130] > May 01 04:15:33 multinode-289800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0501 04:16:55.823739    4352 command_runner.go:130] > May 01 04:15:33 multinode-289800 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0501 04:16:55.823818    4352 command_runner.go:130] > May 01 04:15:33 multinode-289800 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0501 04:16:55.823843    4352 command_runner.go:130] > May 01 04:15:33 multinode-289800 kubelet[1424]: I0501 04:15:33.593311    1424 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0501 04:16:55.823875    4352 command_runner.go:130] > May 01 04:15:33 multinode-289800 kubelet[1424]: I0501 04:15:33.595065    1424 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:16:55.823875    4352 command_runner.go:130] > May 01 04:15:33 multinode-289800 kubelet[1424]: I0501 04:15:33.597316    1424 server.go:927] "Client rotation is on, will bootstrap in background"
	I0501 04:16:55.823875    4352 command_runner.go:130] > May 01 04:15:33 multinode-289800 kubelet[1424]: E0501 04:15:33.597441    1424 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0501 04:16:55.823875    4352 command_runner.go:130] > May 01 04:15:33 multinode-289800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0501 04:16:55.823875    4352 command_runner.go:130] > May 01 04:15:33 multinode-289800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0501 04:16:55.823875    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
	I0501 04:16:55.823875    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0501 04:16:55.823875    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0501 04:16:55.823875    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 kubelet[1461]: I0501 04:15:34.327211    1461 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0501 04:16:55.823875    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 kubelet[1461]: I0501 04:15:34.327674    1461 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:16:55.823875    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 kubelet[1461]: I0501 04:15:34.328505    1461 server.go:927] "Client rotation is on, will bootstrap in background"
	I0501 04:16:55.823875    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 kubelet[1461]: E0501 04:15:34.328669    1461 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0501 04:16:55.823875    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0501 04:16:55.823875    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0501 04:16:55.823875    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0501 04:16:55.823875    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0501 04:16:55.823875    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.796836    1525 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0501 04:16:55.823875    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.797219    1525 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:16:55.823875    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.797640    1525 server.go:927] "Client rotation is on, will bootstrap in background"
	I0501 04:16:55.823875    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.799493    1525 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0501 04:16:55.823875    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.812278    1525 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0501 04:16:55.823875    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.846443    1525 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0501 04:16:55.823875    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.846668    1525 server.go:810] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0501 04:16:55.823875    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.847577    1525 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0501 04:16:55.823875    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.847671    1525 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-289800","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0501 04:16:55.823875    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.848600    1525 topology_manager.go:138] "Creating topology manager with none policy"
	I0501 04:16:55.824394    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.848674    1525 container_manager_linux.go:301] "Creating device plugin manager"
	I0501 04:16:55.824394    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.849347    1525 state_mem.go:36] "Initialized new in-memory state store"
	I0501 04:16:55.824445    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.851250    1525 kubelet.go:400] "Attempting to sync node with API server"
	I0501 04:16:55.824445    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.851388    1525 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0501 04:16:55.824498    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.851480    1525 kubelet.go:312] "Adding apiserver pod source"
	I0501 04:16:55.824524    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.852014    1525 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0501 04:16:55.824560    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.863109    1525 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="docker" version="26.0.2" apiVersion="v1"
	I0501 04:16:55.824560    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.868847    1525 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0501 04:16:55.824617    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: W0501 04:15:36.869729    1525 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0501 04:16:55.824686    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: W0501 04:15:36.870640    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-289800&limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:55.824716    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: E0501 04:15:36.871055    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-289800&limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:55.824716    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: W0501 04:15:36.869620    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:55.824716    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: E0501 04:15:36.872992    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:55.824716    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.872208    1525 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0501 04:16:55.824716    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.874268    1525 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0501 04:16:55.824716    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.872162    1525 server.go:1264] "Started kubelet"
	I0501 04:16:55.824716    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.876600    1525 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0501 04:16:55.824716    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.878390    1525 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
	I0501 04:16:55.824716    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.882899    1525 server.go:455] "Adding debug handlers to kubelet server"
	I0501 04:16:55.824716    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: E0501 04:15:36.888275    1525 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.28.209.199:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-289800.17cb4242948ce646  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-289800,UID:multinode-289800,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-289800,},FirstTimestamp:2024-05-01 04:15:36.872142406 +0000 UTC m=+0.158641226,LastTimestamp:2024-05-01 04:15:36.872142406 +0000 UTC m=+0.158641226,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-289800,}"
	I0501 04:16:55.824716    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.894478    1525 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0501 04:16:55.824716    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: E0501 04:15:36.899264    1525 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-289800?timeout=10s\": dial tcp 172.28.209.199:8443: connect: connection refused" interval="200ms"
	I0501 04:16:55.824716    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.900556    1525 factory.go:221] Registration of the systemd container factory successfully
	I0501 04:16:55.824716    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.900703    1525 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0501 04:16:55.824716    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.900931    1525 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0501 04:16:55.824716    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.909390    1525 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0501 04:16:55.824716    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: W0501 04:15:36.922744    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:55.824716    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: E0501 04:15:36.923300    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:55.824716    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.961054    1525 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0501 04:16:55.824716    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.961177    1525 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0501 04:16:55.825257    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.961311    1525 state_mem.go:36] "Initialized new in-memory state store"
	I0501 04:16:55.825257    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.962539    1525 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0501 04:16:55.825257    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.962613    1525 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0501 04:16:55.825257    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.962649    1525 policy_none.go:49] "None policy: Start"
	I0501 04:16:55.825257    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.965264    1525 reconciler.go:26] "Reconciler: start to sync state"
	I0501 04:16:55.825257    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.981258    1525 memory_manager.go:170] "Starting memorymanager" policy="None"
	I0501 04:16:55.825257    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.991286    1525 state_mem.go:35] "Initializing new in-memory state store"
	I0501 04:16:55.825395    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.994410    1525 state_mem.go:75] "Updated machine memory state"
	I0501 04:16:55.825395    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.001037    1525 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0501 04:16:55.825438    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.005977    1525 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0501 04:16:55.825438    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.012301    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-289800"
	I0501 04:16:55.825513    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.018582    1525 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0501 04:16:55.825513    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.020477    1525 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0501 04:16:55.825578    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.020620    1525 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.28.209.199:8443: connect: connection refused" node="multinode-289800"
	I0501 04:16:55.825608    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.021548    1525 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-289800\" not found"
	I0501 04:16:55.825638    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.022495    1525 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0501 04:16:55.825672    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.022690    1525 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0501 04:16:55.825672    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.022715    1525 kubelet.go:2337] "Starting kubelet main sync loop"
	I0501 04:16:55.825733    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.022919    1525 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
	I0501 04:16:55.825775    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: W0501 04:15:37.028696    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:55.825825    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.028755    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:55.825870    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.045316    1525 iptables.go:577] "Could not set up iptables canary" err=<
	I0501 04:16:55.825870    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0501 04:16:55.825870    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0501 04:16:55.825870    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0501 04:16:55.825980    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0501 04:16:55.825980    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.102048    1525 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-289800?timeout=10s\": dial tcp 172.28.209.199:8443: connect: connection refused" interval="400ms"
	I0501 04:16:55.825980    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.124062    1525 topology_manager.go:215] "Topology Admit Handler" podUID="44d7830a7c97b8c7e460c0508d02be4e" podNamespace="kube-system" podName="kube-scheduler-multinode-289800"
	I0501 04:16:55.826076    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.125237    1525 topology_manager.go:215] "Topology Admit Handler" podUID="8b70cd8d31103a1cfca45e9856766786" podNamespace="kube-system" podName="kube-apiserver-multinode-289800"
	I0501 04:16:55.826076    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.126693    1525 topology_manager.go:215] "Topology Admit Handler" podUID="a17001fd2508d58fea9b1ae465b65254" podNamespace="kube-system" podName="kube-controller-manager-multinode-289800"
	I0501 04:16:55.826076    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.129279    1525 topology_manager.go:215] "Topology Admit Handler" podUID="b12e9024402f49cfac7440d6a2eaf42d" podNamespace="kube-system" podName="etcd-multinode-289800"
	I0501 04:16:55.826076    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.132159    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="479b3ec741befe4b1eddeb02949bcd198e18fa7dc4c196283e811e273e4edcbd"
	I0501 04:16:55.826180    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.132205    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d509d032dc607c6f771d62e39b125d9ec4ef121fdbac0798c929fe3f1662c88"
	I0501 04:16:55.826217    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.132217    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4df6ba73bcf683d21156e67827524b826f94059250b12cf08abd23da8345923a"
	I0501 04:16:55.826252    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.132236    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a338ea43bd9b03a0a56c5b614e36fd54cdd707fb4c2f5819a814e4ffd9bdcb65"
	I0501 04:16:55.826252    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.139102    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f72a1c5b5cdd65332e27f08445a684fc2d2f586ab1b8a2fb2c5c0dfc02b71165"
	I0501 04:16:55.826326    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.158602    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="976a9ff433ccbe7ec8d65d4f2797a7885430504d883ab65524674ad5ab989737"
	I0501 04:16:55.826357    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.174190    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79bb6a06ed527b42fe74673579e4a788915c66cd3717c52a344c73e0b7d12b34"
	I0501 04:16:55.826357    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.191042    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79bf9ebb58e36ddfba4654e8de212598f75bb256849f4fa384c80d54954f68f5"
	I0501 04:16:55.826408    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.208222    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="baf9e690eb533d1d1d65dee3905f907946c145ab490fd4e62c3d724a0ba12193"
	I0501 04:16:55.826450    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214646    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a17001fd2508d58fea9b1ae465b65254-ca-certs\") pod \"kube-controller-manager-multinode-289800\" (UID: \"a17001fd2508d58fea9b1ae465b65254\") " pod="kube-system/kube-controller-manager-multinode-289800"
	I0501 04:16:55.826507    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214710    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a17001fd2508d58fea9b1ae465b65254-k8s-certs\") pod \"kube-controller-manager-multinode-289800\" (UID: \"a17001fd2508d58fea9b1ae465b65254\") " pod="kube-system/kube-controller-manager-multinode-289800"
	I0501 04:16:55.826551    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214752    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a17001fd2508d58fea9b1ae465b65254-kubeconfig\") pod \"kube-controller-manager-multinode-289800\" (UID: \"a17001fd2508d58fea9b1ae465b65254\") " pod="kube-system/kube-controller-manager-multinode-289800"
	I0501 04:16:55.826604    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214812    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8b70cd8d31103a1cfca45e9856766786-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-289800\" (UID: \"8b70cd8d31103a1cfca45e9856766786\") " pod="kube-system/kube-apiserver-multinode-289800"
	I0501 04:16:55.826604    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214855    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/b12e9024402f49cfac7440d6a2eaf42d-etcd-data\") pod \"etcd-multinode-289800\" (UID: \"b12e9024402f49cfac7440d6a2eaf42d\") " pod="kube-system/etcd-multinode-289800"
	I0501 04:16:55.826649    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214875    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/44d7830a7c97b8c7e460c0508d02be4e-kubeconfig\") pod \"kube-scheduler-multinode-289800\" (UID: \"44d7830a7c97b8c7e460c0508d02be4e\") " pod="kube-system/kube-scheduler-multinode-289800"
	I0501 04:16:55.826693    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214899    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8b70cd8d31103a1cfca45e9856766786-ca-certs\") pod \"kube-apiserver-multinode-289800\" (UID: \"8b70cd8d31103a1cfca45e9856766786\") " pod="kube-system/kube-apiserver-multinode-289800"
	I0501 04:16:55.826729    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214925    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8b70cd8d31103a1cfca45e9856766786-k8s-certs\") pod \"kube-apiserver-multinode-289800\" (UID: \"8b70cd8d31103a1cfca45e9856766786\") " pod="kube-system/kube-apiserver-multinode-289800"
	I0501 04:16:55.826801    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214950    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a17001fd2508d58fea9b1ae465b65254-flexvolume-dir\") pod \"kube-controller-manager-multinode-289800\" (UID: \"a17001fd2508d58fea9b1ae465b65254\") " pod="kube-system/kube-controller-manager-multinode-289800"
	I0501 04:16:55.826848    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214973    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a17001fd2508d58fea9b1ae465b65254-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-289800\" (UID: \"a17001fd2508d58fea9b1ae465b65254\") " pod="kube-system/kube-controller-manager-multinode-289800"
	I0501 04:16:55.826917    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214994    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/b12e9024402f49cfac7440d6a2eaf42d-etcd-certs\") pod \"etcd-multinode-289800\" (UID: \"b12e9024402f49cfac7440d6a2eaf42d\") " pod="kube-system/etcd-multinode-289800"
	I0501 04:16:55.826917    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.222614    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-289800"
	I0501 04:16:55.826917    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.223837    1525 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.28.209.199:8443: connect: connection refused" node="multinode-289800"
	I0501 04:16:55.826980    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.227891    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9971ef577f2f8634ce17f0dd1b9640fcf2695833e8dc85607abd2a82571746b8"
	I0501 04:16:55.826980    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.504248    1525 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-289800?timeout=10s\": dial tcp 172.28.209.199:8443: connect: connection refused" interval="800ms"
	I0501 04:16:55.826980    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.625269    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-289800"
	I0501 04:16:55.827080    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.625998    1525 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.28.209.199:8443: connect: connection refused" node="multinode-289800"
	I0501 04:16:55.827124    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: W0501 04:15:37.852634    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:55.827158    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.852740    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:55.827211    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: W0501 04:15:38.063749    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:55.827254    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: E0501 04:15:38.063859    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:55.827352    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: I0501 04:15:38.260487    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e076eed49263cec5b0b06bbaa425cab2bf4a4b0a05e6dfa37993b20dff5ed93"
	I0501 04:16:55.827398    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: E0501 04:15:38.306204    1525 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-289800?timeout=10s\": dial tcp 172.28.209.199:8443: connect: connection refused" interval="1.6s"
	I0501 04:16:55.827398    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: W0501 04:15:38.357883    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-289800&limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:55.827481    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: E0501 04:15:38.357983    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-289800&limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:55.827522    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: W0501 04:15:38.424248    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:55.827559    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: E0501 04:15:38.424377    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:55.827559    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: I0501 04:15:38.428960    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-289800"
	I0501 04:16:55.827642    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: E0501 04:15:38.431040    1525 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.28.209.199:8443: connect: connection refused" node="multinode-289800"
	I0501 04:16:55.827642    4352 command_runner.go:130] > May 01 04:15:40 multinode-289800 kubelet[1525]: I0501 04:15:40.032371    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-289800"
	I0501 04:16:55.827642    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.639150    1525 kubelet_node_status.go:112] "Node was previously registered" node="multinode-289800"
	I0501 04:16:55.827642    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.640030    1525 kubelet_node_status.go:76] "Successfully registered node" node="multinode-289800"
	I0501 04:16:55.827642    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.642970    1525 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0501 04:16:55.827642    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.644297    1525 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0501 04:16:55.827642    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.646032    1525 setters.go:580] "Node became not ready" node="multinode-289800" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-05-01T04:15:42Z","lastTransitionTime":"2024-05-01T04:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0501 04:16:55.827642    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.869832    1525 apiserver.go:52] "Watching apiserver"
	I0501 04:16:55.827642    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.875356    1525 topology_manager.go:215] "Topology Admit Handler" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-8w9hq"
	I0501 04:16:55.827642    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.875613    1525 topology_manager.go:215] "Topology Admit Handler" podUID="aba82e50-b8f8-40b4-b08a-6d045314d6b6" podNamespace="kube-system" podName="kube-proxy-bp9zx"
	I0501 04:16:55.827642    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.875753    1525 topology_manager.go:215] "Topology Admit Handler" podUID="0b91b14d-bed3-4889-b193-db53daccd395" podNamespace="kube-system" podName="coredns-7db6d8ff4d-x9zrw"
	I0501 04:16:55.827642    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.875936    1525 topology_manager.go:215] "Topology Admit Handler" podUID="72ef61d4-4437-40da-86e7-4d7eb386b6de" podNamespace="kube-system" podName="kindnet-vcxkr"
	I0501 04:16:55.827642    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.876061    1525 topology_manager.go:215] "Topology Admit Handler" podUID="b8d2a827-d9a6-419a-a076-c7695a16a2b5" podNamespace="kube-system" podName="storage-provisioner"
	I0501 04:16:55.827642    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.876192    1525 topology_manager.go:215] "Topology Admit Handler" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f" podNamespace="default" podName="busybox-fc5497c4f-cc6mk"
	I0501 04:16:55.827642    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.876527    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:55.827642    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.877384    1525 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-289800" podUID="96a8cf0b-45bc-4636-9264-a0da579b5fa8"
	I0501 04:16:55.827642    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.878678    1525 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-289800" podUID="a1b99f2b-8aed-4037-956a-13bde4551a72"
	I0501 04:16:55.827642    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.879595    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:55.827642    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.884364    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:55.827642    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.910944    1525 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0501 04:16:55.828255    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.938877    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/72ef61d4-4437-40da-86e7-4d7eb386b6de-xtables-lock\") pod \"kindnet-vcxkr\" (UID: \"72ef61d4-4437-40da-86e7-4d7eb386b6de\") " pod="kube-system/kindnet-vcxkr"
	I0501 04:16:55.828255    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.939029    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b8d2a827-d9a6-419a-a076-c7695a16a2b5-tmp\") pod \"storage-provisioner\" (UID: \"b8d2a827-d9a6-419a-a076-c7695a16a2b5\") " pod="kube-system/storage-provisioner"
	I0501 04:16:55.828316    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.939149    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aba82e50-b8f8-40b4-b08a-6d045314d6b6-xtables-lock\") pod \"kube-proxy-bp9zx\" (UID: \"aba82e50-b8f8-40b4-b08a-6d045314d6b6\") " pod="kube-system/kube-proxy-bp9zx"
	I0501 04:16:55.828316    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.939242    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/72ef61d4-4437-40da-86e7-4d7eb386b6de-cni-cfg\") pod \"kindnet-vcxkr\" (UID: \"72ef61d4-4437-40da-86e7-4d7eb386b6de\") " pod="kube-system/kindnet-vcxkr"
	I0501 04:16:55.828316    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.939318    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/72ef61d4-4437-40da-86e7-4d7eb386b6de-lib-modules\") pod \"kindnet-vcxkr\" (UID: \"72ef61d4-4437-40da-86e7-4d7eb386b6de\") " pod="kube-system/kindnet-vcxkr"
	I0501 04:16:55.828316    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.939427    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aba82e50-b8f8-40b4-b08a-6d045314d6b6-lib-modules\") pod \"kube-proxy-bp9zx\" (UID: \"aba82e50-b8f8-40b4-b08a-6d045314d6b6\") " pod="kube-system/kube-proxy-bp9zx"
	I0501 04:16:55.828316    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.940207    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:16:55.828316    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.940401    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume podName:e3a349e9-97d8-4bba-8eac-deff1948600a nodeName:}" failed. No retries permitted until 2024-05-01 04:15:43.440364296 +0000 UTC m=+6.726863016 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume") pod "coredns-7db6d8ff4d-8w9hq" (UID: "e3a349e9-97d8-4bba-8eac-deff1948600a") : object "kube-system"/"coredns" not registered
	I0501 04:16:55.828316    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.940680    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:16:55.828316    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.940822    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume podName:0b91b14d-bed3-4889-b193-db53daccd395 nodeName:}" failed. No retries permitted until 2024-05-01 04:15:43.440808324 +0000 UTC m=+6.727307144 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume") pod "coredns-7db6d8ff4d-x9zrw" (UID: "0b91b14d-bed3-4889-b193-db53daccd395") : object "kube-system"/"coredns" not registered
	I0501 04:16:55.828316    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.948736    1525 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-289800"
	I0501 04:16:55.828316    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.958916    1525 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-289800"
	I0501 04:16:55.828316    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.975690    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:55.828316    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.975737    1525 projected.go:200] Error preparing data for projected volume kube-api-access-4r64v for pod default/busybox-fc5497c4f-cc6mk: object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:55.828316    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.975832    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v podName:7f61e6ee-cf9a-4903-ba51-2a3b6804717f nodeName:}" failed. No retries permitted until 2024-05-01 04:15:43.475811436 +0000 UTC m=+6.762310156 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-4r64v" (UniqueName: "kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v") pod "busybox-fc5497c4f-cc6mk" (UID: "7f61e6ee-cf9a-4903-ba51-2a3b6804717f") : object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:55.828316    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: I0501 04:15:43.052812    1525 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c17e9f88f256f5527a6565eb2da75f63" path="/var/lib/kubelet/pods/c17e9f88f256f5527a6565eb2da75f63/volumes"
	I0501 04:16:55.828316    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: I0501 04:15:43.054400    1525 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc7b6f2a7c826774b66af910f598e965" path="/var/lib/kubelet/pods/fc7b6f2a7c826774b66af910f598e965/volumes"
	I0501 04:16:55.828316    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: I0501 04:15:43.170146    1525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-289800" podStartSLOduration=1.170112215 podStartE2EDuration="1.170112215s" podCreationTimestamp="2024-05-01 04:15:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-01 04:15:43.140058816 +0000 UTC m=+6.426557536" watchObservedRunningTime="2024-05-01 04:15:43.170112215 +0000 UTC m=+6.456610935"
	I0501 04:16:55.828316    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: I0501 04:15:43.170304    1525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-289800" podStartSLOduration=1.170298327 podStartE2EDuration="1.170298327s" podCreationTimestamp="2024-05-01 04:15:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-01 04:15:43.16893474 +0000 UTC m=+6.455433460" watchObservedRunningTime="2024-05-01 04:15:43.170298327 +0000 UTC m=+6.456797147"
	I0501 04:16:55.828316    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: E0501 04:15:43.444132    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:16:55.828896    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: E0501 04:15:43.444229    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume podName:e3a349e9-97d8-4bba-8eac-deff1948600a nodeName:}" failed. No retries permitted until 2024-05-01 04:15:44.444209637 +0000 UTC m=+7.730708457 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume") pod "coredns-7db6d8ff4d-8w9hq" (UID: "e3a349e9-97d8-4bba-8eac-deff1948600a") : object "kube-system"/"coredns" not registered
	I0501 04:16:55.828896    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: E0501 04:15:43.444591    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:16:55.829044    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: E0501 04:15:43.444633    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume podName:0b91b14d-bed3-4889-b193-db53daccd395 nodeName:}" failed. No retries permitted until 2024-05-01 04:15:44.444622763 +0000 UTC m=+7.731121483 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume") pod "coredns-7db6d8ff4d-x9zrw" (UID: "0b91b14d-bed3-4889-b193-db53daccd395") : object "kube-system"/"coredns" not registered
	I0501 04:16:55.829088    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: E0501 04:15:43.544921    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:55.829088    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: E0501 04:15:43.545047    1525 projected.go:200] Error preparing data for projected volume kube-api-access-4r64v for pod default/busybox-fc5497c4f-cc6mk: object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:55.829146    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: E0501 04:15:43.545141    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v podName:7f61e6ee-cf9a-4903-ba51-2a3b6804717f nodeName:}" failed. No retries permitted until 2024-05-01 04:15:44.545110913 +0000 UTC m=+7.831609633 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-4r64v" (UniqueName: "kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v") pod "busybox-fc5497c4f-cc6mk" (UID: "7f61e6ee-cf9a-4903-ba51-2a3b6804717f") : object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:55.829146    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: I0501 04:15:44.039213    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9055d30512df38a5bce19ed5afcfdc450a7bd87a1eb169342c8bc7a42e81666f"
	I0501 04:16:55.829146    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: I0501 04:15:44.378804    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65bff4b6a8ae020fee0da9e1a818c4bac4d9a43a831eb7b5550b254c1f181ec7"
	I0501 04:16:55.829146    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: E0501 04:15:44.401946    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:55.829146    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: I0501 04:15:44.402229    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f79e484da66a15667f79326d8bae0a570ba551fd2e02926fd663a292f6b15752"
	I0501 04:16:55.829146    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: I0501 04:15:44.402476    1525 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-289800" podUID="96a8cf0b-45bc-4636-9264-a0da579b5fa8"
	I0501 04:16:55.829146    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: I0501 04:15:44.403391    1525 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-289800" podUID="a1b99f2b-8aed-4037-956a-13bde4551a72"
	I0501 04:16:55.829146    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: E0501 04:15:44.454688    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:16:55.829146    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: E0501 04:15:44.454983    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume podName:0b91b14d-bed3-4889-b193-db53daccd395 nodeName:}" failed. No retries permitted until 2024-05-01 04:15:46.454902809 +0000 UTC m=+9.741401629 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume") pod "coredns-7db6d8ff4d-x9zrw" (UID: "0b91b14d-bed3-4889-b193-db53daccd395") : object "kube-system"/"coredns" not registered
	I0501 04:16:55.829146    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: E0501 04:15:44.455515    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:16:55.829146    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: E0501 04:15:44.455560    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume podName:e3a349e9-97d8-4bba-8eac-deff1948600a nodeName:}" failed. No retries permitted until 2024-05-01 04:15:46.45554895 +0000 UTC m=+9.742047670 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume") pod "coredns-7db6d8ff4d-8w9hq" (UID: "e3a349e9-97d8-4bba-8eac-deff1948600a") : object "kube-system"/"coredns" not registered
	I0501 04:16:55.829719    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: E0501 04:15:44.555732    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:55.829719    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: E0501 04:15:44.555836    1525 projected.go:200] Error preparing data for projected volume kube-api-access-4r64v for pod default/busybox-fc5497c4f-cc6mk: object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:55.829985    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: E0501 04:15:44.555920    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v podName:7f61e6ee-cf9a-4903-ba51-2a3b6804717f nodeName:}" failed. No retries permitted until 2024-05-01 04:15:46.55587479 +0000 UTC m=+9.842373510 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-4r64v" (UniqueName: "kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v") pod "busybox-fc5497c4f-cc6mk" (UID: "7f61e6ee-cf9a-4903-ba51-2a3b6804717f") : object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:55.830109    4352 command_runner.go:130] > May 01 04:15:45 multinode-289800 kubelet[1525]: E0501 04:15:45.028227    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:55.830287    4352 command_runner.go:130] > May 01 04:15:45 multinode-289800 kubelet[1525]: E0501 04:15:45.028491    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:55.830355    4352 command_runner.go:130] > May 01 04:15:46 multinode-289800 kubelet[1525]: E0501 04:15:46.023829    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:55.830478    4352 command_runner.go:130] > May 01 04:15:46 multinode-289800 kubelet[1525]: E0501 04:15:46.486637    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:16:55.830566    4352 command_runner.go:130] > May 01 04:15:46 multinode-289800 kubelet[1525]: E0501 04:15:46.486963    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume podName:e3a349e9-97d8-4bba-8eac-deff1948600a nodeName:}" failed. No retries permitted until 2024-05-01 04:15:50.486942526 +0000 UTC m=+13.773441346 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume") pod "coredns-7db6d8ff4d-8w9hq" (UID: "e3a349e9-97d8-4bba-8eac-deff1948600a") : object "kube-system"/"coredns" not registered
	I0501 04:16:55.830566    4352 command_runner.go:130] > May 01 04:15:46 multinode-289800 kubelet[1525]: E0501 04:15:46.488686    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:16:55.830566    4352 command_runner.go:130] > May 01 04:15:46 multinode-289800 kubelet[1525]: E0501 04:15:46.489077    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume podName:0b91b14d-bed3-4889-b193-db53daccd395 nodeName:}" failed. No retries permitted until 2024-05-01 04:15:50.488847647 +0000 UTC m=+13.775346467 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume") pod "coredns-7db6d8ff4d-x9zrw" (UID: "0b91b14d-bed3-4889-b193-db53daccd395") : object "kube-system"/"coredns" not registered
	I0501 04:16:55.830566    4352 command_runner.go:130] > May 01 04:15:46 multinode-289800 kubelet[1525]: E0501 04:15:46.587833    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:55.830566    4352 command_runner.go:130] > May 01 04:15:46 multinode-289800 kubelet[1525]: E0501 04:15:46.587977    1525 projected.go:200] Error preparing data for projected volume kube-api-access-4r64v for pod default/busybox-fc5497c4f-cc6mk: object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:55.830566    4352 command_runner.go:130] > May 01 04:15:46 multinode-289800 kubelet[1525]: E0501 04:15:46.588185    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v podName:7f61e6ee-cf9a-4903-ba51-2a3b6804717f nodeName:}" failed. No retries permitted until 2024-05-01 04:15:50.588160623 +0000 UTC m=+13.874659443 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-4r64v" (UniqueName: "kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v") pod "busybox-fc5497c4f-cc6mk" (UID: "7f61e6ee-cf9a-4903-ba51-2a3b6804717f") : object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:55.830566    4352 command_runner.go:130] > May 01 04:15:47 multinode-289800 kubelet[1525]: E0501 04:15:47.027084    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:55.830566    4352 command_runner.go:130] > May 01 04:15:47 multinode-289800 kubelet[1525]: E0501 04:15:47.028397    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:55.830566    4352 command_runner.go:130] > May 01 04:15:48 multinode-289800 kubelet[1525]: E0501 04:15:48.022969    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:55.830566    4352 command_runner.go:130] > May 01 04:15:49 multinode-289800 kubelet[1525]: E0501 04:15:49.024347    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:55.830566    4352 command_runner.go:130] > May 01 04:15:49 multinode-289800 kubelet[1525]: E0501 04:15:49.025248    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:55.830566    4352 command_runner.go:130] > May 01 04:15:50 multinode-289800 kubelet[1525]: E0501 04:15:50.024175    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:55.830566    4352 command_runner.go:130] > May 01 04:15:50 multinode-289800 kubelet[1525]: E0501 04:15:50.523387    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:16:55.830566    4352 command_runner.go:130] > May 01 04:15:50 multinode-289800 kubelet[1525]: E0501 04:15:50.523508    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume podName:0b91b14d-bed3-4889-b193-db53daccd395 nodeName:}" failed. No retries permitted until 2024-05-01 04:15:58.523488538 +0000 UTC m=+21.809987358 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume") pod "coredns-7db6d8ff4d-x9zrw" (UID: "0b91b14d-bed3-4889-b193-db53daccd395") : object "kube-system"/"coredns" not registered
	I0501 04:16:55.830566    4352 command_runner.go:130] > May 01 04:15:50 multinode-289800 kubelet[1525]: E0501 04:15:50.524104    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:16:55.831118    4352 command_runner.go:130] > May 01 04:15:50 multinode-289800 kubelet[1525]: E0501 04:15:50.524150    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume podName:e3a349e9-97d8-4bba-8eac-deff1948600a nodeName:}" failed. No retries permitted until 2024-05-01 04:15:58.524137716 +0000 UTC m=+21.810636436 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume") pod "coredns-7db6d8ff4d-8w9hq" (UID: "e3a349e9-97d8-4bba-8eac-deff1948600a") : object "kube-system"/"coredns" not registered
	I0501 04:16:55.831240    4352 command_runner.go:130] > May 01 04:15:50 multinode-289800 kubelet[1525]: E0501 04:15:50.624897    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:55.831329    4352 command_runner.go:130] > May 01 04:15:50 multinode-289800 kubelet[1525]: E0501 04:15:50.625357    1525 projected.go:200] Error preparing data for projected volume kube-api-access-4r64v for pod default/busybox-fc5497c4f-cc6mk: object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:55.831440    4352 command_runner.go:130] > May 01 04:15:50 multinode-289800 kubelet[1525]: E0501 04:15:50.625742    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v podName:7f61e6ee-cf9a-4903-ba51-2a3b6804717f nodeName:}" failed. No retries permitted until 2024-05-01 04:15:58.625719971 +0000 UTC m=+21.912218691 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-4r64v" (UniqueName: "kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v") pod "busybox-fc5497c4f-cc6mk" (UID: "7f61e6ee-cf9a-4903-ba51-2a3b6804717f") : object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:55.831506    4352 command_runner.go:130] > May 01 04:15:51 multinode-289800 kubelet[1525]: E0501 04:15:51.024464    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:55.831614    4352 command_runner.go:130] > May 01 04:15:51 multinode-289800 kubelet[1525]: E0501 04:15:51.024959    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:55.831682    4352 command_runner.go:130] > May 01 04:15:52 multinode-289800 kubelet[1525]: E0501 04:15:52.024016    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:55.831788    4352 command_runner.go:130] > May 01 04:15:53 multinode-289800 kubelet[1525]: E0501 04:15:53.023669    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:55.831917    4352 command_runner.go:130] > May 01 04:15:53 multinode-289800 kubelet[1525]: E0501 04:15:53.024381    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:55.831939    4352 command_runner.go:130] > May 01 04:15:54 multinode-289800 kubelet[1525]: E0501 04:15:54.023529    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:55.832105    4352 command_runner.go:130] > May 01 04:15:55 multinode-289800 kubelet[1525]: E0501 04:15:55.023399    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:55.832105    4352 command_runner.go:130] > May 01 04:15:55 multinode-289800 kubelet[1525]: E0501 04:15:55.024039    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:55.832105    4352 command_runner.go:130] > May 01 04:15:56 multinode-289800 kubelet[1525]: E0501 04:15:56.023961    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:55.832105    4352 command_runner.go:130] > May 01 04:15:57 multinode-289800 kubelet[1525]: E0501 04:15:57.024583    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:55.832105    4352 command_runner.go:130] > May 01 04:15:57 multinode-289800 kubelet[1525]: E0501 04:15:57.025562    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:55.832105    4352 command_runner.go:130] > May 01 04:15:58 multinode-289800 kubelet[1525]: E0501 04:15:58.024494    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:55.832105    4352 command_runner.go:130] > May 01 04:15:58 multinode-289800 kubelet[1525]: E0501 04:15:58.606520    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:16:55.832105    4352 command_runner.go:130] > May 01 04:15:58 multinode-289800 kubelet[1525]: E0501 04:15:58.606584    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume podName:e3a349e9-97d8-4bba-8eac-deff1948600a nodeName:}" failed. No retries permitted until 2024-05-01 04:16:14.606569125 +0000 UTC m=+37.893067945 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume") pod "coredns-7db6d8ff4d-8w9hq" (UID: "e3a349e9-97d8-4bba-8eac-deff1948600a") : object "kube-system"/"coredns" not registered
	I0501 04:16:55.832105    4352 command_runner.go:130] > May 01 04:15:58 multinode-289800 kubelet[1525]: E0501 04:15:58.607052    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:16:55.832105    4352 command_runner.go:130] > May 01 04:15:58 multinode-289800 kubelet[1525]: E0501 04:15:58.607095    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume podName:0b91b14d-bed3-4889-b193-db53daccd395 nodeName:}" failed. No retries permitted until 2024-05-01 04:16:14.607084827 +0000 UTC m=+37.893583547 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume") pod "coredns-7db6d8ff4d-x9zrw" (UID: "0b91b14d-bed3-4889-b193-db53daccd395") : object "kube-system"/"coredns" not registered
	I0501 04:16:55.832105    4352 command_runner.go:130] > May 01 04:15:58 multinode-289800 kubelet[1525]: E0501 04:15:58.707959    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:55.832105    4352 command_runner.go:130] > May 01 04:15:58 multinode-289800 kubelet[1525]: E0501 04:15:58.708171    1525 projected.go:200] Error preparing data for projected volume kube-api-access-4r64v for pod default/busybox-fc5497c4f-cc6mk: object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:55.832705    4352 command_runner.go:130] > May 01 04:15:58 multinode-289800 kubelet[1525]: E0501 04:15:58.708240    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v podName:7f61e6ee-cf9a-4903-ba51-2a3b6804717f nodeName:}" failed. No retries permitted until 2024-05-01 04:16:14.708221599 +0000 UTC m=+37.994720419 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-4r64v" (UniqueName: "kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v") pod "busybox-fc5497c4f-cc6mk" (UID: "7f61e6ee-cf9a-4903-ba51-2a3b6804717f") : object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:55.832801    4352 command_runner.go:130] > May 01 04:15:59 multinode-289800 kubelet[1525]: E0501 04:15:59.024158    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:55.832924    4352 command_runner.go:130] > May 01 04:15:59 multinode-289800 kubelet[1525]: E0501 04:15:59.025055    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:55.832996    4352 command_runner.go:130] > May 01 04:16:00 multinode-289800 kubelet[1525]: E0501 04:16:00.023216    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:55.832996    4352 command_runner.go:130] > May 01 04:16:01 multinode-289800 kubelet[1525]: E0501 04:16:01.024905    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:55.832996    4352 command_runner.go:130] > May 01 04:16:01 multinode-289800 kubelet[1525]: E0501 04:16:01.025585    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:55.832996    4352 command_runner.go:130] > May 01 04:16:02 multinode-289800 kubelet[1525]: E0501 04:16:02.024143    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:55.832996    4352 command_runner.go:130] > May 01 04:16:03 multinode-289800 kubelet[1525]: E0501 04:16:03.023409    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:55.832996    4352 command_runner.go:130] > May 01 04:16:03 multinode-289800 kubelet[1525]: E0501 04:16:03.024062    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:55.832996    4352 command_runner.go:130] > May 01 04:16:04 multinode-289800 kubelet[1525]: E0501 04:16:04.023182    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:55.832996    4352 command_runner.go:130] > May 01 04:16:05 multinode-289800 kubelet[1525]: E0501 04:16:05.028055    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:55.832996    4352 command_runner.go:130] > May 01 04:16:05 multinode-289800 kubelet[1525]: E0501 04:16:05.029254    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:55.832996    4352 command_runner.go:130] > May 01 04:16:06 multinode-289800 kubelet[1525]: E0501 04:16:06.024522    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:55.832996    4352 command_runner.go:130] > May 01 04:16:07 multinode-289800 kubelet[1525]: E0501 04:16:07.024384    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:55.832996    4352 command_runner.go:130] > May 01 04:16:07 multinode-289800 kubelet[1525]: E0501 04:16:07.025431    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:55.833527    4352 command_runner.go:130] > May 01 04:16:08 multinode-289800 kubelet[1525]: E0501 04:16:08.024168    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:55.833605    4352 command_runner.go:130] > May 01 04:16:09 multinode-289800 kubelet[1525]: E0501 04:16:09.024117    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:55.833678    4352 command_runner.go:130] > May 01 04:16:09 multinode-289800 kubelet[1525]: E0501 04:16:09.025560    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:55.833755    4352 command_runner.go:130] > May 01 04:16:10 multinode-289800 kubelet[1525]: E0501 04:16:10.023881    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:55.833755    4352 command_runner.go:130] > May 01 04:16:11 multinode-289800 kubelet[1525]: E0501 04:16:11.023619    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:55.833755    4352 command_runner.go:130] > May 01 04:16:11 multinode-289800 kubelet[1525]: E0501 04:16:11.024277    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:55.833755    4352 command_runner.go:130] > May 01 04:16:12 multinode-289800 kubelet[1525]: E0501 04:16:12.024236    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:55.833755    4352 command_runner.go:130] > May 01 04:16:13 multinode-289800 kubelet[1525]: E0501 04:16:13.023153    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:55.833755    4352 command_runner.go:130] > May 01 04:16:13 multinode-289800 kubelet[1525]: E0501 04:16:13.023926    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:55.833755    4352 command_runner.go:130] > May 01 04:16:14 multinode-289800 kubelet[1525]: E0501 04:16:14.023335    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:55.833755    4352 command_runner.go:130] > May 01 04:16:14 multinode-289800 kubelet[1525]: E0501 04:16:14.657138    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:16:55.833755    4352 command_runner.go:130] > May 01 04:16:14 multinode-289800 kubelet[1525]: E0501 04:16:14.657461    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume podName:e3a349e9-97d8-4bba-8eac-deff1948600a nodeName:}" failed. No retries permitted until 2024-05-01 04:16:46.657440103 +0000 UTC m=+69.943938823 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume") pod "coredns-7db6d8ff4d-8w9hq" (UID: "e3a349e9-97d8-4bba-8eac-deff1948600a") : object "kube-system"/"coredns" not registered
	I0501 04:16:55.833755    4352 command_runner.go:130] > May 01 04:16:14 multinode-289800 kubelet[1525]: E0501 04:16:14.657218    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:16:55.833755    4352 command_runner.go:130] > May 01 04:16:14 multinode-289800 kubelet[1525]: E0501 04:16:14.657858    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume podName:0b91b14d-bed3-4889-b193-db53daccd395 nodeName:}" failed. No retries permitted until 2024-05-01 04:16:46.65783162 +0000 UTC m=+69.944330440 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume") pod "coredns-7db6d8ff4d-x9zrw" (UID: "0b91b14d-bed3-4889-b193-db53daccd395") : object "kube-system"/"coredns" not registered
	I0501 04:16:55.833755    4352 command_runner.go:130] > May 01 04:16:14 multinode-289800 kubelet[1525]: E0501 04:16:14.758303    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:55.833755    4352 command_runner.go:130] > May 01 04:16:14 multinode-289800 kubelet[1525]: E0501 04:16:14.758421    1525 projected.go:200] Error preparing data for projected volume kube-api-access-4r64v for pod default/busybox-fc5497c4f-cc6mk: object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:55.833755    4352 command_runner.go:130] > May 01 04:16:14 multinode-289800 kubelet[1525]: E0501 04:16:14.758487    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v podName:7f61e6ee-cf9a-4903-ba51-2a3b6804717f nodeName:}" failed. No retries permitted until 2024-05-01 04:16:46.758469083 +0000 UTC m=+70.044967903 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-4r64v" (UniqueName: "kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v") pod "busybox-fc5497c4f-cc6mk" (UID: "7f61e6ee-cf9a-4903-ba51-2a3b6804717f") : object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:55.834286    4352 command_runner.go:130] > May 01 04:16:15 multinode-289800 kubelet[1525]: E0501 04:16:15.023369    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:55.834521    4352 command_runner.go:130] > May 01 04:16:15 multinode-289800 kubelet[1525]: E0501 04:16:15.024797    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:55.834598    4352 command_runner.go:130] > May 01 04:16:15 multinode-289800 kubelet[1525]: I0501 04:16:15.886834    1525 scope.go:117] "RemoveContainer" containerID="ee2238f98e350e8d80528b60fc5b614ce6048d8b34af2034a9947e26d8e6beab"
	I0501 04:16:55.834598    4352 command_runner.go:130] > May 01 04:16:15 multinode-289800 kubelet[1525]: I0501 04:16:15.887225    1525 scope.go:117] "RemoveContainer" containerID="01deddefba52af094ad6ca083f5c2bee336ace05959dec5d34b15a9ba6cc2539"
	I0501 04:16:55.834664    4352 command_runner.go:130] > May 01 04:16:15 multinode-289800 kubelet[1525]: E0501 04:16:15.887510    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b8d2a827-d9a6-419a-a076-c7695a16a2b5)\"" pod="kube-system/storage-provisioner" podUID="b8d2a827-d9a6-419a-a076-c7695a16a2b5"
	I0501 04:16:55.834714    4352 command_runner.go:130] > May 01 04:16:16 multinode-289800 kubelet[1525]: E0501 04:16:16.024360    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:55.834714    4352 command_runner.go:130] > May 01 04:16:16 multinode-289800 kubelet[1525]: I0501 04:16:16.618138    1525 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
	I0501 04:16:55.834714    4352 command_runner.go:130] > May 01 04:16:29 multinode-289800 kubelet[1525]: I0501 04:16:29.024408    1525 scope.go:117] "RemoveContainer" containerID="01deddefba52af094ad6ca083f5c2bee336ace05959dec5d34b15a9ba6cc2539"
	I0501 04:16:55.834714    4352 command_runner.go:130] > May 01 04:16:37 multinode-289800 kubelet[1525]: I0501 04:16:37.040204    1525 scope.go:117] "RemoveContainer" containerID="3244d1ee5ab428faf09a962609f2c940c36a998727a01b873d382eb5ee600ca3"
	I0501 04:16:55.834714    4352 command_runner.go:130] > May 01 04:16:37 multinode-289800 kubelet[1525]: E0501 04:16:37.057362    1525 iptables.go:577] "Could not set up iptables canary" err=<
	I0501 04:16:55.834714    4352 command_runner.go:130] > May 01 04:16:37 multinode-289800 kubelet[1525]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0501 04:16:55.834714    4352 command_runner.go:130] > May 01 04:16:37 multinode-289800 kubelet[1525]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0501 04:16:55.834714    4352 command_runner.go:130] > May 01 04:16:37 multinode-289800 kubelet[1525]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0501 04:16:55.834714    4352 command_runner.go:130] > May 01 04:16:37 multinode-289800 kubelet[1525]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0501 04:16:55.834714    4352 command_runner.go:130] > May 01 04:16:37 multinode-289800 kubelet[1525]: I0501 04:16:37.089866    1525 scope.go:117] "RemoveContainer" containerID="bbbe9bf276852c1e75b7b472a87e95dcf9a0871f6273a4c312d445eb91dfe06d"
	I0501 04:16:55.834714    4352 command_runner.go:130] > May 01 04:16:37 multinode-289800 kubelet[1525]: E0501 04:16:37.204127    1525 kuberuntime_manager.go:1450] "PodSandboxStatus of sandbox for pod" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 976a9ff433ccbe7ec8d65d4f2797a7885430504d883ab65524674ad5ab989737" podSandboxID="976a9ff433ccbe7ec8d65d4f2797a7885430504d883ab65524674ad5ab989737" pod="kube-system/kube-apiserver-multinode-289800"
	I0501 04:16:55.834714    4352 command_runner.go:130] > May 01 04:16:37 multinode-289800 kubelet[1525]: E0501 04:16:37.204257    1525 generic.go:453] "PLEG: Write status" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 976a9ff433ccbe7ec8d65d4f2797a7885430504d883ab65524674ad5ab989737" pod="kube-system/kube-apiserver-multinode-289800"
	I0501 04:16:55.834714    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 kubelet[1525]: I0501 04:16:47.967198    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c1e1e1d13f303dcd2ce93f0a883ff4415e684c864a3974a393b2aaba3328348"
	I0501 04:16:55.834714    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 kubelet[1525]: I0501 04:16:48.001452    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba9a40d190b009b916e22db66996ed829a6cc973db25f55dae89d747629a546b"
	I0501 04:16:55.892462    4352 logs.go:123] Gathering logs for kube-apiserver [18cd30f3ad28] ...
	I0501 04:16:55.892462    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cd30f3ad28"
	I0501 04:16:55.927845    4352 command_runner.go:130] ! I0501 04:15:39.445795       1 options.go:221] external host was not specified, using 172.28.209.199
	I0501 04:16:55.928388    4352 command_runner.go:130] ! I0501 04:15:39.453956       1 server.go:148] Version: v1.30.0
	I0501 04:16:55.928388    4352 command_runner.go:130] ! I0501 04:15:39.454357       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:16:55.928388    4352 command_runner.go:130] ! I0501 04:15:40.258184       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0501 04:16:55.928388    4352 command_runner.go:130] ! I0501 04:15:40.258591       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0501 04:16:55.928388    4352 command_runner.go:130] ! I0501 04:15:40.260085       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0501 04:16:55.928802    4352 command_runner.go:130] ! I0501 04:15:40.260405       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0501 04:16:55.928853    4352 command_runner.go:130] ! I0501 04:15:40.261810       1 instance.go:299] Using reconciler: lease
	I0501 04:16:55.928853    4352 command_runner.go:130] ! I0501 04:15:40.801281       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0501 04:16:55.928853    4352 command_runner.go:130] ! W0501 04:15:40.801386       1 genericapiserver.go:733] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:55.928853    4352 command_runner.go:130] ! I0501 04:15:41.090803       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0501 04:16:55.928853    4352 command_runner.go:130] ! I0501 04:15:41.091252       1 instance.go:696] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0501 04:16:55.929012    4352 command_runner.go:130] ! I0501 04:15:41.359171       1 instance.go:696] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0501 04:16:55.929113    4352 command_runner.go:130] ! I0501 04:15:41.532740       1 instance.go:696] API group "resource.k8s.io" is not enabled, skipping.
	I0501 04:16:55.929153    4352 command_runner.go:130] ! I0501 04:15:41.570911       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0501 04:16:55.929198    4352 command_runner.go:130] ! W0501 04:15:41.571018       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:55.929198    4352 command_runner.go:130] ! W0501 04:15:41.571046       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0501 04:16:55.929360    4352 command_runner.go:130] ! I0501 04:15:41.571875       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0501 04:16:55.929481    4352 command_runner.go:130] ! W0501 04:15:41.572053       1 genericapiserver.go:733] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:55.929536    4352 command_runner.go:130] ! I0501 04:15:41.573317       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0501 04:16:55.929536    4352 command_runner.go:130] ! I0501 04:15:41.574692       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0501 04:16:55.929597    4352 command_runner.go:130] ! W0501 04:15:41.574726       1 genericapiserver.go:733] Skipping API autoscaling/v2beta1 because it has no resources.
	I0501 04:16:55.929597    4352 command_runner.go:130] ! W0501 04:15:41.574734       1 genericapiserver.go:733] Skipping API autoscaling/v2beta2 because it has no resources.
	I0501 04:16:55.929597    4352 command_runner.go:130] ! I0501 04:15:41.576633       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0501 04:16:55.929597    4352 command_runner.go:130] ! W0501 04:15:41.576726       1 genericapiserver.go:733] Skipping API batch/v1beta1 because it has no resources.
	I0501 04:16:55.929597    4352 command_runner.go:130] ! I0501 04:15:41.577645       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0501 04:16:55.929597    4352 command_runner.go:130] ! W0501 04:15:41.577739       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:55.929597    4352 command_runner.go:130] ! W0501 04:15:41.577748       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0501 04:16:55.929597    4352 command_runner.go:130] ! I0501 04:15:41.578543       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0501 04:16:55.929597    4352 command_runner.go:130] ! W0501 04:15:41.578618       1 genericapiserver.go:733] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:55.929597    4352 command_runner.go:130] ! W0501 04:15:41.578731       1 genericapiserver.go:733] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:55.929597    4352 command_runner.go:130] ! I0501 04:15:41.579623       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0501 04:16:55.929597    4352 command_runner.go:130] ! I0501 04:15:41.582482       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0501 04:16:55.929597    4352 command_runner.go:130] ! W0501 04:15:41.582572       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:55.929597    4352 command_runner.go:130] ! W0501 04:15:41.582581       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0501 04:16:55.929597    4352 command_runner.go:130] ! I0501 04:15:41.583284       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0501 04:16:55.929597    4352 command_runner.go:130] ! W0501 04:15:41.583417       1 genericapiserver.go:733] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:55.929597    4352 command_runner.go:130] ! W0501 04:15:41.583428       1 genericapiserver.go:733] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0501 04:16:55.929597    4352 command_runner.go:130] ! I0501 04:15:41.585084       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0501 04:16:55.929597    4352 command_runner.go:130] ! W0501 04:15:41.585203       1 genericapiserver.go:733] Skipping API policy/v1beta1 because it has no resources.
	I0501 04:16:55.929597    4352 command_runner.go:130] ! I0501 04:15:41.588956       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0501 04:16:55.929597    4352 command_runner.go:130] ! W0501 04:15:41.589055       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:55.929597    4352 command_runner.go:130] ! W0501 04:15:41.589067       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0501 04:16:55.929597    4352 command_runner.go:130] ! I0501 04:15:41.589951       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0501 04:16:55.929597    4352 command_runner.go:130] ! W0501 04:15:41.590056       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:55.929597    4352 command_runner.go:130] ! W0501 04:15:41.590066       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0501 04:16:55.930143    4352 command_runner.go:130] ! I0501 04:15:41.593577       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0501 04:16:55.930143    4352 command_runner.go:130] ! W0501 04:15:41.593674       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:55.930312    4352 command_runner.go:130] ! W0501 04:15:41.593684       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0501 04:16:55.930389    4352 command_runner.go:130] ! I0501 04:15:41.595694       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0501 04:16:55.930389    4352 command_runner.go:130] ! I0501 04:15:41.597680       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0501 04:16:55.930509    4352 command_runner.go:130] ! W0501 04:15:41.597864       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0501 04:16:55.930570    4352 command_runner.go:130] ! W0501 04:15:41.597875       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:55.930570    4352 command_runner.go:130] ! I0501 04:15:41.603955       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0501 04:16:55.930644    4352 command_runner.go:130] ! W0501 04:15:41.604059       1 genericapiserver.go:733] Skipping API apps/v1beta2 because it has no resources.
	I0501 04:16:55.930644    4352 command_runner.go:130] ! W0501 04:15:41.604069       1 genericapiserver.go:733] Skipping API apps/v1beta1 because it has no resources.
	I0501 04:16:55.930709    4352 command_runner.go:130] ! I0501 04:15:41.607445       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0501 04:16:55.930709    4352 command_runner.go:130] ! W0501 04:15:41.607533       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:55.930786    4352 command_runner.go:130] ! W0501 04:15:41.607543       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0501 04:16:55.930786    4352 command_runner.go:130] ! I0501 04:15:41.608797       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0501 04:16:55.930851    4352 command_runner.go:130] ! W0501 04:15:41.608817       1 genericapiserver.go:733] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:55.930911    4352 command_runner.go:130] ! I0501 04:15:41.625599       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0501 04:16:55.930911    4352 command_runner.go:130] ! W0501 04:15:41.625618       1 genericapiserver.go:733] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:55.930991    4352 command_runner.go:130] ! I0501 04:15:42.332139       1 secure_serving.go:213] Serving securely on [::]:8443
	I0501 04:16:55.930991    4352 command_runner.go:130] ! I0501 04:15:42.332337       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0501 04:16:55.931053    4352 command_runner.go:130] ! I0501 04:15:42.332595       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0501 04:16:55.931241    4352 command_runner.go:130] ! I0501 04:15:42.333006       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0501 04:16:55.931293    4352 command_runner.go:130] ! I0501 04:15:42.333577       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0501 04:16:55.931361    4352 command_runner.go:130] ! I0501 04:15:42.333909       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0501 04:16:55.931361    4352 command_runner.go:130] ! I0501 04:15:42.334990       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0501 04:16:55.931361    4352 command_runner.go:130] ! I0501 04:15:42.335027       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0501 04:16:55.931429    4352 command_runner.go:130] ! I0501 04:15:42.335107       1 aggregator.go:163] waiting for initial CRD sync...
	I0501 04:16:55.931429    4352 command_runner.go:130] ! I0501 04:15:42.335378       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0501 04:16:55.931513    4352 command_runner.go:130] ! I0501 04:15:42.335424       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0501 04:16:55.931513    4352 command_runner.go:130] ! I0501 04:15:42.335517       1 available_controller.go:423] Starting AvailableConditionController
	I0501 04:16:55.931576    4352 command_runner.go:130] ! I0501 04:15:42.335533       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0501 04:16:55.931576    4352 command_runner.go:130] ! I0501 04:15:42.335556       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0501 04:16:55.931640    4352 command_runner.go:130] ! I0501 04:15:42.337835       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0501 04:16:55.931640    4352 command_runner.go:130] ! I0501 04:15:42.338196       1 controller.go:116] Starting legacy_token_tracking_controller
	I0501 04:16:55.931702    4352 command_runner.go:130] ! I0501 04:15:42.338360       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0501 04:16:55.931757    4352 command_runner.go:130] ! I0501 04:15:42.338519       1 controller.go:78] Starting OpenAPI AggregationController
	I0501 04:16:55.931757    4352 command_runner.go:130] ! I0501 04:15:42.339167       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0501 04:16:55.931819    4352 command_runner.go:130] ! I0501 04:15:42.339360       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0501 04:16:55.931819    4352 command_runner.go:130] ! I0501 04:15:42.339853       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0501 04:16:55.931875    4352 command_runner.go:130] ! I0501 04:15:42.361139       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0501 04:16:55.931938    4352 command_runner.go:130] ! I0501 04:15:42.361155       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0501 04:16:55.931938    4352 command_runner.go:130] ! I0501 04:15:42.361192       1 controller.go:139] Starting OpenAPI controller
	I0501 04:16:55.931994    4352 command_runner.go:130] ! I0501 04:15:42.361219       1 controller.go:87] Starting OpenAPI V3 controller
	I0501 04:16:55.931994    4352 command_runner.go:130] ! I0501 04:15:42.361233       1 naming_controller.go:291] Starting NamingConditionController
	I0501 04:16:55.931994    4352 command_runner.go:130] ! I0501 04:15:42.361253       1 establishing_controller.go:76] Starting EstablishingController
	I0501 04:16:55.932081    4352 command_runner.go:130] ! I0501 04:15:42.361274       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0501 04:16:55.932139    4352 command_runner.go:130] ! I0501 04:15:42.361288       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0501 04:16:55.932139    4352 command_runner.go:130] ! I0501 04:15:42.361301       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0501 04:16:55.932203    4352 command_runner.go:130] ! I0501 04:15:42.395816       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0501 04:16:55.932203    4352 command_runner.go:130] ! I0501 04:15:42.396242       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0501 04:16:55.932203    4352 command_runner.go:130] ! I0501 04:15:42.496145       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0501 04:16:55.932270    4352 command_runner.go:130] ! I0501 04:15:42.510644       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0501 04:16:55.932270    4352 command_runner.go:130] ! I0501 04:15:42.510702       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0501 04:16:55.932335    4352 command_runner.go:130] ! I0501 04:15:42.510859       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0501 04:16:55.932392    4352 command_runner.go:130] ! I0501 04:15:42.518082       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0501 04:16:55.932392    4352 command_runner.go:130] ! I0501 04:15:42.518718       1 aggregator.go:165] initial CRD sync complete...
	I0501 04:16:55.932392    4352 command_runner.go:130] ! I0501 04:15:42.518822       1 autoregister_controller.go:141] Starting autoregister controller
	I0501 04:16:55.932455    4352 command_runner.go:130] ! I0501 04:15:42.518833       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0501 04:16:55.932512    4352 command_runner.go:130] ! I0501 04:15:42.518839       1 cache.go:39] Caches are synced for autoregister controller
	I0501 04:16:55.932512    4352 command_runner.go:130] ! I0501 04:15:42.535654       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0501 04:16:55.932512    4352 command_runner.go:130] ! I0501 04:15:42.538744       1 shared_informer.go:320] Caches are synced for configmaps
	I0501 04:16:55.932576    4352 command_runner.go:130] ! I0501 04:15:42.553249       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0501 04:16:55.932576    4352 command_runner.go:130] ! I0501 04:15:42.558886       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0501 04:16:55.932640    4352 command_runner.go:130] ! I0501 04:15:42.560982       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0501 04:16:55.932701    4352 command_runner.go:130] ! I0501 04:15:42.561020       1 policy_source.go:224] refreshing policies
	I0501 04:16:55.932701    4352 command_runner.go:130] ! I0501 04:15:42.641630       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0501 04:16:55.932772    4352 command_runner.go:130] ! I0501 04:15:43.354880       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0501 04:16:55.932772    4352 command_runner.go:130] ! W0501 04:15:43.981051       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.28.209.199]
	I0501 04:16:55.932837    4352 command_runner.go:130] ! I0501 04:15:43.982709       1 controller.go:615] quota admission added evaluator for: endpoints
	I0501 04:16:55.932837    4352 command_runner.go:130] ! I0501 04:15:44.022518       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0501 04:16:55.932893    4352 command_runner.go:130] ! I0501 04:15:45.344677       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0501 04:16:55.932969    4352 command_runner.go:130] ! I0501 04:15:45.642753       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0501 04:16:55.932969    4352 command_runner.go:130] ! I0501 04:15:45.672938       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0501 04:16:55.933024    4352 command_runner.go:130] ! I0501 04:15:45.801984       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0501 04:16:55.933024    4352 command_runner.go:130] ! I0501 04:15:45.823813       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0501 04:16:55.942402    4352 logs.go:123] Gathering logs for etcd [34892fdb6898] ...
	I0501 04:16:55.942402    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34892fdb6898"
	I0501 04:16:55.972277    4352 command_runner.go:130] ! {"level":"warn","ts":"2024-05-01T04:15:38.997417Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0501 04:16:55.972776    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:38.998475Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.28.209.199:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.28.209.199:2380","--initial-cluster=multinode-289800=https://172.28.209.199:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.28.209.199:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.28.209.199:2380","--name=multinode-289800","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0501 04:16:55.973134    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:38.998558Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0501 04:16:55.973181    4352 command_runner.go:130] ! {"level":"warn","ts":"2024-05-01T04:15:38.998588Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0501 04:16:55.973181    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:38.998599Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.28.209.199:2380"]}
	I0501 04:16:55.973181    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:38.998626Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0501 04:16:55.973181    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.006405Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.28.209.199:2379"]}
	I0501 04:16:55.973181    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.007658Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-289800","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.28.209.199:2380"],"listen-peer-urls":["https://172.28.209.199:2380"],"advertise-client-urls":["https://172.28.209.199:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.28.209.199:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0501 04:16:55.973181    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.030589Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"21.951987ms"}
	I0501 04:16:55.973181    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.081537Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0501 04:16:55.973181    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.104039Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"d720844a1e03b483","local-member-id":"fe483b81e7b7d166","commit-index":2020}
	I0501 04:16:55.973181    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.104878Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 switched to configuration voters=()"}
	I0501 04:16:55.973181    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.105251Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 became follower at term 2"}
	I0501 04:16:55.973181    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.105519Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft fe483b81e7b7d166 [peers: [], term: 2, commit: 2020, applied: 0, lastindex: 2020, lastterm: 2]"}
	I0501 04:16:55.973181    4352 command_runner.go:130] ! {"level":"warn","ts":"2024-05-01T04:15:39.121672Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0501 04:16:55.973181    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.127575Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1352}
	I0501 04:16:55.973181    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.132217Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1744}
	I0501 04:16:55.973777    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.144206Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0501 04:16:55.973777    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.15993Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"fe483b81e7b7d166","timeout":"7s"}
	I0501 04:16:55.973841    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.160468Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"fe483b81e7b7d166"}
	I0501 04:16:55.973841    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.160545Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"fe483b81e7b7d166","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	I0501 04:16:55.973841    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.16402Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	I0501 04:16:55.973841    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.165851Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0501 04:16:55.973956    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.166004Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0501 04:16:55.973998    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.166021Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0501 04:16:55.973998    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.169808Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 switched to configuration voters=(18322960513081266534)"}
	I0501 04:16:55.974052    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.1699Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d720844a1e03b483","local-member-id":"fe483b81e7b7d166","added-peer-id":"fe483b81e7b7d166","added-peer-peer-urls":["https://172.28.209.152:2380"]}
	I0501 04:16:55.974094    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.172064Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d720844a1e03b483","local-member-id":"fe483b81e7b7d166","cluster-version":"3.5"}
	I0501 04:16:55.974094    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.172365Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0501 04:16:55.974139    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.184058Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0501 04:16:55.974238    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.184564Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"fe483b81e7b7d166","initial-advertise-peer-urls":["https://172.28.209.199:2380"],"listen-peer-urls":["https://172.28.209.199:2380"],"advertise-client-urls":["https://172.28.209.199:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.28.209.199:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0501 04:16:55.974238    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.184741Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0501 04:16:55.974291    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.185843Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.28.209.199:2380"}
	I0501 04:16:55.974291    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.185973Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.28.209.199:2380"}
	I0501 04:16:55.974332    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.708419Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 is starting a new election at term 2"}
	I0501 04:16:55.974332    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.70848Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 became pre-candidate at term 2"}
	I0501 04:16:55.974369    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.708514Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 received MsgPreVoteResp from fe483b81e7b7d166 at term 2"}
	I0501 04:16:55.974419    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.70853Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 became candidate at term 3"}
	I0501 04:16:55.974419    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.708552Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 received MsgVoteResp from fe483b81e7b7d166 at term 3"}
	I0501 04:16:55.974456    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.708562Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 became leader at term 3"}
	I0501 04:16:55.974456    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.708576Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fe483b81e7b7d166 elected leader fe483b81e7b7d166 at term 3"}
	I0501 04:16:55.974505    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.716912Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"fe483b81e7b7d166","local-member-attributes":"{Name:multinode-289800 ClientURLs:[https://172.28.209.199:2379]}","request-path":"/0/members/fe483b81e7b7d166/attributes","cluster-id":"d720844a1e03b483","publish-timeout":"7s"}
	I0501 04:16:55.974543    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.717064Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0501 04:16:55.974543    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.724343Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0501 04:16:55.974543    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.729592Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.28.209.199:2379"}
	I0501 04:16:55.974584    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.730744Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0501 04:16:55.974584    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.731057Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0501 04:16:55.974622    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.732147Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0501 04:16:55.982199    4352 logs.go:123] Gathering logs for kindnet [6d5f881ef398] ...
	I0501 04:16:55.982199    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d5f881ef398"
	I0501 04:16:56.024418    4352 command_runner.go:130] ! I0501 04:01:59.122485       1 main.go:227] handling current node
	I0501 04:16:56.025455    4352 command_runner.go:130] ! I0501 04:01:59.122501       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.025455    4352 command_runner.go:130] ! I0501 04:01:59.122510       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.025455    4352 command_runner.go:130] ! I0501 04:01:59.122690       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.025455    4352 command_runner.go:130] ! I0501 04:01:59.122722       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.025455    4352 command_runner.go:130] ! I0501 04:02:09.153658       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.025455    4352 command_runner.go:130] ! I0501 04:02:09.153775       1 main.go:227] handling current node
	I0501 04:16:56.025455    4352 command_runner.go:130] ! I0501 04:02:09.153793       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.025455    4352 command_runner.go:130] ! I0501 04:02:09.153803       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.025455    4352 command_runner.go:130] ! I0501 04:02:09.153946       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.025613    4352 command_runner.go:130] ! I0501 04:02:09.153980       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.025613    4352 command_runner.go:130] ! I0501 04:02:19.161031       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.025613    4352 command_runner.go:130] ! I0501 04:02:19.161061       1 main.go:227] handling current node
	I0501 04:16:56.025613    4352 command_runner.go:130] ! I0501 04:02:19.161073       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.025613    4352 command_runner.go:130] ! I0501 04:02:19.161079       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.025613    4352 command_runner.go:130] ! I0501 04:02:19.161177       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.025765    4352 command_runner.go:130] ! I0501 04:02:19.161185       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.025765    4352 command_runner.go:130] ! I0501 04:02:29.181653       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.025765    4352 command_runner.go:130] ! I0501 04:02:29.181721       1 main.go:227] handling current node
	I0501 04:16:56.025765    4352 command_runner.go:130] ! I0501 04:02:29.181735       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.025765    4352 command_runner.go:130] ! I0501 04:02:29.181742       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.025849    4352 command_runner.go:130] ! I0501 04:02:29.182277       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.025849    4352 command_runner.go:130] ! I0501 04:02:29.182369       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.025849    4352 command_runner.go:130] ! I0501 04:02:39.195902       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.025849    4352 command_runner.go:130] ! I0501 04:02:39.196079       1 main.go:227] handling current node
	I0501 04:16:56.025849    4352 command_runner.go:130] ! I0501 04:02:39.196095       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.025849    4352 command_runner.go:130] ! I0501 04:02:39.196105       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.025849    4352 command_runner.go:130] ! I0501 04:02:39.196558       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.025849    4352 command_runner.go:130] ! I0501 04:02:39.196649       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.025849    4352 command_runner.go:130] ! I0501 04:02:49.209858       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.025849    4352 command_runner.go:130] ! I0501 04:02:49.209973       1 main.go:227] handling current node
	I0501 04:16:56.026422    4352 command_runner.go:130] ! I0501 04:02:49.210027       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.026422    4352 command_runner.go:130] ! I0501 04:02:49.210041       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.026512    4352 command_runner.go:130] ! I0501 04:02:49.210461       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.026512    4352 command_runner.go:130] ! I0501 04:02:49.210617       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.026512    4352 command_runner.go:130] ! I0501 04:02:59.219550       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.026553    4352 command_runner.go:130] ! I0501 04:02:59.219615       1 main.go:227] handling current node
	I0501 04:16:56.026553    4352 command_runner.go:130] ! I0501 04:02:59.219631       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.026553    4352 command_runner.go:130] ! I0501 04:02:59.219638       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.026635    4352 command_runner.go:130] ! I0501 04:02:59.220333       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.026635    4352 command_runner.go:130] ! I0501 04:02:59.220436       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.026635    4352 command_runner.go:130] ! I0501 04:03:09.231302       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.026635    4352 command_runner.go:130] ! I0501 04:03:09.232437       1 main.go:227] handling current node
	I0501 04:16:56.026635    4352 command_runner.go:130] ! I0501 04:03:09.232648       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.026635    4352 command_runner.go:130] ! I0501 04:03:09.232851       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.026635    4352 command_runner.go:130] ! I0501 04:03:09.233578       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.026635    4352 command_runner.go:130] ! I0501 04:03:09.233631       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.026635    4352 command_runner.go:130] ! I0501 04:03:19.245975       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.026635    4352 command_runner.go:130] ! I0501 04:03:19.246060       1 main.go:227] handling current node
	I0501 04:16:56.026635    4352 command_runner.go:130] ! I0501 04:03:19.246073       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.026635    4352 command_runner.go:130] ! I0501 04:03:19.246081       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.028216    4352 command_runner.go:130] ! I0501 04:03:19.246386       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.028216    4352 command_runner.go:130] ! I0501 04:03:19.246423       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.028216    4352 command_runner.go:130] ! I0501 04:03:29.258941       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.028216    4352 command_runner.go:130] ! I0501 04:03:29.259020       1 main.go:227] handling current node
	I0501 04:16:56.028216    4352 command_runner.go:130] ! I0501 04:03:29.259036       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.028216    4352 command_runner.go:130] ! I0501 04:03:29.259044       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.028216    4352 command_runner.go:130] ! I0501 04:03:29.259485       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.028216    4352 command_runner.go:130] ! I0501 04:03:29.259520       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.028216    4352 command_runner.go:130] ! I0501 04:03:39.269941       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.028216    4352 command_runner.go:130] ! I0501 04:03:39.270129       1 main.go:227] handling current node
	I0501 04:16:56.028216    4352 command_runner.go:130] ! I0501 04:03:39.270152       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.028216    4352 command_runner.go:130] ! I0501 04:03:39.270161       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.028216    4352 command_runner.go:130] ! I0501 04:03:39.270403       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.028216    4352 command_runner.go:130] ! I0501 04:03:39.270438       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.028216    4352 command_runner.go:130] ! I0501 04:03:49.282880       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.028216    4352 command_runner.go:130] ! I0501 04:03:49.283025       1 main.go:227] handling current node
	I0501 04:16:56.028216    4352 command_runner.go:130] ! I0501 04:03:49.283045       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.028216    4352 command_runner.go:130] ! I0501 04:03:49.283054       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.028216    4352 command_runner.go:130] ! I0501 04:03:49.283773       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.028216    4352 command_runner.go:130] ! I0501 04:03:49.283792       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.028216    4352 command_runner.go:130] ! I0501 04:03:59.297110       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.028762    4352 command_runner.go:130] ! I0501 04:03:59.297155       1 main.go:227] handling current node
	I0501 04:16:56.028762    4352 command_runner.go:130] ! I0501 04:03:59.297169       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.028762    4352 command_runner.go:130] ! I0501 04:03:59.297177       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.028762    4352 command_runner.go:130] ! I0501 04:03:59.297656       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.028762    4352 command_runner.go:130] ! I0501 04:03:59.297688       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.028762    4352 command_runner.go:130] ! I0501 04:04:09.310638       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.028762    4352 command_runner.go:130] ! I0501 04:04:09.311476       1 main.go:227] handling current node
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:09.311969       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:09.312340       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:09.313291       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:09.313332       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:19.324939       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:19.325084       1 main.go:227] handling current node
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:19.325480       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:19.325493       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:19.325923       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:19.326083       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:29.332468       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:29.332576       1 main.go:227] handling current node
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:29.332619       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:29.332645       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:29.332818       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:29.332831       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:39.342867       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:39.342901       1 main.go:227] handling current node
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:39.342914       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:39.342921       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:39.343433       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:39.343593       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:49.364771       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:49.364905       1 main.go:227] handling current node
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:49.364921       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:49.364930       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:49.365166       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:49.365205       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:59.379243       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:59.379352       1 main.go:227] handling current node
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:59.379369       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:59.379377       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:59.379531       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:59.379564       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.029410    4352 command_runner.go:130] ! I0501 04:05:09.389743       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.029410    4352 command_runner.go:130] ! I0501 04:05:09.390518       1 main.go:227] handling current node
	I0501 04:16:56.029410    4352 command_runner.go:130] ! I0501 04:05:09.390622       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.029410    4352 command_runner.go:130] ! I0501 04:05:09.390636       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.029410    4352 command_runner.go:130] ! I0501 04:05:09.390894       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.029410    4352 command_runner.go:130] ! I0501 04:05:09.391049       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.029410    4352 command_runner.go:130] ! I0501 04:05:19.400837       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.029410    4352 command_runner.go:130] ! I0501 04:05:19.401285       1 main.go:227] handling current node
	I0501 04:16:56.029569    4352 command_runner.go:130] ! I0501 04:05:19.401439       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.029594    4352 command_runner.go:130] ! I0501 04:05:19.401572       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.029594    4352 command_runner.go:130] ! I0501 04:05:19.401956       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.029667    4352 command_runner.go:130] ! I0501 04:05:19.402136       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.029667    4352 command_runner.go:130] ! I0501 04:05:29.422040       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.029667    4352 command_runner.go:130] ! I0501 04:05:29.422249       1 main.go:227] handling current node
	I0501 04:16:56.029667    4352 command_runner.go:130] ! I0501 04:05:29.422285       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.029667    4352 command_runner.go:130] ! I0501 04:05:29.422311       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.029667    4352 command_runner.go:130] ! I0501 04:05:29.422521       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.029827    4352 command_runner.go:130] ! I0501 04:05:29.422723       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.029849    4352 command_runner.go:130] ! I0501 04:05:39.429807       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.029941    4352 command_runner.go:130] ! I0501 04:05:39.429856       1 main.go:227] handling current node
	I0501 04:16:56.029996    4352 command_runner.go:130] ! I0501 04:05:39.429874       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.029996    4352 command_runner.go:130] ! I0501 04:05:39.429881       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.030215    4352 command_runner.go:130] ! I0501 04:05:39.430903       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.030215    4352 command_runner.go:130] ! I0501 04:05:39.431340       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.030318    4352 command_runner.go:130] ! I0501 04:05:49.445455       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.030318    4352 command_runner.go:130] ! I0501 04:05:49.445594       1 main.go:227] handling current node
	I0501 04:16:56.030365    4352 command_runner.go:130] ! I0501 04:05:49.445610       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.030404    4352 command_runner.go:130] ! I0501 04:05:49.445619       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.030404    4352 command_runner.go:130] ! I0501 04:05:49.445751       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.030404    4352 command_runner.go:130] ! I0501 04:05:49.445765       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.030404    4352 command_runner.go:130] ! I0501 04:05:59.461135       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.030404    4352 command_runner.go:130] ! I0501 04:05:59.461248       1 main.go:227] handling current node
	I0501 04:16:56.030544    4352 command_runner.go:130] ! I0501 04:05:59.461264       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.030544    4352 command_runner.go:130] ! I0501 04:05:59.461273       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.030544    4352 command_runner.go:130] ! I0501 04:05:59.461947       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.030614    4352 command_runner.go:130] ! I0501 04:05:59.462094       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.030614    4352 command_runner.go:130] ! I0501 04:06:09.469509       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.030640    4352 command_runner.go:130] ! I0501 04:06:09.469615       1 main.go:227] handling current node
	I0501 04:16:56.030682    4352 command_runner.go:130] ! I0501 04:06:09.469636       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.030682    4352 command_runner.go:130] ! I0501 04:06:09.469646       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.030682    4352 command_runner.go:130] ! I0501 04:06:09.470218       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.030734    4352 command_runner.go:130] ! I0501 04:06:09.470387       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.030734    4352 command_runner.go:130] ! I0501 04:06:19.486501       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.030734    4352 command_runner.go:130] ! I0501 04:06:19.486605       1 main.go:227] handling current node
	I0501 04:16:56.030734    4352 command_runner.go:130] ! I0501 04:06:19.486621       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.030734    4352 command_runner.go:130] ! I0501 04:06:19.486629       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.030795    4352 command_runner.go:130] ! I0501 04:06:19.486864       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.030795    4352 command_runner.go:130] ! I0501 04:06:19.486946       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.030795    4352 command_runner.go:130] ! I0501 04:06:29.503311       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.030795    4352 command_runner.go:130] ! I0501 04:06:29.503476       1 main.go:227] handling current node
	I0501 04:16:56.030795    4352 command_runner.go:130] ! I0501 04:06:29.503492       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.030864    4352 command_runner.go:130] ! I0501 04:06:29.503503       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.030864    4352 command_runner.go:130] ! I0501 04:06:29.503633       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.030864    4352 command_runner.go:130] ! I0501 04:06:29.503843       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.030864    4352 command_runner.go:130] ! I0501 04:06:39.528749       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.030864    4352 command_runner.go:130] ! I0501 04:06:39.528837       1 main.go:227] handling current node
	I0501 04:16:56.030864    4352 command_runner.go:130] ! I0501 04:06:39.528853       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.030951    4352 command_runner.go:130] ! I0501 04:06:39.528861       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.030951    4352 command_runner.go:130] ! I0501 04:06:39.529235       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.030951    4352 command_runner.go:130] ! I0501 04:06:39.529373       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.030951    4352 command_runner.go:130] ! I0501 04:06:49.535984       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.030951    4352 command_runner.go:130] ! I0501 04:06:49.536067       1 main.go:227] handling current node
	I0501 04:16:56.031029    4352 command_runner.go:130] ! I0501 04:06:49.536082       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.031029    4352 command_runner.go:130] ! I0501 04:06:49.536092       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.031029    4352 command_runner.go:130] ! I0501 04:06:49.536689       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.031029    4352 command_runner.go:130] ! I0501 04:06:49.536802       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.031029    4352 command_runner.go:130] ! I0501 04:06:59.550480       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.031029    4352 command_runner.go:130] ! I0501 04:06:59.551072       1 main.go:227] handling current node
	I0501 04:16:56.031101    4352 command_runner.go:130] ! I0501 04:06:59.551257       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.031101    4352 command_runner.go:130] ! I0501 04:06:59.551358       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.031101    4352 command_runner.go:130] ! I0501 04:06:59.551696       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.031101    4352 command_runner.go:130] ! I0501 04:06:59.551781       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.031101    4352 command_runner.go:130] ! I0501 04:07:09.569460       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.031187    4352 command_runner.go:130] ! I0501 04:07:09.569627       1 main.go:227] handling current node
	I0501 04:16:56.031248    4352 command_runner.go:130] ! I0501 04:07:09.569642       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.031248    4352 command_runner.go:130] ! I0501 04:07:09.569651       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.031248    4352 command_runner.go:130] ! I0501 04:07:09.570296       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.031248    4352 command_runner.go:130] ! I0501 04:07:09.570434       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.031248    4352 command_runner.go:130] ! I0501 04:07:19.577507       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.031330    4352 command_runner.go:130] ! I0501 04:07:19.577599       1 main.go:227] handling current node
	I0501 04:16:56.031330    4352 command_runner.go:130] ! I0501 04:07:19.577615       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.031330    4352 command_runner.go:130] ! I0501 04:07:19.577730       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.031373    4352 command_runner.go:130] ! I0501 04:07:19.578102       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.031373    4352 command_runner.go:130] ! I0501 04:07:19.578208       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.031422    4352 command_runner.go:130] ! I0501 04:07:29.592703       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.031422    4352 command_runner.go:130] ! I0501 04:07:29.592845       1 main.go:227] handling current node
	I0501 04:16:56.031422    4352 command_runner.go:130] ! I0501 04:07:29.592861       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.031422    4352 command_runner.go:130] ! I0501 04:07:29.592869       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.031422    4352 command_runner.go:130] ! I0501 04:07:29.593139       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.031422    4352 command_runner.go:130] ! I0501 04:07:29.593174       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.031640    4352 command_runner.go:130] ! I0501 04:07:39.602034       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.031800    4352 command_runner.go:130] ! I0501 04:07:39.602064       1 main.go:227] handling current node
	I0501 04:16:56.031877    4352 command_runner.go:130] ! I0501 04:07:39.602077       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.031877    4352 command_runner.go:130] ! I0501 04:07:39.602084       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.031877    4352 command_runner.go:130] ! I0501 04:07:39.602283       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.031877    4352 command_runner.go:130] ! I0501 04:07:39.602300       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.031877    4352 command_runner.go:130] ! I0501 04:07:49.837563       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.032093    4352 command_runner.go:130] ! I0501 04:07:49.837638       1 main.go:227] handling current node
	I0501 04:16:56.032179    4352 command_runner.go:130] ! I0501 04:07:49.837652       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.032179    4352 command_runner.go:130] ! I0501 04:07:49.837660       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.032332    4352 command_runner.go:130] ! I0501 04:07:49.837875       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.032332    4352 command_runner.go:130] ! I0501 04:07:49.837955       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.032332    4352 command_runner.go:130] ! I0501 04:07:59.851818       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.032332    4352 command_runner.go:130] ! I0501 04:07:59.852109       1 main.go:227] handling current node
	I0501 04:16:56.032332    4352 command_runner.go:130] ! I0501 04:07:59.852127       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.032408    4352 command_runner.go:130] ! I0501 04:07:59.852753       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.032408    4352 command_runner.go:130] ! I0501 04:07:59.853129       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.032408    4352 command_runner.go:130] ! I0501 04:07:59.853164       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.032408    4352 command_runner.go:130] ! I0501 04:08:09.860338       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.032408    4352 command_runner.go:130] ! I0501 04:08:09.860453       1 main.go:227] handling current node
	I0501 04:16:56.032475    4352 command_runner.go:130] ! I0501 04:08:09.860472       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.032475    4352 command_runner.go:130] ! I0501 04:08:09.860482       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.032475    4352 command_runner.go:130] ! I0501 04:08:09.860626       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.032475    4352 command_runner.go:130] ! I0501 04:08:09.861316       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.032475    4352 command_runner.go:130] ! I0501 04:08:19.877403       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.032542    4352 command_runner.go:130] ! I0501 04:08:19.877515       1 main.go:227] handling current node
	I0501 04:16:56.032542    4352 command_runner.go:130] ! I0501 04:08:19.877530       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.032542    4352 command_runner.go:130] ! I0501 04:08:19.877538       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.032542    4352 command_runner.go:130] ! I0501 04:08:19.877838       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.032542    4352 command_runner.go:130] ! I0501 04:08:19.877874       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.032610    4352 command_runner.go:130] ! I0501 04:08:29.892899       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.032610    4352 command_runner.go:130] ! I0501 04:08:29.892926       1 main.go:227] handling current node
	I0501 04:16:56.032610    4352 command_runner.go:130] ! I0501 04:08:29.892937       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.032610    4352 command_runner.go:130] ! I0501 04:08:29.892944       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.032610    4352 command_runner.go:130] ! I0501 04:08:29.893106       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.032695    4352 command_runner.go:130] ! I0501 04:08:29.893180       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.032695    4352 command_runner.go:130] ! I0501 04:08:39.901877       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.032695    4352 command_runner.go:130] ! I0501 04:08:39.901929       1 main.go:227] handling current node
	I0501 04:16:56.032695    4352 command_runner.go:130] ! I0501 04:08:39.901943       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.032763    4352 command_runner.go:130] ! I0501 04:08:39.901951       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.032763    4352 command_runner.go:130] ! I0501 04:08:39.902578       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.032763    4352 command_runner.go:130] ! I0501 04:08:39.902678       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.032763    4352 command_runner.go:130] ! I0501 04:08:49.918941       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.032763    4352 command_runner.go:130] ! I0501 04:08:49.919115       1 main.go:227] handling current node
	I0501 04:16:56.032829    4352 command_runner.go:130] ! I0501 04:08:49.919130       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.032829    4352 command_runner.go:130] ! I0501 04:08:49.919139       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.032829    4352 command_runner.go:130] ! I0501 04:08:49.919950       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.032829    4352 command_runner.go:130] ! I0501 04:08:49.919968       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.032906    4352 command_runner.go:130] ! I0501 04:08:59.933101       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.032906    4352 command_runner.go:130] ! I0501 04:08:59.933154       1 main.go:227] handling current node
	I0501 04:16:56.032906    4352 command_runner.go:130] ! I0501 04:08:59.933648       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.032906    4352 command_runner.go:130] ! I0501 04:08:59.933667       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.032973    4352 command_runner.go:130] ! I0501 04:08:59.934094       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.032973    4352 command_runner.go:130] ! I0501 04:08:59.934127       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.032973    4352 command_runner.go:130] ! I0501 04:09:09.948569       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.032973    4352 command_runner.go:130] ! I0501 04:09:09.948615       1 main.go:227] handling current node
	I0501 04:16:56.033034    4352 command_runner.go:130] ! I0501 04:09:09.948629       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.033034    4352 command_runner.go:130] ! I0501 04:09:09.948637       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.033034    4352 command_runner.go:130] ! I0501 04:09:09.949057       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.033034    4352 command_runner.go:130] ! I0501 04:09:09.949076       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.033034    4352 command_runner.go:130] ! I0501 04:09:19.958099       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.033096    4352 command_runner.go:130] ! I0501 04:09:19.958261       1 main.go:227] handling current node
	I0501 04:16:56.033096    4352 command_runner.go:130] ! I0501 04:09:19.958282       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.033096    4352 command_runner.go:130] ! I0501 04:09:19.958294       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.033146    4352 command_runner.go:130] ! I0501 04:09:19.958880       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.033146    4352 command_runner.go:130] ! I0501 04:09:19.959055       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.033146    4352 command_runner.go:130] ! I0501 04:09:29.975626       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.033146    4352 command_runner.go:130] ! I0501 04:09:29.975765       1 main.go:227] handling current node
	I0501 04:16:56.033201    4352 command_runner.go:130] ! I0501 04:09:29.975790       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.033201    4352 command_runner.go:130] ! I0501 04:09:29.975803       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.033201    4352 command_runner.go:130] ! I0501 04:09:29.976360       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.033201    4352 command_runner.go:130] ! I0501 04:09:29.976488       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.033201    4352 command_runner.go:130] ! I0501 04:09:39.985296       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.033267    4352 command_runner.go:130] ! I0501 04:09:39.985455       1 main.go:227] handling current node
	I0501 04:16:56.033267    4352 command_runner.go:130] ! I0501 04:09:39.985488       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.033267    4352 command_runner.go:130] ! I0501 04:09:39.985497       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.033267    4352 command_runner.go:130] ! I0501 04:09:39.986552       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.033267    4352 command_runner.go:130] ! I0501 04:09:39.986590       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.033337    4352 command_runner.go:130] ! I0501 04:09:49.995944       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.033337    4352 command_runner.go:130] ! I0501 04:09:49.996021       1 main.go:227] handling current node
	I0501 04:16:56.033337    4352 command_runner.go:130] ! I0501 04:09:49.996036       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.033337    4352 command_runner.go:130] ! I0501 04:09:49.996044       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.033337    4352 command_runner.go:130] ! I0501 04:09:49.996649       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.033337    4352 command_runner.go:130] ! I0501 04:09:49.996720       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.033337    4352 command_runner.go:130] ! I0501 04:10:00.003190       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.033337    4352 command_runner.go:130] ! I0501 04:10:00.003239       1 main.go:227] handling current node
	I0501 04:16:56.033337    4352 command_runner.go:130] ! I0501 04:10:00.003253       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.033337    4352 command_runner.go:130] ! I0501 04:10:00.003261       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.033337    4352 command_runner.go:130] ! I0501 04:10:00.003479       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.033483    4352 command_runner.go:130] ! I0501 04:10:00.003516       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.033483    4352 command_runner.go:130] ! I0501 04:10:10.023328       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.033483    4352 command_runner.go:130] ! I0501 04:10:10.023430       1 main.go:227] handling current node
	I0501 04:16:56.033483    4352 command_runner.go:130] ! I0501 04:10:10.023445       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.033483    4352 command_runner.go:130] ! I0501 04:10:10.023460       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.033548    4352 command_runner.go:130] ! I0501 04:10:10.023613       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.033548    4352 command_runner.go:130] ! I0501 04:10:10.023647       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.033548    4352 command_runner.go:130] ! I0501 04:10:20.030526       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.033548    4352 command_runner.go:130] ! I0501 04:10:20.030616       1 main.go:227] handling current node
	I0501 04:16:56.033548    4352 command_runner.go:130] ! I0501 04:10:20.030632       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.033548    4352 command_runner.go:130] ! I0501 04:10:20.030641       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.033548    4352 command_runner.go:130] ! I0501 04:10:20.030856       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.033548    4352 command_runner.go:130] ! I0501 04:10:20.030980       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.033548    4352 command_runner.go:130] ! I0501 04:10:30.038164       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.033548    4352 command_runner.go:130] ! I0501 04:10:30.038263       1 main.go:227] handling current node
	I0501 04:16:56.033548    4352 command_runner.go:130] ! I0501 04:10:30.038278       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.033548    4352 command_runner.go:130] ! I0501 04:10:30.038287       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.033548    4352 command_runner.go:130] ! I0501 04:10:30.038931       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.033684    4352 command_runner.go:130] ! I0501 04:10:30.039072       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.033684    4352 command_runner.go:130] ! I0501 04:10:40.053866       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.033684    4352 command_runner.go:130] ! I0501 04:10:40.053915       1 main.go:227] handling current node
	I0501 04:16:56.033684    4352 command_runner.go:130] ! I0501 04:10:40.053929       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.033757    4352 command_runner.go:130] ! I0501 04:10:40.053936       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.033757    4352 command_runner.go:130] ! I0501 04:10:40.054259       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.033757    4352 command_runner.go:130] ! I0501 04:10:40.054295       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.033757    4352 command_runner.go:130] ! I0501 04:10:50.066490       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.033757    4352 command_runner.go:130] ! I0501 04:10:50.066542       1 main.go:227] handling current node
	I0501 04:16:56.033757    4352 command_runner.go:130] ! I0501 04:10:50.066560       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.033757    4352 command_runner.go:130] ! I0501 04:10:50.066567       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.033757    4352 command_runner.go:130] ! I0501 04:10:50.067066       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.033757    4352 command_runner.go:130] ! I0501 04:10:50.067210       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.033757    4352 command_runner.go:130] ! I0501 04:11:00.075901       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.033757    4352 command_runner.go:130] ! I0501 04:11:00.076052       1 main.go:227] handling current node
	I0501 04:16:56.033914    4352 command_runner.go:130] ! I0501 04:11:00.076069       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.033914    4352 command_runner.go:130] ! I0501 04:11:00.076078       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.033956    4352 command_runner.go:130] ! I0501 04:11:10.087907       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.033956    4352 command_runner.go:130] ! I0501 04:11:10.088124       1 main.go:227] handling current node
	I0501 04:16:56.033956    4352 command_runner.go:130] ! I0501 04:11:10.088140       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.033997    4352 command_runner.go:130] ! I0501 04:11:10.088148       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.033997    4352 command_runner.go:130] ! I0501 04:11:10.088875       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:56.033997    4352 command_runner.go:130] ! I0501 04:11:10.088954       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:56.034047    4352 command_runner.go:130] ! I0501 04:11:10.089178       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.28.223.145 Flags: [] Table: 0} 
	I0501 04:16:56.034047    4352 command_runner.go:130] ! I0501 04:11:20.103399       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.034047    4352 command_runner.go:130] ! I0501 04:11:20.103511       1 main.go:227] handling current node
	I0501 04:16:56.034047    4352 command_runner.go:130] ! I0501 04:11:20.103528       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.034047    4352 command_runner.go:130] ! I0501 04:11:20.103538       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.034047    4352 command_runner.go:130] ! I0501 04:11:20.103879       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:56.034047    4352 command_runner.go:130] ! I0501 04:11:20.103916       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:56.034047    4352 command_runner.go:130] ! I0501 04:11:30.114473       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.034047    4352 command_runner.go:130] ! I0501 04:11:30.115083       1 main.go:227] handling current node
	I0501 04:16:56.034174    4352 command_runner.go:130] ! I0501 04:11:30.115256       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.034174    4352 command_runner.go:130] ! I0501 04:11:30.115463       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.034174    4352 command_runner.go:130] ! I0501 04:11:30.116474       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:56.034174    4352 command_runner.go:130] ! I0501 04:11:30.116611       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:56.034174    4352 command_runner.go:130] ! I0501 04:11:40.124324       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.034250    4352 command_runner.go:130] ! I0501 04:11:40.124371       1 main.go:227] handling current node
	I0501 04:16:56.034250    4352 command_runner.go:130] ! I0501 04:11:40.124384       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.034250    4352 command_runner.go:130] ! I0501 04:11:40.124392       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.034250    4352 command_runner.go:130] ! I0501 04:11:40.124558       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:56.034250    4352 command_runner.go:130] ! I0501 04:11:40.124570       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:56.034317    4352 command_runner.go:130] ! I0501 04:11:50.138059       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.034317    4352 command_runner.go:130] ! I0501 04:11:50.138102       1 main.go:227] handling current node
	I0501 04:16:56.034317    4352 command_runner.go:130] ! I0501 04:11:50.138116       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.034384    4352 command_runner.go:130] ! I0501 04:11:50.138123       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.034384    4352 command_runner.go:130] ! I0501 04:11:50.138826       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:56.034384    4352 command_runner.go:130] ! I0501 04:11:50.138936       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:56.034384    4352 command_runner.go:130] ! I0501 04:12:00.155704       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.034445    4352 command_runner.go:130] ! I0501 04:12:00.155799       1 main.go:227] handling current node
	I0501 04:16:56.034445    4352 command_runner.go:130] ! I0501 04:12:00.155823       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.034445    4352 command_runner.go:130] ! I0501 04:12:00.155832       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.034496    4352 command_runner.go:130] ! I0501 04:12:00.156502       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:56.034496    4352 command_runner.go:130] ! I0501 04:12:00.156549       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:56.034496    4352 command_runner.go:130] ! I0501 04:12:10.164706       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.034496    4352 command_runner.go:130] ! I0501 04:12:10.164754       1 main.go:227] handling current node
	I0501 04:16:56.034496    4352 command_runner.go:130] ! I0501 04:12:10.164767       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.034496    4352 command_runner.go:130] ! I0501 04:12:10.164774       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.034564    4352 command_runner.go:130] ! I0501 04:12:10.164887       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:56.034564    4352 command_runner.go:130] ! I0501 04:12:10.165094       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:56.034564    4352 command_runner.go:130] ! I0501 04:12:20.178957       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.034625    4352 command_runner.go:130] ! I0501 04:12:20.179142       1 main.go:227] handling current node
	I0501 04:16:56.034625    4352 command_runner.go:130] ! I0501 04:12:20.179159       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.034625    4352 command_runner.go:130] ! I0501 04:12:20.179178       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.034714    4352 command_runner.go:130] ! I0501 04:12:20.179694       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:56.034762    4352 command_runner.go:130] ! I0501 04:12:20.179871       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:56.034762    4352 command_runner.go:130] ! I0501 04:12:30.195829       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.034762    4352 command_runner.go:130] ! I0501 04:12:30.196251       1 main.go:227] handling current node
	I0501 04:16:56.034804    4352 command_runner.go:130] ! I0501 04:12:30.196390       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.034804    4352 command_runner.go:130] ! I0501 04:12:30.196494       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.034804    4352 command_runner.go:130] ! I0501 04:12:30.197097       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:56.034858    4352 command_runner.go:130] ! I0501 04:12:30.197115       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:56.034858    4352 command_runner.go:130] ! I0501 04:12:40.209828       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.034858    4352 command_runner.go:130] ! I0501 04:12:40.210095       1 main.go:227] handling current node
	I0501 04:16:56.034900    4352 command_runner.go:130] ! I0501 04:12:40.210203       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.034900    4352 command_runner.go:130] ! I0501 04:12:40.210235       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.034900    4352 command_runner.go:130] ! I0501 04:12:40.210464       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:56.034900    4352 command_runner.go:130] ! I0501 04:12:40.210571       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:56.034954    4352 command_runner.go:130] ! I0501 04:12:50.223457       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.034954    4352 command_runner.go:130] ! I0501 04:12:50.224132       1 main.go:227] handling current node
	I0501 04:16:56.034954    4352 command_runner.go:130] ! I0501 04:12:50.224156       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.034954    4352 command_runner.go:130] ! I0501 04:12:50.224167       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.034954    4352 command_runner.go:130] ! I0501 04:12:50.224602       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:56.035005    4352 command_runner.go:130] ! I0501 04:12:50.224704       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:56.035005    4352 command_runner.go:130] ! I0501 04:13:00.241709       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.035005    4352 command_runner.go:130] ! I0501 04:13:00.241841       1 main.go:227] handling current node
	I0501 04:16:56.035040    4352 command_runner.go:130] ! I0501 04:13:00.242114       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.035040    4352 command_runner.go:130] ! I0501 04:13:00.242393       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.035040    4352 command_runner.go:130] ! I0501 04:13:00.242840       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:56.035040    4352 command_runner.go:130] ! I0501 04:13:00.242886       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
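
Two things stand out in the kindnet output above: the handler loop wakes roughly every ten seconds and walks all three nodes, and multinode-289800-m03 drops out of the 04:11:00 pass entirely, then reappears at 04:11:10 with a new address (172.28.223.145 instead of 172.28.217.21) and a new pod CIDR (10.244.3.0/24 instead of 10.244.2.0/24), at which point routes.go:62 reports adding the route {Dst: 10.244.3.0/24, Gw: 172.28.223.145}. The sketch below illustrates that kind of route programming with the vishvananda/netlink package; it is a minimal illustration of the pattern the log shows, not kindnet's actual code, and it needs root on Linux.

package main

import (
	"log"
	"net"

	"github.com/vishvananda/netlink"
)

func main() {
	// Remote node's pod CIDR, as in "Node multinode-289800-m03 has CIDR [10.244.3.0/24]".
	_, podCIDR, err := net.ParseCIDR("10.244.3.0/24")
	if err != nil {
		log.Fatal(err)
	}
	// Remote node's InternalIP, as in "Handling node with IPs: map[172.28.223.145:{}]".
	gw := net.ParseIP("172.28.223.145")

	// Program (or refresh) the route to the remote pod network via the node IP.
	if err := netlink.RouteReplace(&netlink.Route{Dst: podCIDR, Gw: gw}); err != nil {
		log.Fatalf("replace route to %s via %s: %v", podCIDR, gw, err)
	}
	log.Printf("route to %s via %s in place", podCIDR, gw)
}

Because RouteReplace is idempotent, a periodic reconcile loop like the one the timestamps above suggest can call it on every tick without special-casing routes that already exist.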
	I0501 04:16:56.057018    4352 logs.go:123] Gathering logs for describe nodes ...
	I0501 04:16:56.057018    4352 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0501 04:16:56.308494    4352 command_runner.go:130] > Name:               multinode-289800
	I0501 04:16:56.308494    4352 command_runner.go:130] > Roles:              control-plane
	I0501 04:16:56.308494    4352 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0501 04:16:56.308494    4352 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0501 04:16:56.308494    4352 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0501 04:16:56.308494    4352 command_runner.go:130] >                     kubernetes.io/hostname=multinode-289800
	I0501 04:16:56.308494    4352 command_runner.go:130] >                     kubernetes.io/os=linux
	I0501 04:16:56.308494    4352 command_runner.go:130] >                     minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	I0501 04:16:56.308494    4352 command_runner.go:130] >                     minikube.k8s.io/name=multinode-289800
	I0501 04:16:56.308494    4352 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0501 04:16:56.308494    4352 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_05_01T03_52_17_0700
	I0501 04:16:56.308494    4352 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0
	I0501 04:16:56.308494    4352 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0501 04:16:56.308494    4352 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0501 04:16:56.308494    4352 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0501 04:16:56.308494    4352 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0501 04:16:56.308494    4352 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0501 04:16:56.308494    4352 command_runner.go:130] > CreationTimestamp:  Wed, 01 May 2024 03:52:12 +0000
	I0501 04:16:56.308494    4352 command_runner.go:130] > Taints:             <none>
	I0501 04:16:56.308494    4352 command_runner.go:130] > Unschedulable:      false
	I0501 04:16:56.308494    4352 command_runner.go:130] > Lease:
	I0501 04:16:56.308494    4352 command_runner.go:130] >   HolderIdentity:  multinode-289800
	I0501 04:16:56.308494    4352 command_runner.go:130] >   AcquireTime:     <unset>
	I0501 04:16:56.308494    4352 command_runner.go:130] >   RenewTime:       Wed, 01 May 2024 04:16:53 +0000
	I0501 04:16:56.308494    4352 command_runner.go:130] > Conditions:
	I0501 04:16:56.308494    4352 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0501 04:16:56.308494    4352 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0501 04:16:56.308494    4352 command_runner.go:130] >   MemoryPressure   False   Wed, 01 May 2024 04:16:16 +0000   Wed, 01 May 2024 03:52:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0501 04:16:56.308494    4352 command_runner.go:130] >   DiskPressure     False   Wed, 01 May 2024 04:16:16 +0000   Wed, 01 May 2024 03:52:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0501 04:16:56.308494    4352 command_runner.go:130] >   PIDPressure      False   Wed, 01 May 2024 04:16:16 +0000   Wed, 01 May 2024 03:52:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0501 04:16:56.308494    4352 command_runner.go:130] >   Ready            True    Wed, 01 May 2024 04:16:16 +0000   Wed, 01 May 2024 04:16:16 +0000   KubeletReady                 kubelet is posting ready status
	I0501 04:16:56.308494    4352 command_runner.go:130] > Addresses:
	I0501 04:16:56.308494    4352 command_runner.go:130] >   InternalIP:  172.28.209.199
	I0501 04:16:56.308494    4352 command_runner.go:130] >   Hostname:    multinode-289800
	I0501 04:16:56.308494    4352 command_runner.go:130] > Capacity:
	I0501 04:16:56.309052    4352 command_runner.go:130] >   cpu:                2
	I0501 04:16:56.309052    4352 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0501 04:16:56.309052    4352 command_runner.go:130] >   hugepages-2Mi:      0
	I0501 04:16:56.309100    4352 command_runner.go:130] >   memory:             2164264Ki
	I0501 04:16:56.309100    4352 command_runner.go:130] >   pods:               110
	I0501 04:16:56.309100    4352 command_runner.go:130] > Allocatable:
	I0501 04:16:56.309100    4352 command_runner.go:130] >   cpu:                2
	I0501 04:16:56.309100    4352 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0501 04:16:56.309158    4352 command_runner.go:130] >   hugepages-2Mi:      0
	I0501 04:16:56.309158    4352 command_runner.go:130] >   memory:             2164264Ki
	I0501 04:16:56.309158    4352 command_runner.go:130] >   pods:               110
	I0501 04:16:56.309158    4352 command_runner.go:130] > System Info:
	I0501 04:16:56.309200    4352 command_runner.go:130] >   Machine ID:                 f135d6c1a75448b6b1c169fdf59297ca
	I0501 04:16:56.309230    4352 command_runner.go:130] >   System UUID:                3951d3b5-ddd4-174a-8cfe-7f86ac2b780b
	I0501 04:16:56.309245    4352 command_runner.go:130] >   Boot ID:                    e7d6b770-0c88-4d74-8b75-d55dec0d45be
	I0501 04:16:56.309245    4352 command_runner.go:130] >   Kernel Version:             5.10.207
	I0501 04:16:56.309271    4352 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0501 04:16:56.309271    4352 command_runner.go:130] >   Operating System:           linux
	I0501 04:16:56.309300    4352 command_runner.go:130] >   Architecture:               amd64
	I0501 04:16:56.309300    4352 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0501 04:16:56.309300    4352 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0501 04:16:56.309300    4352 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0501 04:16:56.309347    4352 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0501 04:16:56.309347    4352 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0501 04:16:56.309347    4352 command_runner.go:130] > Non-terminated Pods:          (10 in total)
	I0501 04:16:56.309403    4352 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0501 04:16:56.309403    4352 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0501 04:16:56.309445    4352 command_runner.go:130] >   default                     busybox-fc5497c4f-cc6mk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0501 04:16:56.309445    4352 command_runner.go:130] >   kube-system                 coredns-7db6d8ff4d-8w9hq                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	I0501 04:16:56.309484    4352 command_runner.go:130] >   kube-system                 coredns-7db6d8ff4d-x9zrw                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	I0501 04:16:56.309525    4352 command_runner.go:130] >   kube-system                 etcd-multinode-289800                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         74s
	I0501 04:16:56.309525    4352 command_runner.go:130] >   kube-system                 kindnet-vcxkr                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	I0501 04:16:56.309564    4352 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-289800             250m (12%)    0 (0%)      0 (0%)           0 (0%)         74s
	I0501 04:16:56.309605    4352 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-289800    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I0501 04:16:56.309605    4352 command_runner.go:130] >   kube-system                 kube-proxy-bp9zx                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I0501 04:16:56.309643    4352 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-289800             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I0501 04:16:56.309643    4352 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I0501 04:16:56.309643    4352 command_runner.go:130] > Allocated resources:
	I0501 04:16:56.309686    4352 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0501 04:16:56.309686    4352 command_runner.go:130] >   Resource           Requests     Limits
	I0501 04:16:56.309686    4352 command_runner.go:130] >   --------           --------     ------
	I0501 04:16:56.309725    4352 command_runner.go:130] >   cpu                950m (47%)   100m (5%)
	I0501 04:16:56.309725    4352 command_runner.go:130] >   memory             290Mi (13%)  390Mi (18%)
	I0501 04:16:56.309725    4352 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0501 04:16:56.309725    4352 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0501 04:16:56.309725    4352 command_runner.go:130] > Events:
	I0501 04:16:56.309775    4352 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0501 04:16:56.309775    4352 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0501 04:16:56.309815    4352 command_runner.go:130] >   Normal  Starting                 24m                kube-proxy       
	I0501 04:16:56.309815    4352 command_runner.go:130] >   Normal  Starting                 70s                kube-proxy       
	I0501 04:16:56.309815    4352 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m                kubelet          Node multinode-289800 status is now: NodeHasSufficientMemory
	I0501 04:16:56.309856    4352 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m                kubelet          Node multinode-289800 status is now: NodeHasSufficientMemory
	I0501 04:16:56.309856    4352 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m                kubelet          Node multinode-289800 status is now: NodeHasNoDiskPressure
	I0501 04:16:56.309895    4352 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m                kubelet          Node multinode-289800 status is now: NodeHasSufficientPID
	I0501 04:16:56.309895    4352 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0501 04:16:56.309936    4352 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I0501 04:16:56.309936    4352 command_runner.go:130] >   Normal  RegisteredNode           24m                node-controller  Node multinode-289800 event: Registered Node multinode-289800 in Controller
	I0501 04:16:56.309974    4352 command_runner.go:130] >   Normal  NodeReady                24m                kubelet          Node multinode-289800 status is now: NodeReady
	I0501 04:16:56.309974    4352 command_runner.go:130] >   Normal  Starting                 80s                kubelet          Starting kubelet.
	I0501 04:16:56.309974    4352 command_runner.go:130] >   Normal  NodeHasSufficientMemory  79s (x8 over 80s)  kubelet          Node multinode-289800 status is now: NodeHasSufficientMemory
	I0501 04:16:56.310024    4352 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    79s (x8 over 80s)  kubelet          Node multinode-289800 status is now: NodeHasNoDiskPressure
	I0501 04:16:56.310083    4352 command_runner.go:130] >   Normal  NodeHasSufficientPID     79s (x7 over 80s)  kubelet          Node multinode-289800 status is now: NodeHasSufficientPID
	I0501 04:16:56.310083    4352 command_runner.go:130] >   Normal  NodeAllocatableEnforced  79s                kubelet          Updated Node Allocatable limit across pods
	I0501 04:16:56.310083    4352 command_runner.go:130] >   Normal  RegisteredNode           61s                node-controller  Node multinode-289800 event: Registered Node multinode-289800 in Controller
	I0501 04:16:56.310127    4352 command_runner.go:130] > Name:               multinode-289800-m02
	I0501 04:16:56.310127    4352 command_runner.go:130] > Roles:              <none>
	I0501 04:16:56.310127    4352 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0501 04:16:56.310127    4352 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0501 04:16:56.310168    4352 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0501 04:16:56.310168    4352 command_runner.go:130] >                     kubernetes.io/hostname=multinode-289800-m02
	I0501 04:16:56.310212    4352 command_runner.go:130] >                     kubernetes.io/os=linux
	I0501 04:16:56.310212    4352 command_runner.go:130] >                     minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	I0501 04:16:56.310253    4352 command_runner.go:130] >                     minikube.k8s.io/name=multinode-289800
	I0501 04:16:56.310253    4352 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0501 04:16:56.310253    4352 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_05_01T03_55_27_0700
	I0501 04:16:56.310297    4352 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0
	I0501 04:16:56.310297    4352 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0501 04:16:56.310339    4352 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0501 04:16:56.310339    4352 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0501 04:16:56.310339    4352 command_runner.go:130] > CreationTimestamp:  Wed, 01 May 2024 03:55:27 +0000
	I0501 04:16:56.310396    4352 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0501 04:16:56.310396    4352 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0501 04:16:56.310437    4352 command_runner.go:130] > Unschedulable:      false
	I0501 04:16:56.310437    4352 command_runner.go:130] > Lease:
	I0501 04:16:56.310437    4352 command_runner.go:130] >   HolderIdentity:  multinode-289800-m02
	I0501 04:16:56.310480    4352 command_runner.go:130] >   AcquireTime:     <unset>
	I0501 04:16:56.310480    4352 command_runner.go:130] >   RenewTime:       Wed, 01 May 2024 04:12:29 +0000
	I0501 04:16:56.310480    4352 command_runner.go:130] > Conditions:
	I0501 04:16:56.310480    4352 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0501 04:16:56.310520    4352 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0501 04:16:56.310520    4352 command_runner.go:130] >   MemoryPressure   Unknown   Wed, 01 May 2024 04:11:48 +0000   Wed, 01 May 2024 04:16:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0501 04:16:56.310571    4352 command_runner.go:130] >   DiskPressure     Unknown   Wed, 01 May 2024 04:11:48 +0000   Wed, 01 May 2024 04:16:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0501 04:16:56.310604    4352 command_runner.go:130] >   PIDPressure      Unknown   Wed, 01 May 2024 04:11:48 +0000   Wed, 01 May 2024 04:16:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0501 04:16:56.310604    4352 command_runner.go:130] >   Ready            Unknown   Wed, 01 May 2024 04:11:48 +0000   Wed, 01 May 2024 04:16:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0501 04:16:56.310648    4352 command_runner.go:130] > Addresses:
	I0501 04:16:56.310648    4352 command_runner.go:130] >   InternalIP:  172.28.219.162
	I0501 04:16:56.310648    4352 command_runner.go:130] >   Hostname:    multinode-289800-m02
	I0501 04:16:56.310648    4352 command_runner.go:130] > Capacity:
	I0501 04:16:56.310688    4352 command_runner.go:130] >   cpu:                2
	I0501 04:16:56.310688    4352 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0501 04:16:56.310688    4352 command_runner.go:130] >   hugepages-2Mi:      0
	I0501 04:16:56.310688    4352 command_runner.go:130] >   memory:             2164264Ki
	I0501 04:16:56.310688    4352 command_runner.go:130] >   pods:               110
	I0501 04:16:56.310740    4352 command_runner.go:130] > Allocatable:
	I0501 04:16:56.310740    4352 command_runner.go:130] >   cpu:                2
	I0501 04:16:56.310740    4352 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0501 04:16:56.310788    4352 command_runner.go:130] >   hugepages-2Mi:      0
	I0501 04:16:56.310788    4352 command_runner.go:130] >   memory:             2164264Ki
	I0501 04:16:56.310816    4352 command_runner.go:130] >   pods:               110
	I0501 04:16:56.310816    4352 command_runner.go:130] > System Info:
	I0501 04:16:56.310816    4352 command_runner.go:130] >   Machine ID:                 076f7b95819747b9b94c7306ec3a1144
	I0501 04:16:56.310816    4352 command_runner.go:130] >   System UUID:                a38b9d92-b32b-ca41-91ed-de4d374d0e70
	I0501 04:16:56.310816    4352 command_runner.go:130] >   Boot ID:                    c2ea27f4-2800-46b2-ab1f-c82bf0989c34
	I0501 04:16:56.310816    4352 command_runner.go:130] >   Kernel Version:             5.10.207
	I0501 04:16:56.310816    4352 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0501 04:16:56.310816    4352 command_runner.go:130] >   Operating System:           linux
	I0501 04:16:56.310816    4352 command_runner.go:130] >   Architecture:               amd64
	I0501 04:16:56.310816    4352 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0501 04:16:56.310816    4352 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0501 04:16:56.310816    4352 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0501 04:16:56.310816    4352 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0501 04:16:56.310816    4352 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0501 04:16:56.310816    4352 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0501 04:16:56.310816    4352 command_runner.go:130] >   Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0501 04:16:56.310816    4352 command_runner.go:130] >   ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	I0501 04:16:56.310816    4352 command_runner.go:130] >   default                     busybox-fc5497c4f-tbxxx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0501 04:16:56.310816    4352 command_runner.go:130] >   kube-system                 kindnet-gzz7p              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	I0501 04:16:56.310816    4352 command_runner.go:130] >   kube-system                 kube-proxy-rlzp8           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0501 04:16:56.310816    4352 command_runner.go:130] > Allocated resources:
	I0501 04:16:56.310816    4352 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0501 04:16:56.310816    4352 command_runner.go:130] >   Resource           Requests   Limits
	I0501 04:16:56.310816    4352 command_runner.go:130] >   --------           --------   ------
	I0501 04:16:56.310816    4352 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0501 04:16:56.310816    4352 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0501 04:16:56.310816    4352 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0501 04:16:56.310816    4352 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0501 04:16:56.310816    4352 command_runner.go:130] > Events:
	I0501 04:16:56.310816    4352 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0501 04:16:56.310816    4352 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0501 04:16:56.310816    4352 command_runner.go:130] >   Normal  Starting                 21m                kube-proxy       
	I0501 04:16:56.310816    4352 command_runner.go:130] >   Normal  NodeHasSufficientMemory  21m (x2 over 21m)  kubelet          Node multinode-289800-m02 status is now: NodeHasSufficientMemory
	I0501 04:16:56.310816    4352 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    21m (x2 over 21m)  kubelet          Node multinode-289800-m02 status is now: NodeHasNoDiskPressure
	I0501 04:16:56.310816    4352 command_runner.go:130] >   Normal  NodeHasSufficientPID     21m (x2 over 21m)  kubelet          Node multinode-289800-m02 status is now: NodeHasSufficientPID
	I0501 04:16:56.310816    4352 command_runner.go:130] >   Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	I0501 04:16:56.310816    4352 command_runner.go:130] >   Normal  RegisteredNode           21m                node-controller  Node multinode-289800-m02 event: Registered Node multinode-289800-m02 in Controller
	I0501 04:16:56.310816    4352 command_runner.go:130] >   Normal  NodeReady                21m                kubelet          Node multinode-289800-m02 status is now: NodeReady
	I0501 04:16:56.310816    4352 command_runner.go:130] >   Normal  RegisteredNode           61s                node-controller  Node multinode-289800-m02 event: Registered Node multinode-289800-m02 in Controller
	I0501 04:16:56.310816    4352 command_runner.go:130] >   Normal  NodeNotReady             21s                node-controller  Node multinode-289800-m02 status is now: NodeNotReady
	I0501 04:16:56.310816    4352 command_runner.go:130] > Name:               multinode-289800-m03
	I0501 04:16:56.310816    4352 command_runner.go:130] > Roles:              <none>
	I0501 04:16:56.310816    4352 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0501 04:16:56.310816    4352 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0501 04:16:56.310816    4352 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0501 04:16:56.310816    4352 command_runner.go:130] >                     kubernetes.io/hostname=multinode-289800-m03
	I0501 04:16:56.310816    4352 command_runner.go:130] >                     kubernetes.io/os=linux
	I0501 04:16:56.310816    4352 command_runner.go:130] >                     minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	I0501 04:16:56.310816    4352 command_runner.go:130] >                     minikube.k8s.io/name=multinode-289800
	I0501 04:16:56.311393    4352 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0501 04:16:56.311393    4352 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_05_01T04_11_04_0700
	I0501 04:16:56.311393    4352 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0
	I0501 04:16:56.311443    4352 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0501 04:16:56.311443    4352 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0501 04:16:56.311443    4352 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0501 04:16:56.311482    4352 command_runner.go:130] > CreationTimestamp:  Wed, 01 May 2024 04:11:04 +0000
	I0501 04:16:56.311482    4352 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0501 04:16:56.311482    4352 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0501 04:16:56.311515    4352 command_runner.go:130] > Unschedulable:      false
	I0501 04:16:56.311515    4352 command_runner.go:130] > Lease:
	I0501 04:16:56.311515    4352 command_runner.go:130] >   HolderIdentity:  multinode-289800-m03
	I0501 04:16:56.311515    4352 command_runner.go:130] >   AcquireTime:     <unset>
	I0501 04:16:56.311568    4352 command_runner.go:130] >   RenewTime:       Wed, 01 May 2024 04:12:05 +0000
	I0501 04:16:56.311568    4352 command_runner.go:130] > Conditions:
	I0501 04:16:56.311568    4352 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0501 04:16:56.311610    4352 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0501 04:16:56.311610    4352 command_runner.go:130] >   MemoryPressure   Unknown   Wed, 01 May 2024 04:11:11 +0000   Wed, 01 May 2024 04:12:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0501 04:16:56.311610    4352 command_runner.go:130] >   DiskPressure     Unknown   Wed, 01 May 2024 04:11:11 +0000   Wed, 01 May 2024 04:12:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0501 04:16:56.311610    4352 command_runner.go:130] >   PIDPressure      Unknown   Wed, 01 May 2024 04:11:11 +0000   Wed, 01 May 2024 04:12:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0501 04:16:56.311610    4352 command_runner.go:130] >   Ready            Unknown   Wed, 01 May 2024 04:11:11 +0000   Wed, 01 May 2024 04:12:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0501 04:16:56.311610    4352 command_runner.go:130] > Addresses:
	I0501 04:16:56.311610    4352 command_runner.go:130] >   InternalIP:  172.28.223.145
	I0501 04:16:56.311610    4352 command_runner.go:130] >   Hostname:    multinode-289800-m03
	I0501 04:16:56.311610    4352 command_runner.go:130] > Capacity:
	I0501 04:16:56.311610    4352 command_runner.go:130] >   cpu:                2
	I0501 04:16:56.311610    4352 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0501 04:16:56.311610    4352 command_runner.go:130] >   hugepages-2Mi:      0
	I0501 04:16:56.311610    4352 command_runner.go:130] >   memory:             2164264Ki
	I0501 04:16:56.311610    4352 command_runner.go:130] >   pods:               110
	I0501 04:16:56.311610    4352 command_runner.go:130] > Allocatable:
	I0501 04:16:56.311610    4352 command_runner.go:130] >   cpu:                2
	I0501 04:16:56.311610    4352 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0501 04:16:56.311610    4352 command_runner.go:130] >   hugepages-2Mi:      0
	I0501 04:16:56.311610    4352 command_runner.go:130] >   memory:             2164264Ki
	I0501 04:16:56.311610    4352 command_runner.go:130] >   pods:               110
	I0501 04:16:56.311610    4352 command_runner.go:130] > System Info:
	I0501 04:16:56.311610    4352 command_runner.go:130] >   Machine ID:                 7516764892cf41608a001e00e0cc7bb8
	I0501 04:16:56.311610    4352 command_runner.go:130] >   System UUID:                dc77ee49-027d-ec48-b8b1-154ba9e0a06a
	I0501 04:16:56.311610    4352 command_runner.go:130] >   Boot ID:                    bc9f9fd7-7b85-42f6-abac-952a5e1b37b8
	I0501 04:16:56.311610    4352 command_runner.go:130] >   Kernel Version:             5.10.207
	I0501 04:16:56.311610    4352 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0501 04:16:56.311610    4352 command_runner.go:130] >   Operating System:           linux
	I0501 04:16:56.311610    4352 command_runner.go:130] >   Architecture:               amd64
	I0501 04:16:56.311610    4352 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0501 04:16:56.311610    4352 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0501 04:16:56.311610    4352 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0501 04:16:56.311610    4352 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0501 04:16:56.311610    4352 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0501 04:16:56.311610    4352 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0501 04:16:56.311610    4352 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0501 04:16:56.311610    4352 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0501 04:16:56.312200    4352 command_runner.go:130] >   kube-system                 kindnet-4m5vg       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I0501 04:16:56.312200    4352 command_runner.go:130] >   kube-system                 kube-proxy-g8mbm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I0501 04:16:56.312251    4352 command_runner.go:130] > Allocated resources:
	I0501 04:16:56.312251    4352 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0501 04:16:56.312251    4352 command_runner.go:130] >   Resource           Requests   Limits
	I0501 04:16:56.312251    4352 command_runner.go:130] >   --------           --------   ------
	I0501 04:16:56.312251    4352 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0501 04:16:56.312251    4352 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0501 04:16:56.312251    4352 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0501 04:16:56.312329    4352 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0501 04:16:56.312329    4352 command_runner.go:130] > Events:
	I0501 04:16:56.312366    4352 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0501 04:16:56.312389    4352 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0501 04:16:56.312389    4352 command_runner.go:130] >   Normal  Starting                 16m                    kube-proxy       
	I0501 04:16:56.312389    4352 command_runner.go:130] >   Normal  Starting                 5m48s                  kube-proxy       
	I0501 04:16:56.312389    4352 command_runner.go:130] >   Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	I0501 04:16:56.312389    4352 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x2 over 16m)      kubelet          Node multinode-289800-m03 status is now: NodeHasSufficientMemory
	I0501 04:16:56.312389    4352 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x2 over 16m)      kubelet          Node multinode-289800-m03 status is now: NodeHasNoDiskPressure
	I0501 04:16:56.312389    4352 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x2 over 16m)      kubelet          Node multinode-289800-m03 status is now: NodeHasSufficientPID
	I0501 04:16:56.312389    4352 command_runner.go:130] >   Normal  NodeReady                16m                    kubelet          Node multinode-289800-m03 status is now: NodeReady
	I0501 04:16:56.312389    4352 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m52s (x2 over 5m52s)  kubelet          Node multinode-289800-m03 status is now: NodeHasSufficientMemory
	I0501 04:16:56.312389    4352 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m52s (x2 over 5m52s)  kubelet          Node multinode-289800-m03 status is now: NodeHasNoDiskPressure
	I0501 04:16:56.312389    4352 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m52s (x2 over 5m52s)  kubelet          Node multinode-289800-m03 status is now: NodeHasSufficientPID
	I0501 04:16:56.312389    4352 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m52s                  kubelet          Updated Node Allocatable limit across pods
	I0501 04:16:56.312389    4352 command_runner.go:130] >   Normal  RegisteredNode           5m47s                  node-controller  Node multinode-289800-m03 event: Registered Node multinode-289800-m03 in Controller
	I0501 04:16:56.312389    4352 command_runner.go:130] >   Normal  NodeReady                5m45s                  kubelet          Node multinode-289800-m03 status is now: NodeReady
	I0501 04:16:56.312389    4352 command_runner.go:130] >   Normal  NodeNotReady             4m7s                   node-controller  Node multinode-289800-m03 status is now: NodeNotReady
	I0501 04:16:56.312389    4352 command_runner.go:130] >   Normal  RegisteredNode           61s                    node-controller  Node multinode-289800-m03 event: Registered Node multinode-289800-m03 in Controller
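	(The describe output above is the failure signature for this run: multinode-289800 is Ready, while multinode-289800-m02 and -m03 both carry node.kubernetes.io/unreachable NoSchedule/NoExecute taints and report every condition as Unknown with "Kubelet stopped posting node status." A quick way to reproduce just that summary, and to confirm the taints on a suspect node, is sketched below; the context name again assumes the profile's kubeconfig entry still exists:

	kubectl --context multinode-289800 get nodes -o wide
	kubectl --context multinode-289800 get node multinode-289800-m02 \
	  -o jsonpath='{.spec.taints[*].key}{"\n"}'

	The taints are applied by the node-lifecycle controller once a kubelet misses its lease renewals, which is consistent with the stale RenewTime values shown for both worker nodes.)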
	I0501 04:16:56.322942    4352 logs.go:123] Gathering logs for coredns [b8a9b405d76b] ...
	I0501 04:16:56.322942    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8a9b405d76b"
	I0501 04:16:56.376703    4352 command_runner.go:130] > .:53
	I0501 04:16:56.376766    4352 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 93f4fe825695a41d5751760d66496ca9593f0bf8b9ec4239a6fbc33c30b6f070cfe5a066c5977052ab0ea80abbafa6083758e385108cb741ae48dc4b780ff0f9
	I0501 04:16:56.376766    4352 command_runner.go:130] > CoreDNS-1.11.1
	I0501 04:16:56.376766    4352 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0501 04:16:56.376828    4352 command_runner.go:130] > [INFO] 127.0.0.1:40469 - 32708 "HINFO IN 1085250392681766432.1461243850492468212. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.135567722s
	I0501 04:16:56.377071    4352 logs.go:123] Gathering logs for coredns [8a0208aeafcf] ...
	I0501 04:16:56.377165    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a0208aeafcf"
	I0501 04:16:56.416710    4352 command_runner.go:130] > .:53
	I0501 04:16:56.416754    4352 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 93f4fe825695a41d5751760d66496ca9593f0bf8b9ec4239a6fbc33c30b6f070cfe5a066c5977052ab0ea80abbafa6083758e385108cb741ae48dc4b780ff0f9
	I0501 04:16:56.416754    4352 command_runner.go:130] > CoreDNS-1.11.1
	I0501 04:16:56.416754    4352 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0501 04:16:56.416754    4352 command_runner.go:130] > [INFO] 127.0.0.1:52159 - 35492 "HINFO IN 5750380281790413371.3552283498234348593. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.042351696s
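	(Both coredns replicas log the same configuration SHA and a single HINFO query for a long random name answered with NXDOMAIN; that is CoreDNS's loop-detection self-query at startup, not an error. To inspect the active Corefile behind that SHA, assuming the same context as above, one can read the standard coredns ConfigMap:

	kubectl --context multinode-289800 -n kube-system get configmap coredns \
	  -o jsonpath='{.data.Corefile}')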
	I0501 04:16:56.417168    4352 logs.go:123] Gathering logs for kube-controller-manager [66a1b89e6733] ...
	I0501 04:16:56.417351    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a1b89e6733"
	I0501 04:16:56.455218    4352 command_runner.go:130] ! I0501 04:15:39.740014       1 serving.go:380] Generated self-signed cert in-memory
	I0501 04:16:56.455874    4352 command_runner.go:130] ! I0501 04:15:40.254324       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0501 04:16:56.455874    4352 command_runner.go:130] ! I0501 04:15:40.254368       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:16:56.456011    4352 command_runner.go:130] ! I0501 04:15:40.263842       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0501 04:16:56.456011    4352 command_runner.go:130] ! I0501 04:15:40.264273       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0501 04:16:56.456011    4352 command_runner.go:130] ! I0501 04:15:40.265102       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0501 04:16:56.456011    4352 command_runner.go:130] ! I0501 04:15:40.265435       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0501 04:16:56.456134    4352 command_runner.go:130] ! I0501 04:15:44.420436       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0501 04:16:56.456134    4352 command_runner.go:130] ! I0501 04:15:44.421597       1 controllermanager.go:759] "Started controller" controller="serviceaccount-token-controller"
	I0501 04:16:56.456196    4352 command_runner.go:130] ! I0501 04:15:44.430683       1 controllermanager.go:759] "Started controller" controller="persistentvolume-protection-controller"
	I0501 04:16:56.456196    4352 command_runner.go:130] ! I0501 04:15:44.430949       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0501 04:16:56.456301    4352 command_runner.go:130] ! I0501 04:15:44.431056       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0501 04:16:56.456301    4352 command_runner.go:130] ! I0501 04:15:44.437281       1 controllermanager.go:759] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0501 04:16:56.456301    4352 command_runner.go:130] ! I0501 04:15:44.440408       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0501 04:16:56.456301    4352 command_runner.go:130] ! I0501 04:15:44.437711       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0501 04:16:56.456486    4352 command_runner.go:130] ! I0501 04:15:44.440933       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0501 04:16:56.456547    4352 command_runner.go:130] ! I0501 04:15:44.450877       1 controllermanager.go:759] "Started controller" controller="replicationcontroller-controller"
	I0501 04:16:56.456547    4352 command_runner.go:130] ! I0501 04:15:44.452935       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0501 04:16:56.456642    4352 command_runner.go:130] ! I0501 04:15:44.452958       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0501 04:16:56.456739    4352 command_runner.go:130] ! I0501 04:15:44.458231       1 controllermanager.go:759] "Started controller" controller="serviceaccount-controller"
	I0501 04:16:56.456739    4352 command_runner.go:130] ! I0501 04:15:44.458525       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0501 04:16:56.456739    4352 command_runner.go:130] ! I0501 04:15:44.458548       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0501 04:16:56.456739    4352 command_runner.go:130] ! I0501 04:15:44.467611       1 controllermanager.go:759] "Started controller" controller="token-cleaner-controller"
	I0501 04:16:56.456739    4352 command_runner.go:130] ! I0501 04:15:44.468036       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0501 04:16:56.456876    4352 command_runner.go:130] ! I0501 04:15:44.468093       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0501 04:16:56.456876    4352 command_runner.go:130] ! I0501 04:15:44.468107       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0501 04:16:56.456876    4352 command_runner.go:130] ! I0501 04:15:44.484825       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0501 04:16:56.456876    4352 command_runner.go:130] ! I0501 04:15:44.484856       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0501 04:16:56.457012    4352 command_runner.go:130] ! I0501 04:15:44.484892       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0501 04:16:56.457012    4352 command_runner.go:130] ! I0501 04:15:44.485128       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0501 04:16:56.457012    4352 command_runner.go:130] ! I0501 04:15:44.485186       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0501 04:16:56.457134    4352 command_runner.go:130] ! I0501 04:15:44.485221       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0501 04:16:56.457134    4352 command_runner.go:130] ! I0501 04:15:44.485229       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0501 04:16:56.457250    4352 command_runner.go:130] ! I0501 04:15:44.485246       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0501 04:16:56.457250    4352 command_runner.go:130] ! I0501 04:15:44.485322       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0501 04:16:56.457250    4352 command_runner.go:130] ! I0501 04:15:44.488601       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0501 04:16:56.457369    4352 command_runner.go:130] ! I0501 04:15:44.488943       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0501 04:16:56.457439    4352 command_runner.go:130] ! I0501 04:15:44.488958       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0501 04:16:56.457477    4352 command_runner.go:130] ! I0501 04:15:44.488985       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0501 04:16:56.457571    4352 command_runner.go:130] ! I0501 04:15:44.523143       1 shared_informer.go:320] Caches are synced for tokens
	I0501 04:16:56.457611    4352 command_runner.go:130] ! I0501 04:15:44.644894       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0501 04:16:56.457753    4352 command_runner.go:130] ! I0501 04:15:44.645016       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0501 04:16:56.457753    4352 command_runner.go:130] ! I0501 04:15:44.645088       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0501 04:16:56.457854    4352 command_runner.go:130] ! I0501 04:15:44.645112       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0501 04:16:56.457854    4352 command_runner.go:130] ! I0501 04:15:44.646888       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0501 04:16:56.457915    4352 command_runner.go:130] ! W0501 04:15:44.646984       1 shared_informer.go:597] resyncPeriod 15h44m19.234758052s is smaller than resyncCheckPeriod 17h55m23.133739358s and the informer has already started. Changing it to 17h55m23.133739358s
	I0501 04:16:56.458000    4352 command_runner.go:130] ! W0501 04:15:44.647035       1 shared_informer.go:597] resyncPeriod 17h52m42.538614251s is smaller than resyncCheckPeriod 17h55m23.133739358s and the informer has already started. Changing it to 17h55m23.133739358s
	I0501 04:16:56.458059    4352 command_runner.go:130] ! I0501 04:15:44.647224       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0501 04:16:56.458059    4352 command_runner.go:130] ! I0501 04:15:44.647325       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0501 04:16:56.458132    4352 command_runner.go:130] ! I0501 04:15:44.647389       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0501 04:16:56.458211    4352 command_runner.go:130] ! I0501 04:15:44.647418       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0501 04:16:56.458211    4352 command_runner.go:130] ! I0501 04:15:44.647559       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0501 04:16:56.458312    4352 command_runner.go:130] ! I0501 04:15:44.647580       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0501 04:16:56.458312    4352 command_runner.go:130] ! I0501 04:15:44.648269       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0501 04:16:56.458449    4352 command_runner.go:130] ! I0501 04:15:44.648364       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0501 04:16:56.458449    4352 command_runner.go:130] ! I0501 04:15:44.648387       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0501 04:16:56.458584    4352 command_runner.go:130] ! I0501 04:15:44.648418       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0501 04:16:56.458674    4352 command_runner.go:130] ! I0501 04:15:44.648519       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0501 04:16:56.458712    4352 command_runner.go:130] ! I0501 04:15:44.648561       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0501 04:16:56.458712    4352 command_runner.go:130] ! I0501 04:15:44.648582       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0501 04:16:56.458712    4352 command_runner.go:130] ! I0501 04:15:44.648601       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0501 04:16:56.458823    4352 command_runner.go:130] ! I0501 04:15:44.648633       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0501 04:16:56.458823    4352 command_runner.go:130] ! I0501 04:15:44.648662       1 controllermanager.go:759] "Started controller" controller="resourcequota-controller"
	I0501 04:16:56.458823    4352 command_runner.go:130] ! I0501 04:15:44.649971       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0501 04:16:56.458823    4352 command_runner.go:130] ! I0501 04:15:44.649999       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0501 04:16:56.458957    4352 command_runner.go:130] ! I0501 04:15:44.650094       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0501 04:16:56.458957    4352 command_runner.go:130] ! I0501 04:15:44.658545       1 controllermanager.go:759] "Started controller" controller="daemonset-controller"
	I0501 04:16:56.458957    4352 command_runner.go:130] ! I0501 04:15:44.664070       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0501 04:16:56.458957    4352 command_runner.go:130] ! I0501 04:15:44.664109       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0501 04:16:56.459072    4352 command_runner.go:130] ! I0501 04:15:44.672333       1 controllermanager.go:759] "Started controller" controller="replicaset-controller"
	I0501 04:16:56.459072    4352 command_runner.go:130] ! I0501 04:15:44.672648       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0501 04:16:56.459072    4352 command_runner.go:130] ! I0501 04:15:44.673224       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0501 04:16:56.459072    4352 command_runner.go:130] ! E0501 04:15:44.680086       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0501 04:16:56.459232    4352 command_runner.go:130] ! I0501 04:15:44.680207       1 controllermanager.go:737] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0501 04:16:56.459232    4352 command_runner.go:130] ! I0501 04:15:44.686271       1 controllermanager.go:759] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0501 04:16:56.459232    4352 command_runner.go:130] ! I0501 04:15:44.687804       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0501 04:16:56.459380    4352 command_runner.go:130] ! I0501 04:15:44.688087       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0501 04:16:56.459380    4352 command_runner.go:130] ! I0501 04:15:44.691064       1 controllermanager.go:759] "Started controller" controller="ephemeral-volume-controller"
	I0501 04:16:56.459380    4352 command_runner.go:130] ! I0501 04:15:44.694139       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0501 04:16:56.459380    4352 command_runner.go:130] ! I0501 04:15:44.694154       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0501 04:16:56.459496    4352 command_runner.go:130] ! I0501 04:15:44.697309       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0501 04:16:56.459496    4352 command_runner.go:130] ! I0501 04:15:44.697808       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0501 04:16:56.459496    4352 command_runner.go:130] ! I0501 04:15:44.698725       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0501 04:16:56.459609    4352 command_runner.go:130] ! I0501 04:15:44.709020       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0501 04:16:56.459609    4352 command_runner.go:130] ! I0501 04:15:44.709557       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0501 04:16:56.459609    4352 command_runner.go:130] ! I0501 04:15:44.718572       1 controllermanager.go:759] "Started controller" controller="bootstrap-signer-controller"
	I0501 04:16:56.459724    4352 command_runner.go:130] ! I0501 04:15:44.718866       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0501 04:16:56.459724    4352 command_runner.go:130] ! I0501 04:15:44.731386       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0501 04:16:56.459724    4352 command_runner.go:130] ! I0501 04:15:44.731502       1 controllermanager.go:759] "Started controller" controller="node-lifecycle-controller"
	I0501 04:16:56.459830    4352 command_runner.go:130] ! I0501 04:15:44.731520       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0501 04:16:56.459830    4352 command_runner.go:130] ! I0501 04:15:44.731794       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0501 04:16:56.459889    4352 command_runner.go:130] ! I0501 04:15:44.732008       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0501 04:16:56.459889    4352 command_runner.go:130] ! I0501 04:15:44.732024       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0501 04:16:56.459889    4352 command_runner.go:130] ! I0501 04:15:44.732060       1 controllermanager.go:737] "Warning: skipping controller" controller="node-route-controller"
	I0501 04:16:56.459889    4352 command_runner.go:130] ! I0501 04:15:44.739601       1 controllermanager.go:759] "Started controller" controller="clusterrole-aggregation-controller"
	I0501 04:16:56.459889    4352 command_runner.go:130] ! I0501 04:15:44.741937       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0501 04:16:56.460043    4352 command_runner.go:130] ! I0501 04:15:44.742091       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0501 04:16:56.460043    4352 command_runner.go:130] ! I0501 04:15:44.751335       1 controllermanager.go:759] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0501 04:16:56.460043    4352 command_runner.go:130] ! I0501 04:15:44.758177       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0501 04:16:56.460161    4352 command_runner.go:130] ! I0501 04:15:44.767021       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0501 04:16:56.460161    4352 command_runner.go:130] ! I0501 04:15:44.776399       1 controllermanager.go:759] "Started controller" controller="ttl-after-finished-controller"
	I0501 04:16:56.460161    4352 command_runner.go:130] ! I0501 04:15:44.777830       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0501 04:16:56.460161    4352 command_runner.go:130] ! I0501 04:15:44.780031       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0501 04:16:56.460285    4352 command_runner.go:130] ! I0501 04:15:44.783346       1 controllermanager.go:759] "Started controller" controller="endpointslice-controller"
	I0501 04:16:56.460285    4352 command_runner.go:130] ! I0501 04:15:44.784386       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0501 04:16:56.460285    4352 command_runner.go:130] ! I0501 04:15:44.784668       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0501 04:16:56.460410    4352 command_runner.go:130] ! I0501 04:15:44.790586       1 controllermanager.go:759] "Started controller" controller="job-controller"
	I0501 04:16:56.460410    4352 command_runner.go:130] ! I0501 04:15:44.791028       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0501 04:16:56.460410    4352 command_runner.go:130] ! I0501 04:15:44.791148       1 shared_informer.go:313] Waiting for caches to sync for job
	I0501 04:16:56.460410    4352 command_runner.go:130] ! I0501 04:15:44.795072       1 controllermanager.go:759] "Started controller" controller="deployment-controller"
	I0501 04:16:56.460523    4352 command_runner.go:130] ! I0501 04:15:44.795486       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0501 04:16:56.460523    4352 command_runner.go:130] ! I0501 04:15:44.796321       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0501 04:16:56.460523    4352 command_runner.go:130] ! I0501 04:15:44.806964       1 controllermanager.go:759] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0501 04:16:56.460631    4352 command_runner.go:130] ! I0501 04:15:44.807399       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0501 04:16:56.460631    4352 command_runner.go:130] ! I0501 04:15:44.808302       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0501 04:16:56.460631    4352 command_runner.go:130] ! I0501 04:15:44.810677       1 controllermanager.go:759] "Started controller" controller="cronjob-controller"
	I0501 04:16:56.460742    4352 command_runner.go:130] ! I0501 04:15:44.811276       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0501 04:16:56.460742    4352 command_runner.go:130] ! I0501 04:15:44.812128       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0501 04:16:56.460742    4352 command_runner.go:130] ! I0501 04:15:44.814338       1 controllermanager.go:759] "Started controller" controller="persistentvolume-expander-controller"
	I0501 04:16:56.460856    4352 command_runner.go:130] ! I0501 04:15:44.814699       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0501 04:16:56.460856    4352 command_runner.go:130] ! I0501 04:15:44.815465       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0501 04:16:56.460856    4352 command_runner.go:130] ! I0501 04:15:44.818437       1 controllermanager.go:759] "Started controller" controller="taint-eviction-controller"
	I0501 04:16:56.460969    4352 command_runner.go:130] ! I0501 04:15:44.819004       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0501 04:16:56.460969    4352 command_runner.go:130] ! I0501 04:15:44.818976       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0501 04:16:56.460969    4352 command_runner.go:130] ! I0501 04:15:44.820305       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0501 04:16:56.461073    4352 command_runner.go:130] ! I0501 04:15:44.820518       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0501 04:16:56.461073    4352 command_runner.go:130] ! I0501 04:15:44.822359       1 controllermanager.go:759] "Started controller" controller="pod-garbage-collector-controller"
	I0501 04:16:56.461073    4352 command_runner.go:130] ! I0501 04:15:44.824878       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0501 04:16:56.461184    4352 command_runner.go:130] ! I0501 04:15:44.825167       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0501 04:16:56.461184    4352 command_runner.go:130] ! I0501 04:15:44.835687       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0501 04:16:56.461184    4352 command_runner.go:130] ! I0501 04:15:44.835705       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0501 04:16:56.461184    4352 command_runner.go:130] ! I0501 04:15:44.835739       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0501 04:16:56.461300    4352 command_runner.go:130] ! I0501 04:15:44.836623       1 controllermanager.go:759] "Started controller" controller="garbage-collector-controller"
	I0501 04:16:56.461300    4352 command_runner.go:130] ! E0501 04:15:44.845522       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0501 04:16:56.461300    4352 command_runner.go:130] ! I0501 04:15:44.845590       1 controllermanager.go:737] "Warning: skipping controller" controller="service-lb-controller"
	I0501 04:16:56.461420    4352 command_runner.go:130] ! I0501 04:15:44.975590       1 controllermanager.go:759] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0501 04:16:56.461420    4352 command_runner.go:130] ! I0501 04:15:44.975737       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0501 04:16:56.461420    4352 command_runner.go:130] ! I0501 04:15:45.026863       1 controllermanager.go:759] "Started controller" controller="endpointslice-mirroring-controller"
	I0501 04:16:56.461524    4352 command_runner.go:130] ! I0501 04:15:45.026966       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0501 04:16:56.461524    4352 command_runner.go:130] ! I0501 04:15:45.026980       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0501 04:16:56.461524    4352 command_runner.go:130] ! I0501 04:15:45.188029       1 controllermanager.go:759] "Started controller" controller="namespace-controller"
	I0501 04:16:56.461632    4352 command_runner.go:130] ! I0501 04:15:45.191154       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0501 04:16:56.461632    4352 command_runner.go:130] ! I0501 04:15:45.191606       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0501 04:16:56.461632    4352 command_runner.go:130] ! I0501 04:15:45.234916       1 controllermanager.go:759] "Started controller" controller="ttl-controller"
	I0501 04:16:56.461632    4352 command_runner.go:130] ! I0501 04:15:45.235592       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0501 04:16:56.461632    4352 command_runner.go:130] ! I0501 04:15:45.235855       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0501 04:16:56.461632    4352 command_runner.go:130] ! I0501 04:15:45.275946       1 controllermanager.go:759] "Started controller" controller="persistentvolume-binder-controller"
	I0501 04:16:56.462566    4352 command_runner.go:130] ! I0501 04:15:45.276219       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0501 04:16:56.462641    4352 command_runner.go:130] ! I0501 04:15:45.277151       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0501 04:16:56.462672    4352 command_runner.go:130] ! I0501 04:15:45.277668       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0501 04:16:56.462723    4352 command_runner.go:130] ! I0501 04:15:55.347039       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0501 04:16:56.462798    4352 command_runner.go:130] ! I0501 04:15:55.347226       1 controllermanager.go:759] "Started controller" controller="node-ipam-controller"
	I0501 04:16:56.462798    4352 command_runner.go:130] ! I0501 04:15:55.347657       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0501 04:16:56.462838    4352 command_runner.go:130] ! I0501 04:15:55.347697       1 shared_informer.go:313] Waiting for caches to sync for node
	I0501 04:16:56.462838    4352 command_runner.go:130] ! I0501 04:15:55.351170       1 controllermanager.go:759] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0501 04:16:56.462934    4352 command_runner.go:130] ! I0501 04:15:55.351453       1 controllermanager.go:737] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0501 04:16:56.463169    4352 command_runner.go:130] ! I0501 04:15:55.351701       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0501 04:16:56.463230    4352 command_runner.go:130] ! I0501 04:15:55.352658       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0501 04:16:56.463230    4352 command_runner.go:130] ! I0501 04:15:55.355868       1 controllermanager.go:759] "Started controller" controller="endpoints-controller"
	I0501 04:16:56.463230    4352 command_runner.go:130] ! I0501 04:15:55.356195       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.356581       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.373530       1 controllermanager.go:759] "Started controller" controller="disruption-controller"
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.375966       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.376087       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.376099       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.381581       1 controllermanager.go:759] "Started controller" controller="statefulset-controller"
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.387752       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.398512       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.398855       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.433745       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.433841       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.434861       1 shared_informer.go:320] Caches are synced for PV protection
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.437855       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-289800\" does not exist"
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.438225       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-289800-m02\" does not exist"
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.438314       1 shared_informer.go:320] Caches are synced for TTL
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.438445       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-289800-m03\" does not exist"
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.438531       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.441880       1 shared_informer.go:320] Caches are synced for crt configmap
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.442281       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.448289       1 shared_informer.go:320] Caches are synced for node
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.448378       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.448532       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.448564       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.448615       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.452662       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.453060       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.453136       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.459094       1 shared_informer.go:320] Caches are synced for service account
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.465378       1 shared_informer.go:320] Caches are synced for daemon sets
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.468998       1 shared_informer.go:320] Caches are synced for PVC protection
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.476103       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0501 04:16:56.463888    4352 command_runner.go:130] ! I0501 04:15:55.479405       1 shared_informer.go:320] Caches are synced for persistent volume
	I0501 04:16:56.463888    4352 command_runner.go:130] ! I0501 04:15:55.480400       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0501 04:16:56.463888    4352 command_runner.go:130] ! I0501 04:15:55.485347       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0501 04:16:56.463888    4352 command_runner.go:130] ! I0501 04:15:55.485423       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0501 04:16:56.463888    4352 command_runner.go:130] ! I0501 04:15:55.485459       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0501 04:16:56.463888    4352 command_runner.go:130] ! I0501 04:15:55.488987       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0501 04:16:56.464399    4352 command_runner.go:130] ! I0501 04:15:55.489270       1 shared_informer.go:320] Caches are synced for attach detach
	I0501 04:16:56.464399    4352 command_runner.go:130] ! I0501 04:15:55.492066       1 shared_informer.go:320] Caches are synced for namespace
	I0501 04:16:56.464399    4352 command_runner.go:130] ! I0501 04:15:55.492447       1 shared_informer.go:320] Caches are synced for job
	I0501 04:16:56.464399    4352 command_runner.go:130] ! I0501 04:15:55.494972       1 shared_informer.go:320] Caches are synced for ephemeral
	I0501 04:16:56.464399    4352 command_runner.go:130] ! I0501 04:15:55.497059       1 shared_informer.go:320] Caches are synced for deployment
	I0501 04:16:56.464399    4352 command_runner.go:130] ! I0501 04:15:55.499153       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0501 04:16:56.464399    4352 command_runner.go:130] ! I0501 04:15:55.499594       1 shared_informer.go:320] Caches are synced for stateful set
	I0501 04:16:56.464553    4352 command_runner.go:130] ! I0501 04:15:55.509506       1 shared_informer.go:320] Caches are synced for HPA
	I0501 04:16:56.464608    4352 command_runner.go:130] ! I0501 04:15:55.513444       1 shared_informer.go:320] Caches are synced for cronjob
	I0501 04:16:56.464608    4352 command_runner.go:130] ! I0501 04:15:55.517356       1 shared_informer.go:320] Caches are synced for expand
	I0501 04:16:56.464608    4352 command_runner.go:130] ! I0501 04:15:55.519269       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0501 04:16:56.464667    4352 command_runner.go:130] ! I0501 04:15:55.521379       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0501 04:16:56.464718    4352 command_runner.go:130] ! I0501 04:15:55.527109       1 shared_informer.go:320] Caches are synced for GC
	I0501 04:16:56.464771    4352 command_runner.go:130] ! I0501 04:15:55.533712       1 shared_informer.go:320] Caches are synced for taint
	I0501 04:16:56.464821    4352 command_runner.go:130] ! I0501 04:15:55.534052       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0501 04:16:56.464884    4352 command_runner.go:130] ! I0501 04:15:55.562220       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-289800"
	I0501 04:16:56.464884    4352 command_runner.go:130] ! I0501 04:15:55.562294       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-289800-m02"
	I0501 04:16:56.465020    4352 command_runner.go:130] ! I0501 04:15:55.562374       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-289800-m03"
	I0501 04:16:56.465081    4352 command_runner.go:130] ! I0501 04:15:55.562434       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0501 04:16:56.465122    4352 command_runner.go:130] ! I0501 04:15:55.574228       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0501 04:16:56.465122    4352 command_runner.go:130] ! I0501 04:15:55.576283       1 shared_informer.go:320] Caches are synced for disruption
	I0501 04:16:56.465183    4352 command_runner.go:130] ! I0501 04:15:55.610948       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.488314ms"
	I0501 04:16:56.465240    4352 command_runner.go:130] ! I0501 04:15:55.611568       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="71.799µs"
	I0501 04:16:56.465300    4352 command_runner.go:130] ! I0501 04:15:55.619708       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="45.171745ms"
	I0501 04:16:56.465371    4352 command_runner.go:130] ! I0501 04:15:55.620238       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="472.596µs"
	I0501 04:16:56.465371    4352 command_runner.go:130] ! I0501 04:15:55.628824       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0501 04:16:56.465428    4352 command_runner.go:130] ! I0501 04:15:55.650837       1 shared_informer.go:320] Caches are synced for resource quota
	I0501 04:16:56.465481    4352 command_runner.go:130] ! I0501 04:15:55.657374       1 shared_informer.go:320] Caches are synced for endpoint
	I0501 04:16:56.465537    4352 command_runner.go:130] ! I0501 04:15:55.685503       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0501 04:16:56.465592    4352 command_runner.go:130] ! I0501 04:15:55.700006       1 shared_informer.go:320] Caches are synced for resource quota
	I0501 04:16:56.465592    4352 command_runner.go:130] ! I0501 04:15:56.136638       1 shared_informer.go:320] Caches are synced for garbage collector
	I0501 04:16:56.465651    4352 command_runner.go:130] ! I0501 04:15:56.136685       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0501 04:16:56.465704    4352 command_runner.go:130] ! I0501 04:15:56.152886       1 shared_informer.go:320] Caches are synced for garbage collector
	I0501 04:16:56.465704    4352 command_runner.go:130] ! I0501 04:16:16.638494       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:56.465746    4352 command_runner.go:130] ! I0501 04:16:35.670965       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.004646ms"
	I0501 04:16:56.465861    4352 command_runner.go:130] ! I0501 04:16:35.674472       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.702µs"
	I0501 04:16:56.465968    4352 command_runner.go:130] ! I0501 04:16:49.079199       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="127.703µs"
	I0501 04:16:56.465968    4352 command_runner.go:130] ! I0501 04:16:49.148697       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="24.735082ms"
	I0501 04:16:56.465968    4352 command_runner.go:130] ! I0501 04:16:49.149307       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="110.503µs"
	I0501 04:16:56.466069    4352 command_runner.go:130] ! I0501 04:16:49.187683       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.244247ms"
	I0501 04:16:56.466069    4352 command_runner.go:130] ! I0501 04:16:49.188221       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.9µs"
	I0501 04:16:56.466107    4352 command_runner.go:130] ! I0501 04:16:49.221273       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="25.255693ms"
	I0501 04:16:56.466150    4352 command_runner.go:130] ! I0501 04:16:49.221694       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="88.902µs"
	I0501 04:16:56.484885    4352 logs.go:123] Gathering logs for dmesg ...
	I0501 04:16:56.484885    4352 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 04:16:56.513601    4352 command_runner.go:130] > [May 1 04:13] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0501 04:16:56.513601    4352 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0501 04:16:56.513601    4352 command_runner.go:130] > [  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0501 04:16:56.513601    4352 command_runner.go:130] > [  +0.128235] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0501 04:16:56.513747    4352 command_runner.go:130] > [  +0.023886] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0501 04:16:56.513819    4352 command_runner.go:130] > [  +0.000005] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0501 04:16:56.513875    4352 command_runner.go:130] > [  +0.000001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0501 04:16:56.513875    4352 command_runner.go:130] > [  +0.057986] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0501 04:16:56.513948    4352 command_runner.go:130] > [  +0.022012] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0501 04:16:56.513948    4352 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0501 04:16:56.513948    4352 command_runner.go:130] > [  +5.683380] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0501 04:16:56.513948    4352 command_runner.go:130] > [May 1 04:14] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0501 04:16:56.514138    4352 command_runner.go:130] > [  +1.282885] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0501 04:16:56.514138    4352 command_runner.go:130] > [  +7.215175] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0501 04:16:56.514138    4352 command_runner.go:130] > [  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0501 04:16:56.514138    4352 command_runner.go:130] > [  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	I0501 04:16:56.514138    4352 command_runner.go:130] > [ +49.815364] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	I0501 04:16:56.514138    4352 command_runner.go:130] > [  +0.200985] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	I0501 04:16:56.514138    4352 command_runner.go:130] > [May 1 04:15] systemd-fstab-generator[972]: Ignoring "noauto" option for root device
	I0501 04:16:56.514232    4352 command_runner.go:130] > [  +0.127967] kauditd_printk_skb: 73 callbacks suppressed
	I0501 04:16:56.514232    4352 command_runner.go:130] > [  +0.582263] systemd-fstab-generator[1012]: Ignoring "noauto" option for root device
	I0501 04:16:56.514232    4352 command_runner.go:130] > [  +0.225161] systemd-fstab-generator[1023]: Ignoring "noauto" option for root device
	I0501 04:16:56.514270    4352 command_runner.go:130] > [  +0.250911] systemd-fstab-generator[1037]: Ignoring "noauto" option for root device
	I0501 04:16:56.514270    4352 command_runner.go:130] > [  +3.012463] systemd-fstab-generator[1226]: Ignoring "noauto" option for root device
	I0501 04:16:56.514303    4352 command_runner.go:130] > [  +0.224116] systemd-fstab-generator[1238]: Ignoring "noauto" option for root device
	I0501 04:16:56.514303    4352 command_runner.go:130] > [  +0.208959] systemd-fstab-generator[1250]: Ignoring "noauto" option for root device
	I0501 04:16:56.514303    4352 command_runner.go:130] > [  +0.295566] systemd-fstab-generator[1265]: Ignoring "noauto" option for root device
	I0501 04:16:56.514303    4352 command_runner.go:130] > [  +0.942002] systemd-fstab-generator[1376]: Ignoring "noauto" option for root device
	I0501 04:16:56.514303    4352 command_runner.go:130] > [  +0.104482] kauditd_printk_skb: 205 callbacks suppressed
	I0501 04:16:56.514303    4352 command_runner.go:130] > [  +4.196160] systemd-fstab-generator[1518]: Ignoring "noauto" option for root device
	I0501 04:16:56.514303    4352 command_runner.go:130] > [  +1.305789] kauditd_printk_skb: 44 callbacks suppressed
	I0501 04:16:56.514303    4352 command_runner.go:130] > [  +5.930267] kauditd_printk_skb: 30 callbacks suppressed
	I0501 04:16:56.514303    4352 command_runner.go:130] > [  +4.234940] systemd-fstab-generator[2337]: Ignoring "noauto" option for root device
	I0501 04:16:56.514303    4352 command_runner.go:130] > [  +7.700271] kauditd_printk_skb: 70 callbacks suppressed
	I0501 04:16:59.025267    4352 api_server.go:253] Checking apiserver healthz at https://172.28.209.199:8443/healthz ...
	I0501 04:16:59.035373    4352 api_server.go:279] https://172.28.209.199:8443/healthz returned 200:
	ok
	I0501 04:16:59.035721    4352 round_trippers.go:463] GET https://172.28.209.199:8443/version
	I0501 04:16:59.035800    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:59.035800    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:59.035844    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:59.037152    4352 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0501 04:16:59.037152    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:59.037152    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:59.037152    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:59.037152    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:59.037152    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:59.037152    4352 round_trippers.go:580]     Content-Length: 263
	I0501 04:16:59.037152    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:59 GMT
	I0501 04:16:59.037152    4352 round_trippers.go:580]     Audit-Id: 2404fd61-6bc6-467d-a785-d44e96b27036
	I0501 04:16:59.037152    4352 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0501 04:16:59.037152    4352 api_server.go:141] control plane version: v1.30.0
	I0501 04:16:59.037152    4352 api_server.go:131] duration metric: took 4.0329758s to wait for apiserver health ...
	I0501 04:16:59.037152    4352 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 04:16:59.049812    4352 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0501 04:16:59.079918    4352 command_runner.go:130] > 18cd30f3ad28
	I0501 04:16:59.080370    4352 logs.go:276] 1 containers: [18cd30f3ad28]
	I0501 04:16:59.091264    4352 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0501 04:16:59.121244    4352 command_runner.go:130] > 34892fdb6898
	I0501 04:16:59.121244    4352 logs.go:276] 1 containers: [34892fdb6898]
	I0501 04:16:59.131230    4352 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0501 04:16:59.164227    4352 command_runner.go:130] > b8a9b405d76b
	I0501 04:16:59.164227    4352 command_runner.go:130] > 8a0208aeafcf
	I0501 04:16:59.164227    4352 command_runner.go:130] > 15c4496e3a9f
	I0501 04:16:59.164227    4352 command_runner.go:130] > 3e8d5ff9a9e4
	I0501 04:16:59.164818    4352 logs.go:276] 4 containers: [b8a9b405d76b 8a0208aeafcf 15c4496e3a9f 3e8d5ff9a9e4]
	I0501 04:16:59.175998    4352 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0501 04:16:59.206001    4352 command_runner.go:130] > eaf69fce5ee3
	I0501 04:16:59.206001    4352 command_runner.go:130] > 06f1f84bfde1
	I0501 04:16:59.210788    4352 logs.go:276] 2 containers: [eaf69fce5ee3 06f1f84bfde1]
	I0501 04:16:59.221911    4352 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0501 04:16:59.260743    4352 command_runner.go:130] > 3efcc92f817e
	I0501 04:16:59.260743    4352 command_runner.go:130] > 502684407b0c
	I0501 04:16:59.260743    4352 logs.go:276] 2 containers: [3efcc92f817e 502684407b0c]
	I0501 04:16:59.270752    4352 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0501 04:16:59.296707    4352 command_runner.go:130] > 66a1b89e6733
	I0501 04:16:59.296707    4352 command_runner.go:130] > 4b62556f40be
	I0501 04:16:59.298599    4352 logs.go:276] 2 containers: [66a1b89e6733 4b62556f40be]
	I0501 04:16:59.309612    4352 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0501 04:16:59.334619    4352 command_runner.go:130] > b7cae3f6b88b
	I0501 04:16:59.335632    4352 command_runner.go:130] > 6d5f881ef398
	I0501 04:16:59.335632    4352 logs.go:276] 2 containers: [b7cae3f6b88b 6d5f881ef398]
	I0501 04:16:59.335701    4352 logs.go:123] Gathering logs for dmesg ...
	I0501 04:16:59.335873    4352 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 04:16:59.362549    4352 command_runner.go:130] > [May 1 04:13] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0501 04:16:59.362549    4352 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0501 04:16:59.362549    4352 command_runner.go:130] > [  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0501 04:16:59.362549    4352 command_runner.go:130] > [  +0.128235] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0501 04:16:59.362549    4352 command_runner.go:130] > [  +0.023886] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0501 04:16:59.363091    4352 command_runner.go:130] > [  +0.000005] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0501 04:16:59.363091    4352 command_runner.go:130] > [  +0.000001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0501 04:16:59.363091    4352 command_runner.go:130] > [  +0.057986] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0501 04:16:59.363091    4352 command_runner.go:130] > [  +0.022012] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0501 04:16:59.363192    4352 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0501 04:16:59.363192    4352 command_runner.go:130] > [  +5.683380] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0501 04:16:59.363192    4352 command_runner.go:130] > [May 1 04:14] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0501 04:16:59.363192    4352 command_runner.go:130] > [  +1.282885] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0501 04:16:59.363192    4352 command_runner.go:130] > [  +7.215175] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0501 04:16:59.363263    4352 command_runner.go:130] > [  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0501 04:16:59.363263    4352 command_runner.go:130] > [  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	I0501 04:16:59.363263    4352 command_runner.go:130] > [ +49.815364] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	I0501 04:16:59.363263    4352 command_runner.go:130] > [  +0.200985] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	I0501 04:16:59.363263    4352 command_runner.go:130] > [May 1 04:15] systemd-fstab-generator[972]: Ignoring "noauto" option for root device
	I0501 04:16:59.363263    4352 command_runner.go:130] > [  +0.127967] kauditd_printk_skb: 73 callbacks suppressed
	I0501 04:16:59.363263    4352 command_runner.go:130] > [  +0.582263] systemd-fstab-generator[1012]: Ignoring "noauto" option for root device
	I0501 04:16:59.363366    4352 command_runner.go:130] > [  +0.225161] systemd-fstab-generator[1023]: Ignoring "noauto" option for root device
	I0501 04:16:59.363366    4352 command_runner.go:130] > [  +0.250911] systemd-fstab-generator[1037]: Ignoring "noauto" option for root device
	I0501 04:16:59.363366    4352 command_runner.go:130] > [  +3.012463] systemd-fstab-generator[1226]: Ignoring "noauto" option for root device
	I0501 04:16:59.363366    4352 command_runner.go:130] > [  +0.224116] systemd-fstab-generator[1238]: Ignoring "noauto" option for root device
	I0501 04:16:59.363366    4352 command_runner.go:130] > [  +0.208959] systemd-fstab-generator[1250]: Ignoring "noauto" option for root device
	I0501 04:16:59.363366    4352 command_runner.go:130] > [  +0.295566] systemd-fstab-generator[1265]: Ignoring "noauto" option for root device
	I0501 04:16:59.363445    4352 command_runner.go:130] > [  +0.942002] systemd-fstab-generator[1376]: Ignoring "noauto" option for root device
	I0501 04:16:59.363445    4352 command_runner.go:130] > [  +0.104482] kauditd_printk_skb: 205 callbacks suppressed
	I0501 04:16:59.363445    4352 command_runner.go:130] > [  +4.196160] systemd-fstab-generator[1518]: Ignoring "noauto" option for root device
	I0501 04:16:59.363445    4352 command_runner.go:130] > [  +1.305789] kauditd_printk_skb: 44 callbacks suppressed
	I0501 04:16:59.363445    4352 command_runner.go:130] > [  +5.930267] kauditd_printk_skb: 30 callbacks suppressed
	I0501 04:16:59.363508    4352 command_runner.go:130] > [  +4.234940] systemd-fstab-generator[2337]: Ignoring "noauto" option for root device
	I0501 04:16:59.363508    4352 command_runner.go:130] > [  +7.700271] kauditd_printk_skb: 70 callbacks suppressed
	I0501 04:16:59.365198    4352 logs.go:123] Gathering logs for describe nodes ...
	I0501 04:16:59.365198    4352 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0501 04:16:59.583417    4352 command_runner.go:130] > Name:               multinode-289800
	I0501 04:16:59.583466    4352 command_runner.go:130] > Roles:              control-plane
	I0501 04:16:59.583466    4352 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0501 04:16:59.583586    4352 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0501 04:16:59.583586    4352 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0501 04:16:59.583586    4352 command_runner.go:130] >                     kubernetes.io/hostname=multinode-289800
	I0501 04:16:59.583586    4352 command_runner.go:130] >                     kubernetes.io/os=linux
	I0501 04:16:59.583642    4352 command_runner.go:130] >                     minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	I0501 04:16:59.583642    4352 command_runner.go:130] >                     minikube.k8s.io/name=multinode-289800
	I0501 04:16:59.583642    4352 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0501 04:16:59.583693    4352 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_05_01T03_52_17_0700
	I0501 04:16:59.583693    4352 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0
	I0501 04:16:59.583693    4352 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0501 04:16:59.583772    4352 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0501 04:16:59.583772    4352 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0501 04:16:59.583772    4352 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0501 04:16:59.583772    4352 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0501 04:16:59.583825    4352 command_runner.go:130] > CreationTimestamp:  Wed, 01 May 2024 03:52:12 +0000
	I0501 04:16:59.583825    4352 command_runner.go:130] > Taints:             <none>
	I0501 04:16:59.583825    4352 command_runner.go:130] > Unschedulable:      false
	I0501 04:16:59.583825    4352 command_runner.go:130] > Lease:
	I0501 04:16:59.583825    4352 command_runner.go:130] >   HolderIdentity:  multinode-289800
	I0501 04:16:59.583825    4352 command_runner.go:130] >   AcquireTime:     <unset>
	I0501 04:16:59.583887    4352 command_runner.go:130] >   RenewTime:       Wed, 01 May 2024 04:16:53 +0000
	I0501 04:16:59.583887    4352 command_runner.go:130] > Conditions:
	I0501 04:16:59.583887    4352 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0501 04:16:59.583887    4352 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0501 04:16:59.583977    4352 command_runner.go:130] >   MemoryPressure   False   Wed, 01 May 2024 04:16:16 +0000   Wed, 01 May 2024 03:52:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0501 04:16:59.583977    4352 command_runner.go:130] >   DiskPressure     False   Wed, 01 May 2024 04:16:16 +0000   Wed, 01 May 2024 03:52:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0501 04:16:59.584008    4352 command_runner.go:130] >   PIDPressure      False   Wed, 01 May 2024 04:16:16 +0000   Wed, 01 May 2024 03:52:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0501 04:16:59.584008    4352 command_runner.go:130] >   Ready            True    Wed, 01 May 2024 04:16:16 +0000   Wed, 01 May 2024 04:16:16 +0000   KubeletReady                 kubelet is posting ready status
	I0501 04:16:59.584008    4352 command_runner.go:130] > Addresses:
	I0501 04:16:59.584008    4352 command_runner.go:130] >   InternalIP:  172.28.209.199
	I0501 04:16:59.584008    4352 command_runner.go:130] >   Hostname:    multinode-289800
	I0501 04:16:59.584101    4352 command_runner.go:130] > Capacity:
	I0501 04:16:59.584101    4352 command_runner.go:130] >   cpu:                2
	I0501 04:16:59.584101    4352 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0501 04:16:59.584101    4352 command_runner.go:130] >   hugepages-2Mi:      0
	I0501 04:16:59.584101    4352 command_runner.go:130] >   memory:             2164264Ki
	I0501 04:16:59.584151    4352 command_runner.go:130] >   pods:               110
	I0501 04:16:59.584151    4352 command_runner.go:130] > Allocatable:
	I0501 04:16:59.584151    4352 command_runner.go:130] >   cpu:                2
	I0501 04:16:59.584151    4352 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0501 04:16:59.584151    4352 command_runner.go:130] >   hugepages-2Mi:      0
	I0501 04:16:59.584195    4352 command_runner.go:130] >   memory:             2164264Ki
	I0501 04:16:59.584195    4352 command_runner.go:130] >   pods:               110
	I0501 04:16:59.584195    4352 command_runner.go:130] > System Info:
	I0501 04:16:59.584195    4352 command_runner.go:130] >   Machine ID:                 f135d6c1a75448b6b1c169fdf59297ca
	I0501 04:16:59.584195    4352 command_runner.go:130] >   System UUID:                3951d3b5-ddd4-174a-8cfe-7f86ac2b780b
	I0501 04:16:59.584246    4352 command_runner.go:130] >   Boot ID:                    e7d6b770-0c88-4d74-8b75-d55dec0d45be
	I0501 04:16:59.584246    4352 command_runner.go:130] >   Kernel Version:             5.10.207
	I0501 04:16:59.584246    4352 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0501 04:16:59.584246    4352 command_runner.go:130] >   Operating System:           linux
	I0501 04:16:59.584311    4352 command_runner.go:130] >   Architecture:               amd64
	I0501 04:16:59.584311    4352 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0501 04:16:59.584311    4352 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0501 04:16:59.584311    4352 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0501 04:16:59.584311    4352 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0501 04:16:59.584311    4352 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0501 04:16:59.584370    4352 command_runner.go:130] > Non-terminated Pods:          (10 in total)
	I0501 04:16:59.584370    4352 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0501 04:16:59.584415    4352 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0501 04:16:59.584415    4352 command_runner.go:130] >   default                     busybox-fc5497c4f-cc6mk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0501 04:16:59.584415    4352 command_runner.go:130] >   kube-system                 coredns-7db6d8ff4d-8w9hq                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	I0501 04:16:59.584470    4352 command_runner.go:130] >   kube-system                 coredns-7db6d8ff4d-x9zrw                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	I0501 04:16:59.584568    4352 command_runner.go:130] >   kube-system                 etcd-multinode-289800                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         77s
	I0501 04:16:59.584568    4352 command_runner.go:130] >   kube-system                 kindnet-vcxkr                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	I0501 04:16:59.584568    4352 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-289800             250m (12%)    0 (0%)      0 (0%)           0 (0%)         77s
	I0501 04:16:59.584568    4352 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-289800    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I0501 04:16:59.584568    4352 command_runner.go:130] >   kube-system                 kube-proxy-bp9zx                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I0501 04:16:59.584568    4352 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-289800             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I0501 04:16:59.584568    4352 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I0501 04:16:59.584568    4352 command_runner.go:130] > Allocated resources:
	I0501 04:16:59.584568    4352 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0501 04:16:59.584568    4352 command_runner.go:130] >   Resource           Requests     Limits
	I0501 04:16:59.584568    4352 command_runner.go:130] >   --------           --------     ------
	I0501 04:16:59.584568    4352 command_runner.go:130] >   cpu                950m (47%)   100m (5%)
	I0501 04:16:59.584568    4352 command_runner.go:130] >   memory             290Mi (13%)  390Mi (18%)
	I0501 04:16:59.584568    4352 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0501 04:16:59.584568    4352 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0501 04:16:59.584568    4352 command_runner.go:130] > Events:
	I0501 04:16:59.584568    4352 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0501 04:16:59.584568    4352 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0501 04:16:59.584568    4352 command_runner.go:130] >   Normal  Starting                 24m                kube-proxy       
	I0501 04:16:59.584568    4352 command_runner.go:130] >   Normal  Starting                 74s                kube-proxy       
	I0501 04:16:59.584568    4352 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m                kubelet          Node multinode-289800 status is now: NodeHasSufficientMemory
	I0501 04:16:59.584568    4352 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m                kubelet          Node multinode-289800 status is now: NodeHasSufficientMemory
	I0501 04:16:59.584568    4352 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m                kubelet          Node multinode-289800 status is now: NodeHasNoDiskPressure
	I0501 04:16:59.584568    4352 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m                kubelet          Node multinode-289800 status is now: NodeHasSufficientPID
	I0501 04:16:59.584568    4352 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0501 04:16:59.584568    4352 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I0501 04:16:59.585120    4352 command_runner.go:130] >   Normal  RegisteredNode           24m                node-controller  Node multinode-289800 event: Registered Node multinode-289800 in Controller
	I0501 04:16:59.585120    4352 command_runner.go:130] >   Normal  NodeReady                24m                kubelet          Node multinode-289800 status is now: NodeReady
	I0501 04:16:59.585188    4352 command_runner.go:130] >   Normal  Starting                 83s                kubelet          Starting kubelet.
	I0501 04:16:59.585188    4352 command_runner.go:130] >   Normal  NodeHasSufficientMemory  82s (x8 over 83s)  kubelet          Node multinode-289800 status is now: NodeHasSufficientMemory
	I0501 04:16:59.585188    4352 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    82s (x8 over 83s)  kubelet          Node multinode-289800 status is now: NodeHasNoDiskPressure
	I0501 04:16:59.585188    4352 command_runner.go:130] >   Normal  NodeHasSufficientPID     82s (x7 over 83s)  kubelet          Node multinode-289800 status is now: NodeHasSufficientPID
	I0501 04:16:59.585188    4352 command_runner.go:130] >   Normal  NodeAllocatableEnforced  82s                kubelet          Updated Node Allocatable limit across pods
	I0501 04:16:59.585188    4352 command_runner.go:130] >   Normal  RegisteredNode           64s                node-controller  Node multinode-289800 event: Registered Node multinode-289800 in Controller
	I0501 04:16:59.585188    4352 command_runner.go:130] > Name:               multinode-289800-m02
	I0501 04:16:59.585188    4352 command_runner.go:130] > Roles:              <none>
	I0501 04:16:59.585188    4352 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0501 04:16:59.585188    4352 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0501 04:16:59.585188    4352 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0501 04:16:59.585188    4352 command_runner.go:130] >                     kubernetes.io/hostname=multinode-289800-m02
	I0501 04:16:59.585349    4352 command_runner.go:130] >                     kubernetes.io/os=linux
	I0501 04:16:59.585349    4352 command_runner.go:130] >                     minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	I0501 04:16:59.585415    4352 command_runner.go:130] >                     minikube.k8s.io/name=multinode-289800
	I0501 04:16:59.585415    4352 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0501 04:16:59.585459    4352 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_05_01T03_55_27_0700
	I0501 04:16:59.585459    4352 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0
	I0501 04:16:59.585459    4352 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0501 04:16:59.585518    4352 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0501 04:16:59.585518    4352 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0501 04:16:59.585518    4352 command_runner.go:130] > CreationTimestamp:  Wed, 01 May 2024 03:55:27 +0000
	I0501 04:16:59.585573    4352 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0501 04:16:59.585573    4352 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0501 04:16:59.585637    4352 command_runner.go:130] > Unschedulable:      false
	I0501 04:16:59.585688    4352 command_runner.go:130] > Lease:
	I0501 04:16:59.585688    4352 command_runner.go:130] >   HolderIdentity:  multinode-289800-m02
	I0501 04:16:59.585688    4352 command_runner.go:130] >   AcquireTime:     <unset>
	I0501 04:16:59.585688    4352 command_runner.go:130] >   RenewTime:       Wed, 01 May 2024 04:12:29 +0000
	I0501 04:16:59.585688    4352 command_runner.go:130] > Conditions:
	I0501 04:16:59.585688    4352 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0501 04:16:59.585795    4352 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0501 04:16:59.585795    4352 command_runner.go:130] >   MemoryPressure   Unknown   Wed, 01 May 2024 04:11:48 +0000   Wed, 01 May 2024 04:16:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0501 04:16:59.585795    4352 command_runner.go:130] >   DiskPressure     Unknown   Wed, 01 May 2024 04:11:48 +0000   Wed, 01 May 2024 04:16:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0501 04:16:59.585795    4352 command_runner.go:130] >   PIDPressure      Unknown   Wed, 01 May 2024 04:11:48 +0000   Wed, 01 May 2024 04:16:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0501 04:16:59.585795    4352 command_runner.go:130] >   Ready            Unknown   Wed, 01 May 2024 04:11:48 +0000   Wed, 01 May 2024 04:16:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0501 04:16:59.585795    4352 command_runner.go:130] > Addresses:
	I0501 04:16:59.585897    4352 command_runner.go:130] >   InternalIP:  172.28.219.162
	I0501 04:16:59.585897    4352 command_runner.go:130] >   Hostname:    multinode-289800-m02
	I0501 04:16:59.585897    4352 command_runner.go:130] > Capacity:
	I0501 04:16:59.585897    4352 command_runner.go:130] >   cpu:                2
	I0501 04:16:59.585897    4352 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0501 04:16:59.585897    4352 command_runner.go:130] >   hugepages-2Mi:      0
	I0501 04:16:59.585897    4352 command_runner.go:130] >   memory:             2164264Ki
	I0501 04:16:59.585955    4352 command_runner.go:130] >   pods:               110
	I0501 04:16:59.585955    4352 command_runner.go:130] > Allocatable:
	I0501 04:16:59.585955    4352 command_runner.go:130] >   cpu:                2
	I0501 04:16:59.585955    4352 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0501 04:16:59.586010    4352 command_runner.go:130] >   hugepages-2Mi:      0
	I0501 04:16:59.586010    4352 command_runner.go:130] >   memory:             2164264Ki
	I0501 04:16:59.586010    4352 command_runner.go:130] >   pods:               110
	I0501 04:16:59.586049    4352 command_runner.go:130] > System Info:
	I0501 04:16:59.586049    4352 command_runner.go:130] >   Machine ID:                 076f7b95819747b9b94c7306ec3a1144
	I0501 04:16:59.586069    4352 command_runner.go:130] >   System UUID:                a38b9d92-b32b-ca41-91ed-de4d374d0e70
	I0501 04:16:59.586069    4352 command_runner.go:130] >   Boot ID:                    c2ea27f4-2800-46b2-ab1f-c82bf0989c34
	I0501 04:16:59.586069    4352 command_runner.go:130] >   Kernel Version:             5.10.207
	I0501 04:16:59.586115    4352 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0501 04:16:59.586115    4352 command_runner.go:130] >   Operating System:           linux
	I0501 04:16:59.586115    4352 command_runner.go:130] >   Architecture:               amd64
	I0501 04:16:59.586156    4352 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0501 04:16:59.586156    4352 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0501 04:16:59.586156    4352 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0501 04:16:59.586156    4352 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0501 04:16:59.586156    4352 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0501 04:16:59.586156    4352 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0501 04:16:59.586224    4352 command_runner.go:130] >   Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0501 04:16:59.586224    4352 command_runner.go:130] >   ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	I0501 04:16:59.586224    4352 command_runner.go:130] >   default                     busybox-fc5497c4f-tbxxx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0501 04:16:59.586224    4352 command_runner.go:130] >   kube-system                 kindnet-gzz7p              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	I0501 04:16:59.586283    4352 command_runner.go:130] >   kube-system                 kube-proxy-rlzp8           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0501 04:16:59.586283    4352 command_runner.go:130] > Allocated resources:
	I0501 04:16:59.586283    4352 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0501 04:16:59.586283    4352 command_runner.go:130] >   Resource           Requests   Limits
	I0501 04:16:59.586283    4352 command_runner.go:130] >   --------           --------   ------
	I0501 04:16:59.586283    4352 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0501 04:16:59.586359    4352 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0501 04:16:59.586359    4352 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0501 04:16:59.586359    4352 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0501 04:16:59.586359    4352 command_runner.go:130] > Events:
	I0501 04:16:59.586359    4352 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0501 04:16:59.586415    4352 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0501 04:16:59.586415    4352 command_runner.go:130] >   Normal  Starting                 21m                kube-proxy       
	I0501 04:16:59.586415    4352 command_runner.go:130] >   Normal  NodeHasSufficientMemory  21m (x2 over 21m)  kubelet          Node multinode-289800-m02 status is now: NodeHasSufficientMemory
	I0501 04:16:59.586415    4352 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    21m (x2 over 21m)  kubelet          Node multinode-289800-m02 status is now: NodeHasNoDiskPressure
	I0501 04:16:59.586475    4352 command_runner.go:130] >   Normal  NodeHasSufficientPID     21m (x2 over 21m)  kubelet          Node multinode-289800-m02 status is now: NodeHasSufficientPID
	I0501 04:16:59.586475    4352 command_runner.go:130] >   Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	I0501 04:16:59.586532    4352 command_runner.go:130] >   Normal  RegisteredNode           21m                node-controller  Node multinode-289800-m02 event: Registered Node multinode-289800-m02 in Controller
	I0501 04:16:59.586532    4352 command_runner.go:130] >   Normal  NodeReady                21m                kubelet          Node multinode-289800-m02 status is now: NodeReady
	I0501 04:16:59.586532    4352 command_runner.go:130] >   Normal  RegisteredNode           64s                node-controller  Node multinode-289800-m02 event: Registered Node multinode-289800-m02 in Controller
	I0501 04:16:59.586590    4352 command_runner.go:130] >   Normal  NodeNotReady             24s                node-controller  Node multinode-289800-m02 status is now: NodeNotReady
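The describe output above for multinode-289800-m02 tells the whole story of this failure mode: all four conditions (MemoryPressure, DiskPressure, PIDPressure, Ready) flip to Unknown with reason NodeStatusUnknown at 04:16:35 because the kubelet stopped posting status, and the node-controller then records a NodeNotReady event. A minimal sketch for pulling the same view by hand, assuming minikube created a kubeconfig context named after the multinode-289800 profile (the node name is taken from the log above):

    # Hypothetical reproduction of the describe output captured above
    kubectl --context multinode-289800 describe node multinode-289800-m02
    # Quick check of just the Ready condition
    kubectl --context multinode-289800 get node multinode-289800-m02 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'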
	I0501 04:16:59.586590    4352 command_runner.go:130] > Name:               multinode-289800-m03
	I0501 04:16:59.586590    4352 command_runner.go:130] > Roles:              <none>
	I0501 04:16:59.586590    4352 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0501 04:16:59.586643    4352 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0501 04:16:59.586643    4352 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0501 04:16:59.586643    4352 command_runner.go:130] >                     kubernetes.io/hostname=multinode-289800-m03
	I0501 04:16:59.586701    4352 command_runner.go:130] >                     kubernetes.io/os=linux
	I0501 04:16:59.586701    4352 command_runner.go:130] >                     minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	I0501 04:16:59.586701    4352 command_runner.go:130] >                     minikube.k8s.io/name=multinode-289800
	I0501 04:16:59.586701    4352 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0501 04:16:59.586701    4352 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_05_01T04_11_04_0700
	I0501 04:16:59.586756    4352 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0
	I0501 04:16:59.586756    4352 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0501 04:16:59.586756    4352 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0501 04:16:59.586756    4352 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0501 04:16:59.586814    4352 command_runner.go:130] > CreationTimestamp:  Wed, 01 May 2024 04:11:04 +0000
	I0501 04:16:59.586814    4352 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0501 04:16:59.586814    4352 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0501 04:16:59.586814    4352 command_runner.go:130] > Unschedulable:      false
	I0501 04:16:59.586814    4352 command_runner.go:130] > Lease:
	I0501 04:16:59.586868    4352 command_runner.go:130] >   HolderIdentity:  multinode-289800-m03
	I0501 04:16:59.586868    4352 command_runner.go:130] >   AcquireTime:     <unset>
	I0501 04:16:59.586868    4352 command_runner.go:130] >   RenewTime:       Wed, 01 May 2024 04:12:05 +0000
	I0501 04:16:59.586868    4352 command_runner.go:130] > Conditions:
	I0501 04:16:59.586868    4352 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0501 04:16:59.586924    4352 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0501 04:16:59.586924    4352 command_runner.go:130] >   MemoryPressure   Unknown   Wed, 01 May 2024 04:11:11 +0000   Wed, 01 May 2024 04:12:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0501 04:16:59.586978    4352 command_runner.go:130] >   DiskPressure     Unknown   Wed, 01 May 2024 04:11:11 +0000   Wed, 01 May 2024 04:12:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0501 04:16:59.586978    4352 command_runner.go:130] >   PIDPressure      Unknown   Wed, 01 May 2024 04:11:11 +0000   Wed, 01 May 2024 04:12:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0501 04:16:59.586978    4352 command_runner.go:130] >   Ready            Unknown   Wed, 01 May 2024 04:11:11 +0000   Wed, 01 May 2024 04:12:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0501 04:16:59.587035    4352 command_runner.go:130] > Addresses:
	I0501 04:16:59.587035    4352 command_runner.go:130] >   InternalIP:  172.28.223.145
	I0501 04:16:59.587035    4352 command_runner.go:130] >   Hostname:    multinode-289800-m03
	I0501 04:16:59.587035    4352 command_runner.go:130] > Capacity:
	I0501 04:16:59.587035    4352 command_runner.go:130] >   cpu:                2
	I0501 04:16:59.587035    4352 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0501 04:16:59.587090    4352 command_runner.go:130] >   hugepages-2Mi:      0
	I0501 04:16:59.587090    4352 command_runner.go:130] >   memory:             2164264Ki
	I0501 04:16:59.587090    4352 command_runner.go:130] >   pods:               110
	I0501 04:16:59.587090    4352 command_runner.go:130] > Allocatable:
	I0501 04:16:59.587090    4352 command_runner.go:130] >   cpu:                2
	I0501 04:16:59.587148    4352 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0501 04:16:59.587148    4352 command_runner.go:130] >   hugepages-2Mi:      0
	I0501 04:16:59.587148    4352 command_runner.go:130] >   memory:             2164264Ki
	I0501 04:16:59.587148    4352 command_runner.go:130] >   pods:               110
	I0501 04:16:59.587148    4352 command_runner.go:130] > System Info:
	I0501 04:16:59.587216    4352 command_runner.go:130] >   Machine ID:                 7516764892cf41608a001e00e0cc7bb8
	I0501 04:16:59.587216    4352 command_runner.go:130] >   System UUID:                dc77ee49-027d-ec48-b8b1-154ba9e0a06a
	I0501 04:16:59.587216    4352 command_runner.go:130] >   Boot ID:                    bc9f9fd7-7b85-42f6-abac-952a5e1b37b8
	I0501 04:16:59.587216    4352 command_runner.go:130] >   Kernel Version:             5.10.207
	I0501 04:16:59.587216    4352 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0501 04:16:59.587216    4352 command_runner.go:130] >   Operating System:           linux
	I0501 04:16:59.587278    4352 command_runner.go:130] >   Architecture:               amd64
	I0501 04:16:59.587278    4352 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0501 04:16:59.587278    4352 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0501 04:16:59.587278    4352 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0501 04:16:59.587278    4352 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0501 04:16:59.587330    4352 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0501 04:16:59.587330    4352 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0501 04:16:59.587330    4352 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0501 04:16:59.587330    4352 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0501 04:16:59.587330    4352 command_runner.go:130] >   kube-system                 kindnet-4m5vg       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I0501 04:16:59.587432    4352 command_runner.go:130] >   kube-system                 kube-proxy-g8mbm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I0501 04:16:59.587432    4352 command_runner.go:130] > Allocated resources:
	I0501 04:16:59.587432    4352 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0501 04:16:59.587432    4352 command_runner.go:130] >   Resource           Requests   Limits
	I0501 04:16:59.587432    4352 command_runner.go:130] >   --------           --------   ------
	I0501 04:16:59.587432    4352 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0501 04:16:59.587487    4352 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0501 04:16:59.587487    4352 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0501 04:16:59.587487    4352 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0501 04:16:59.587487    4352 command_runner.go:130] > Events:
	I0501 04:16:59.587550    4352 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0501 04:16:59.587607    4352 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0501 04:16:59.587607    4352 command_runner.go:130] >   Normal  Starting                 16m                    kube-proxy       
	I0501 04:16:59.587607    4352 command_runner.go:130] >   Normal  Starting                 5m52s                  kube-proxy       
	I0501 04:16:59.587607    4352 command_runner.go:130] >   Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	I0501 04:16:59.587740    4352 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x2 over 16m)      kubelet          Node multinode-289800-m03 status is now: NodeHasSufficientMemory
	I0501 04:16:59.587740    4352 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x2 over 16m)      kubelet          Node multinode-289800-m03 status is now: NodeHasNoDiskPressure
	I0501 04:16:59.587740    4352 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x2 over 16m)      kubelet          Node multinode-289800-m03 status is now: NodeHasSufficientPID
	I0501 04:16:59.587804    4352 command_runner.go:130] >   Normal  NodeReady                16m                    kubelet          Node multinode-289800-m03 status is now: NodeReady
	I0501 04:16:59.587804    4352 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m55s (x2 over 5m55s)  kubelet          Node multinode-289800-m03 status is now: NodeHasSufficientMemory
	I0501 04:16:59.587804    4352 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m55s (x2 over 5m55s)  kubelet          Node multinode-289800-m03 status is now: NodeHasNoDiskPressure
	I0501 04:16:59.587857    4352 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m55s (x2 over 5m55s)  kubelet          Node multinode-289800-m03 status is now: NodeHasSufficientPID
	I0501 04:16:59.587857    4352 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m55s                  kubelet          Updated Node Allocatable limit across pods
	I0501 04:16:59.587857    4352 command_runner.go:130] >   Normal  RegisteredNode           5m50s                  node-controller  Node multinode-289800-m03 event: Registered Node multinode-289800-m03 in Controller
	I0501 04:16:59.587916    4352 command_runner.go:130] >   Normal  NodeReady                5m48s                  kubelet          Node multinode-289800-m03 status is now: NodeReady
	I0501 04:16:59.587916    4352 command_runner.go:130] >   Normal  NodeNotReady             4m10s                  node-controller  Node multinode-289800-m03 status is now: NodeNotReady
	I0501 04:16:59.587916    4352 command_runner.go:130] >   Normal  RegisteredNode           64s                    node-controller  Node multinode-289800-m03 event: Registered Node multinode-289800-m03 in Controller
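multinode-289800-m03 shows the same NodeStatusUnknown pattern, plus the node.kubernetes.io/unreachable:NoExecute and :NoSchedule taints that the node lifecycle controller places on a node whose Ready condition is Unknown; pods without a matching toleration are evicted after their toleration period. A sketch for inspecting those taints directly, assuming the same context name as above:

    # Hypothetical check of the controller-applied taints shown in the
    # describe output above
    kubectl --context multinode-289800 get node multinode-289800-m03 \
      -o jsonpath='{.spec.taints}'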
	I0501 04:16:59.599470    4352 logs.go:123] Gathering logs for kube-scheduler [06f1f84bfde1] ...
	I0501 04:16:59.599470    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f1f84bfde1"
	I0501 04:16:59.629820    4352 command_runner.go:130] ! I0501 03:52:10.476758       1 serving.go:380] Generated self-signed cert in-memory
	I0501 04:16:59.629820    4352 command_runner.go:130] ! W0501 03:52:12.175400       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0501 04:16:59.630769    4352 command_runner.go:130] ! W0501 03:52:12.175551       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0501 04:16:59.630848    4352 command_runner.go:130] ! W0501 03:52:12.175587       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0501 04:16:59.630888    4352 command_runner.go:130] ! W0501 03:52:12.175678       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0501 04:16:59.630912    4352 command_runner.go:130] ! I0501 03:52:12.246151       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0501 04:16:59.630934    4352 command_runner.go:130] ! I0501 03:52:12.246312       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:16:59.630934    4352 command_runner.go:130] ! I0501 03:52:12.251800       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0501 04:16:59.630934    4352 command_runner.go:130] ! I0501 03:52:12.252170       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0501 04:16:59.630976    4352 command_runner.go:130] ! I0501 03:52:12.253709       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0501 04:16:59.630976    4352 command_runner.go:130] ! I0501 03:52:12.254160       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0501 04:16:59.630976    4352 command_runner.go:130] ! W0501 03:52:12.257352       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0501 04:16:59.631041    4352 command_runner.go:130] ! E0501 03:52:12.257411       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0501 04:16:59.631100    4352 command_runner.go:130] ! W0501 03:52:12.261549       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0501 04:16:59.631124    4352 command_runner.go:130] ! E0501 03:52:12.261670       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0501 04:16:59.631152    4352 command_runner.go:130] ! W0501 03:52:12.263856       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0501 04:16:59.631152    4352 command_runner.go:130] ! E0501 03:52:12.263906       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0501 04:16:59.631152    4352 command_runner.go:130] ! W0501 03:52:12.270038       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:59.631152    4352 command_runner.go:130] ! E0501 03:52:12.270597       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:59.631152    4352 command_runner.go:130] ! W0501 03:52:12.271080       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:59.631152    4352 command_runner.go:130] ! E0501 03:52:12.271309       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:59.631690    4352 command_runner.go:130] ! W0501 03:52:12.271808       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0501 04:16:59.631690    4352 command_runner.go:130] ! E0501 03:52:12.272098       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0501 04:16:59.631785    4352 command_runner.go:130] ! W0501 03:52:12.272396       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0501 04:16:59.631785    4352 command_runner.go:130] ! W0501 03:52:12.273177       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0501 04:16:59.631785    4352 command_runner.go:130] ! E0501 03:52:12.273396       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0501 04:16:59.631905    4352 command_runner.go:130] ! W0501 03:52:12.273765       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0501 04:16:59.631905    4352 command_runner.go:130] ! E0501 03:52:12.273964       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0501 04:16:59.632039    4352 command_runner.go:130] ! W0501 03:52:12.274273       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0501 04:16:59.632039    4352 command_runner.go:130] ! E0501 03:52:12.274741       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0501 04:16:59.632096    4352 command_runner.go:130] ! E0501 03:52:12.275083       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0501 04:16:59.632141    4352 command_runner.go:130] ! W0501 03:52:12.275448       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:59.632192    4352 command_runner.go:130] ! E0501 03:52:12.275752       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:59.632245    4352 command_runner.go:130] ! W0501 03:52:12.276841       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0501 04:16:59.632295    4352 command_runner.go:130] ! E0501 03:52:12.278071       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0501 04:16:59.632348    4352 command_runner.go:130] ! W0501 03:52:12.277438       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0501 04:16:59.632447    4352 command_runner.go:130] ! E0501 03:52:12.278555       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0501 04:16:59.632476    4352 command_runner.go:130] ! W0501 03:52:12.279824       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:59.632476    4352 command_runner.go:130] ! E0501 03:52:12.280326       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:59.632476    4352 command_runner.go:130] ! W0501 03:52:12.280272       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0501 04:16:59.632476    4352 command_runner.go:130] ! E0501 03:52:12.280893       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0501 04:16:59.632476    4352 command_runner.go:130] ! W0501 03:52:13.100723       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:59.632476    4352 command_runner.go:130] ! E0501 03:52:13.101238       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:59.632476    4352 command_runner.go:130] ! W0501 03:52:13.102451       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0501 04:16:59.632476    4352 command_runner.go:130] ! E0501 03:52:13.102804       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0501 04:16:59.632476    4352 command_runner.go:130] ! W0501 03:52:13.188414       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0501 04:16:59.632476    4352 command_runner.go:130] ! E0501 03:52:13.188662       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0501 04:16:59.632476    4352 command_runner.go:130] ! W0501 03:52:13.194299       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0501 04:16:59.632476    4352 command_runner.go:130] ! E0501 03:52:13.194526       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0501 04:16:59.632476    4352 command_runner.go:130] ! W0501 03:52:13.234721       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0501 04:16:59.632476    4352 command_runner.go:130] ! E0501 03:52:13.235310       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0501 04:16:59.632476    4352 command_runner.go:130] ! W0501 03:52:13.292208       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0501 04:16:59.632476    4352 command_runner.go:130] ! E0501 03:52:13.292830       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0501 04:16:59.632996    4352 command_runner.go:130] ! W0501 03:52:13.389881       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0501 04:16:59.633046    4352 command_runner.go:130] ! E0501 03:52:13.390057       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0501 04:16:59.633046    4352 command_runner.go:130] ! W0501 03:52:13.433548       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0501 04:16:59.633046    4352 command_runner.go:130] ! E0501 03:52:13.433622       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0501 04:16:59.633046    4352 command_runner.go:130] ! W0501 03:52:13.511617       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:59.633201    4352 command_runner.go:130] ! E0501 03:52:13.511761       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:59.633228    4352 command_runner.go:130] ! W0501 03:52:13.522760       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:59.633228    4352 command_runner.go:130] ! E0501 03:52:13.522812       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:59.633228    4352 command_runner.go:130] ! W0501 03:52:13.723200       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0501 04:16:59.633228    4352 command_runner.go:130] ! E0501 03:52:13.723365       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0501 04:16:59.633228    4352 command_runner.go:130] ! W0501 03:52:13.767195       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0501 04:16:59.633228    4352 command_runner.go:130] ! E0501 03:52:13.767262       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0501 04:16:59.633228    4352 command_runner.go:130] ! W0501 03:52:13.799936       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:59.633759    4352 command_runner.go:130] ! E0501 03:52:13.799967       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:59.633807    4352 command_runner.go:130] ! W0501 03:52:13.840187       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0501 04:16:59.633874    4352 command_runner.go:130] ! E0501 03:52:13.840304       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0501 04:16:59.633874    4352 command_runner.go:130] ! W0501 03:52:13.853401       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0501 04:16:59.633874    4352 command_runner.go:130] ! E0501 03:52:13.853454       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0501 04:16:59.633930    4352 command_runner.go:130] ! I0501 03:52:16.553388       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0501 04:16:59.633953    4352 command_runner.go:130] ! E0501 04:13:09.401188       1 run.go:74] "command failed" err="finished without leader elect"
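The kube-scheduler tail above is the normal startup sequence: an in-memory self-signed serving cert, a burst of RBAC "forbidden" list/watch warnings while the apiserver is still establishing the scheduler's permissions, then "Caches are synced" at 03:52:16. The final line at 04:13:09 marks that scheduler container exiting during the cluster restart, which is why a second scheduler container (eaf69fce5ee3, gathered below) exists. The log collector fetches these tails over SSH; a sketch of the equivalent manual command, assuming the profile name and using the container ID from the log above:

    # Hypothetical manual equivalent of the ssh_runner invocation above
    out/minikube-windows-amd64.exe -p multinode-289800 ssh -- \
      docker logs --tail 400 06f1f84bfde1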
	I0501 04:16:59.645635    4352 logs.go:123] Gathering logs for coredns [8a0208aeafcf] ...
	I0501 04:16:59.645635    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a0208aeafcf"
	I0501 04:16:59.676846    4352 command_runner.go:130] > .:53
	I0501 04:16:59.676914    4352 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 93f4fe825695a41d5751760d66496ca9593f0bf8b9ec4239a6fbc33c30b6f070cfe5a066c5977052ab0ea80abbafa6083758e385108cb741ae48dc4b780ff0f9
	I0501 04:16:59.676914    4352 command_runner.go:130] > CoreDNS-1.11.1
	I0501 04:16:59.676914    4352 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0501 04:16:59.676914    4352 command_runner.go:130] > [INFO] 127.0.0.1:52159 - 35492 "HINFO IN 5750380281790413371.3552283498234348593. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.042351696s
	I0501 04:16:59.677647    4352 logs.go:123] Gathering logs for coredns [15c4496e3a9f] ...
	I0501 04:16:59.677749    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15c4496e3a9f"
	I0501 04:16:59.713229    4352 command_runner.go:130] > .:53
	I0501 04:16:59.713339    4352 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 93f4fe825695a41d5751760d66496ca9593f0bf8b9ec4239a6fbc33c30b6f070cfe5a066c5977052ab0ea80abbafa6083758e385108cb741ae48dc4b780ff0f9
	I0501 04:16:59.713339    4352 command_runner.go:130] > CoreDNS-1.11.1
	I0501 04:16:59.713339    4352 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0501 04:16:59.713339    4352 command_runner.go:130] > [INFO] 127.0.0.1:39552 - 50904 "HINFO IN 5304382971668517624.9064195615153089880. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.182051644s
	I0501 04:16:59.713568    4352 command_runner.go:130] > [INFO] 10.244.0.4:36718 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000271601s
	I0501 04:16:59.713642    4352 command_runner.go:130] > [INFO] 10.244.0.4:43708 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.179550625s
	I0501 04:16:59.713642    4352 command_runner.go:130] > [INFO] 10.244.1.2:58483 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144401s
	I0501 04:16:59.713642    4352 command_runner.go:130] > [INFO] 10.244.1.2:60628 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.0000807s
	I0501 04:16:59.713642    4352 command_runner.go:130] > [INFO] 10.244.0.4:37023 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.037009067s
	I0501 04:16:59.713642    4352 command_runner.go:130] > [INFO] 10.244.0.4:35134 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000257602s
	I0501 04:16:59.713642    4352 command_runner.go:130] > [INFO] 10.244.0.4:42831 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000330103s
	I0501 04:16:59.713642    4352 command_runner.go:130] > [INFO] 10.244.0.4:35030 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000223102s
	I0501 04:16:59.713642    4352 command_runner.go:130] > [INFO] 10.244.1.2:54088 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000207601s
	I0501 04:16:59.713642    4352 command_runner.go:130] > [INFO] 10.244.1.2:39978 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000230801s
	I0501 04:16:59.713642    4352 command_runner.go:130] > [INFO] 10.244.1.2:55944 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000162801s
	I0501 04:16:59.713642    4352 command_runner.go:130] > [INFO] 10.244.1.2:53350 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000088901s
	I0501 04:16:59.713642    4352 command_runner.go:130] > [INFO] 10.244.0.4:33705 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000251702s
	I0501 04:16:59.713642    4352 command_runner.go:130] > [INFO] 10.244.0.4:58457 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000202201s
	I0501 04:16:59.713642    4352 command_runner.go:130] > [INFO] 10.244.1.2:55547 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117201s
	I0501 04:16:59.713885    4352 command_runner.go:130] > [INFO] 10.244.1.2:52015 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000146501s
	I0501 04:16:59.713885    4352 command_runner.go:130] > [INFO] 10.244.0.4:59703 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000247901s
	I0501 04:16:59.713934    4352 command_runner.go:130] > [INFO] 10.244.0.4:43545 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000196701s
	I0501 04:16:59.713956    4352 command_runner.go:130] > [INFO] 10.244.1.2:36180 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000726s
	I0501 04:16:59.713956    4352 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0501 04:16:59.713956    4352 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0501 04:16:59.715255    4352 logs.go:123] Gathering logs for coredns [3e8d5ff9a9e4] ...
	I0501 04:16:59.715255    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8d5ff9a9e4"
	I0501 04:16:59.747892    4352 command_runner.go:130] > .:53
	I0501 04:16:59.748016    4352 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 93f4fe825695a41d5751760d66496ca9593f0bf8b9ec4239a6fbc33c30b6f070cfe5a066c5977052ab0ea80abbafa6083758e385108cb741ae48dc4b780ff0f9
	I0501 04:16:59.748016    4352 command_runner.go:130] > CoreDNS-1.11.1
	I0501 04:16:59.748016    4352 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0501 04:16:59.748016    4352 command_runner.go:130] > [INFO] 127.0.0.1:47823 - 12804 "HINFO IN 6026210510891441927.5093937837002421400. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.138242746s
	I0501 04:16:59.748016    4352 command_runner.go:130] > [INFO] 10.244.0.4:41822 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.208275106s
	I0501 04:16:59.748185    4352 command_runner.go:130] > [INFO] 10.244.0.4:42126 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.044846324s
	I0501 04:16:59.748185    4352 command_runner.go:130] > [INFO] 10.244.1.2:55497 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000133701s
	I0501 04:16:59.748185    4352 command_runner.go:130] > [INFO] 10.244.1.2:47095 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000068901s
	I0501 04:16:59.748353    4352 command_runner.go:130] > [INFO] 10.244.0.4:34122 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000644805s
	I0501 04:16:59.748353    4352 command_runner.go:130] > [INFO] 10.244.0.4:46878 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000252202s
	I0501 04:16:59.748353    4352 command_runner.go:130] > [INFO] 10.244.0.4:40098 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000136701s
	I0501 04:16:59.748353    4352 command_runner.go:130] > [INFO] 10.244.0.4:35873 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.03321874s
	I0501 04:16:59.748353    4352 command_runner.go:130] > [INFO] 10.244.1.2:36243 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.016690721s
	I0501 04:16:59.748452    4352 command_runner.go:130] > [INFO] 10.244.1.2:38582 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000648s
	I0501 04:16:59.748472    4352 command_runner.go:130] > [INFO] 10.244.1.2:43903 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000106801s
	I0501 04:16:59.748472    4352 command_runner.go:130] > [INFO] 10.244.1.2:34736 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000102201s
	I0501 04:16:59.748472    4352 command_runner.go:130] > [INFO] 10.244.0.4:54471 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000213002s
	I0501 04:16:59.748472    4352 command_runner.go:130] > [INFO] 10.244.0.4:34585 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000266702s
	I0501 04:16:59.748567    4352 command_runner.go:130] > [INFO] 10.244.1.2:55135 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142801s
	I0501 04:16:59.748567    4352 command_runner.go:130] > [INFO] 10.244.1.2:53626 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000732s
	I0501 04:16:59.748567    4352 command_runner.go:130] > [INFO] 10.244.0.4:57975 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000425703s
	I0501 04:16:59.748567    4352 command_runner.go:130] > [INFO] 10.244.0.4:51644 - 5 "PTR IN 1.208.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000121401s
	I0501 04:16:59.748567    4352 command_runner.go:130] > [INFO] 10.244.1.2:42930 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000246601s
	I0501 04:16:59.748671    4352 command_runner.go:130] > [INFO] 10.244.1.2:59495 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000199302s
	I0501 04:16:59.748696    4352 command_runner.go:130] > [INFO] 10.244.1.2:34672 - 5 "PTR IN 1.208.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000155401s
	I0501 04:16:59.748696    4352 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0501 04:16:59.748696    4352 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
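Each CoreDNS entry above logs the client IP:port, a query counter, the record type and name, protocol and message size flags, the response code, and the latency. The NXDOMAIN answers for names like kubernetes.default and kubernetes.default.default.svc.cluster.local are expected: they are the resolver's ndots search-path expansions tried before the fully-qualified kubernetes.default.svc.cluster.local resolves with NOERROR. A sketch that would generate the same kind of entry from inside the cluster, assuming the gcr.io/k8s-minikube/busybox test image and a hypothetical pod name dns-probe:

    # Hypothetical in-cluster lookup; produces CoreDNS log lines like those above
    kubectl --context multinode-289800 run dns-probe --rm -it --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -- \
      nslookup kubernetes.default.svc.cluster.local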
	I0501 04:16:59.750256    4352 logs.go:123] Gathering logs for kube-proxy [3efcc92f817e] ...
	I0501 04:16:59.750256    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3efcc92f817e"
	I0501 04:16:59.781847    4352 command_runner.go:130] ! I0501 04:15:45.132138       1 server_linux.go:69] "Using iptables proxy"
	I0501 04:16:59.782334    4352 command_runner.go:130] ! I0501 04:15:45.231202       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.28.209.199"]
	I0501 04:16:59.782334    4352 command_runner.go:130] ! I0501 04:15:45.502838       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0501 04:16:59.782334    4352 command_runner.go:130] ! I0501 04:15:45.506945       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0501 04:16:59.782334    4352 command_runner.go:130] ! I0501 04:15:45.506980       1 server_linux.go:165] "Using iptables Proxier"
	I0501 04:16:59.782462    4352 command_runner.go:130] ! I0501 04:15:45.527138       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0501 04:16:59.782506    4352 command_runner.go:130] ! I0501 04:15:45.530735       1 server.go:872] "Version info" version="v1.30.0"
	I0501 04:16:59.782506    4352 command_runner.go:130] ! I0501 04:15:45.530796       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:16:59.782506    4352 command_runner.go:130] ! I0501 04:15:45.533247       1 config.go:192] "Starting service config controller"
	I0501 04:16:59.782506    4352 command_runner.go:130] ! I0501 04:15:45.547850       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0501 04:16:59.782556    4352 command_runner.go:130] ! I0501 04:15:45.533551       1 config.go:101] "Starting endpoint slice config controller"
	I0501 04:16:59.782595    4352 command_runner.go:130] ! I0501 04:15:45.549105       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0501 04:16:59.782618    4352 command_runner.go:130] ! I0501 04:15:45.550003       1 config.go:319] "Starting node config controller"
	I0501 04:16:59.782618    4352 command_runner.go:130] ! I0501 04:15:45.550016       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0501 04:16:59.782618    4352 command_runner.go:130] ! I0501 04:15:45.650245       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0501 04:16:59.782618    4352 command_runner.go:130] ! I0501 04:15:45.650488       1 shared_informer.go:320] Caches are synced for node config
	I0501 04:16:59.782618    4352 command_runner.go:130] ! I0501 04:15:45.650691       1 shared_informer.go:320] Caches are synced for service config
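The kube-proxy tail above shows it detecting no IPv6 iptables support, settling on the single-stack IPv4 iptables proxier, and syncing its service, endpoint-slice, and node config caches. A sketch for confirming the mode on the node itself, assuming the standard KUBE-SERVICES nat chain that the iptables proxier installs:

    # Hypothetical on-node check of the iptables proxier's nat rules
    out/minikube-windows-amd64.exe -p multinode-289800 ssh -- \
      sudo iptables -t nat -L KUBE-SERVICES | head -n 20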
	I0501 04:16:59.784841    4352 logs.go:123] Gathering logs for kube-scheduler [eaf69fce5ee3] ...
	I0501 04:16:59.784841    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaf69fce5ee3"
	I0501 04:16:59.811849    4352 command_runner.go:130] ! I0501 04:15:39.300694       1 serving.go:380] Generated self-signed cert in-memory
	I0501 04:16:59.811849    4352 command_runner.go:130] ! W0501 04:15:42.419811       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0501 04:16:59.811849    4352 command_runner.go:130] ! W0501 04:15:42.419988       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0501 04:16:59.811849    4352 command_runner.go:130] ! W0501 04:15:42.420417       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0501 04:16:59.811849    4352 command_runner.go:130] ! W0501 04:15:42.420580       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0501 04:16:59.811849    4352 command_runner.go:130] ! I0501 04:15:42.513199       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0501 04:16:59.811849    4352 command_runner.go:130] ! I0501 04:15:42.513509       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:16:59.811849    4352 command_runner.go:130] ! I0501 04:15:42.517575       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0501 04:16:59.811849    4352 command_runner.go:130] ! I0501 04:15:42.517756       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0501 04:16:59.812842    4352 command_runner.go:130] ! I0501 04:15:42.519360       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0501 04:16:59.812842    4352 command_runner.go:130] ! I0501 04:15:42.519606       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0501 04:16:59.812842    4352 command_runner.go:130] ! I0501 04:15:42.619527       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0501 04:16:59.814835    4352 logs.go:123] Gathering logs for kube-controller-manager [66a1b89e6733] ...
	I0501 04:16:59.814835    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a1b89e6733"
	I0501 04:16:59.844871    4352 command_runner.go:130] ! I0501 04:15:39.740014       1 serving.go:380] Generated self-signed cert in-memory
	I0501 04:16:59.845256    4352 command_runner.go:130] ! I0501 04:15:40.254324       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0501 04:16:59.845256    4352 command_runner.go:130] ! I0501 04:15:40.254368       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:16:59.845256    4352 command_runner.go:130] ! I0501 04:15:40.263842       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0501 04:16:59.845256    4352 command_runner.go:130] ! I0501 04:15:40.264273       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0501 04:16:59.845256    4352 command_runner.go:130] ! I0501 04:15:40.265102       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0501 04:16:59.845256    4352 command_runner.go:130] ! I0501 04:15:40.265435       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0501 04:16:59.845256    4352 command_runner.go:130] ! I0501 04:15:44.420436       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0501 04:16:59.845407    4352 command_runner.go:130] ! I0501 04:15:44.421597       1 controllermanager.go:759] "Started controller" controller="serviceaccount-token-controller"
	I0501 04:16:59.845407    4352 command_runner.go:130] ! I0501 04:15:44.430683       1 controllermanager.go:759] "Started controller" controller="persistentvolume-protection-controller"
	I0501 04:16:59.845487    4352 command_runner.go:130] ! I0501 04:15:44.430949       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0501 04:16:59.845487    4352 command_runner.go:130] ! I0501 04:15:44.431056       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0501 04:16:59.845487    4352 command_runner.go:130] ! I0501 04:15:44.437281       1 controllermanager.go:759] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0501 04:16:59.845487    4352 command_runner.go:130] ! I0501 04:15:44.440408       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0501 04:16:59.845487    4352 command_runner.go:130] ! I0501 04:15:44.437711       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0501 04:16:59.845487    4352 command_runner.go:130] ! I0501 04:15:44.440933       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0501 04:16:59.845487    4352 command_runner.go:130] ! I0501 04:15:44.450877       1 controllermanager.go:759] "Started controller" controller="replicationcontroller-controller"
	I0501 04:16:59.845487    4352 command_runner.go:130] ! I0501 04:15:44.452935       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0501 04:16:59.845487    4352 command_runner.go:130] ! I0501 04:15:44.452958       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0501 04:16:59.845487    4352 command_runner.go:130] ! I0501 04:15:44.458231       1 controllermanager.go:759] "Started controller" controller="serviceaccount-controller"
	I0501 04:16:59.845487    4352 command_runner.go:130] ! I0501 04:15:44.458525       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0501 04:16:59.845487    4352 command_runner.go:130] ! I0501 04:15:44.458548       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0501 04:16:59.845487    4352 command_runner.go:130] ! I0501 04:15:44.467611       1 controllermanager.go:759] "Started controller" controller="token-cleaner-controller"
	I0501 04:16:59.845487    4352 command_runner.go:130] ! I0501 04:15:44.468036       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0501 04:16:59.845487    4352 command_runner.go:130] ! I0501 04:15:44.468093       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0501 04:16:59.845487    4352 command_runner.go:130] ! I0501 04:15:44.468107       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0501 04:16:59.845487    4352 command_runner.go:130] ! I0501 04:15:44.484825       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0501 04:16:59.845812    4352 command_runner.go:130] ! I0501 04:15:44.484856       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0501 04:16:59.845812    4352 command_runner.go:130] ! I0501 04:15:44.484892       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0501 04:16:59.845849    4352 command_runner.go:130] ! I0501 04:15:44.485128       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0501 04:16:59.845849    4352 command_runner.go:130] ! I0501 04:15:44.485186       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0501 04:16:59.845849    4352 command_runner.go:130] ! I0501 04:15:44.485221       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0501 04:16:59.845849    4352 command_runner.go:130] ! I0501 04:15:44.485229       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0501 04:16:59.845849    4352 command_runner.go:130] ! I0501 04:15:44.485246       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0501 04:16:59.845849    4352 command_runner.go:130] ! I0501 04:15:44.485322       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0501 04:16:59.845849    4352 command_runner.go:130] ! I0501 04:15:44.488601       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0501 04:16:59.846024    4352 command_runner.go:130] ! I0501 04:15:44.488943       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0501 04:16:59.846024    4352 command_runner.go:130] ! I0501 04:15:44.488958       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0501 04:16:59.846024    4352 command_runner.go:130] ! I0501 04:15:44.488985       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0501 04:16:59.846024    4352 command_runner.go:130] ! I0501 04:15:44.523143       1 shared_informer.go:320] Caches are synced for tokens
	I0501 04:16:59.846100    4352 command_runner.go:130] ! I0501 04:15:44.644894       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0501 04:16:59.846100    4352 command_runner.go:130] ! I0501 04:15:44.645016       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0501 04:16:59.846100    4352 command_runner.go:130] ! I0501 04:15:44.645088       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0501 04:16:59.846164    4352 command_runner.go:130] ! I0501 04:15:44.645112       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0501 04:16:59.846164    4352 command_runner.go:130] ! I0501 04:15:44.646888       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0501 04:16:59.846164    4352 command_runner.go:130] ! W0501 04:15:44.646984       1 shared_informer.go:597] resyncPeriod 15h44m19.234758052s is smaller than resyncCheckPeriod 17h55m23.133739358s and the informer has already started. Changing it to 17h55m23.133739358s
	I0501 04:16:59.846164    4352 command_runner.go:130] ! W0501 04:15:44.647035       1 shared_informer.go:597] resyncPeriod 17h52m42.538614251s is smaller than resyncCheckPeriod 17h55m23.133739358s and the informer has already started. Changing it to 17h55m23.133739358s
	I0501 04:16:59.846270    4352 command_runner.go:130] ! I0501 04:15:44.647224       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0501 04:16:59.846270    4352 command_runner.go:130] ! I0501 04:15:44.647325       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0501 04:16:59.846270    4352 command_runner.go:130] ! I0501 04:15:44.647389       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0501 04:16:59.846270    4352 command_runner.go:130] ! I0501 04:15:44.647418       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0501 04:16:59.846270    4352 command_runner.go:130] ! I0501 04:15:44.647559       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0501 04:16:59.846270    4352 command_runner.go:130] ! I0501 04:15:44.647580       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0501 04:16:59.846270    4352 command_runner.go:130] ! I0501 04:15:44.648269       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0501 04:16:59.846270    4352 command_runner.go:130] ! I0501 04:15:44.648364       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0501 04:16:59.846270    4352 command_runner.go:130] ! I0501 04:15:44.648387       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0501 04:16:59.846270    4352 command_runner.go:130] ! I0501 04:15:44.648418       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0501 04:16:59.846270    4352 command_runner.go:130] ! I0501 04:15:44.648519       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0501 04:16:59.846270    4352 command_runner.go:130] ! I0501 04:15:44.648561       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0501 04:16:59.846270    4352 command_runner.go:130] ! I0501 04:15:44.648582       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0501 04:16:59.846270    4352 command_runner.go:130] ! I0501 04:15:44.648601       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0501 04:16:59.846270    4352 command_runner.go:130] ! I0501 04:15:44.648633       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0501 04:16:59.846270    4352 command_runner.go:130] ! I0501 04:15:44.648662       1 controllermanager.go:759] "Started controller" controller="resourcequota-controller"
	I0501 04:16:59.846270    4352 command_runner.go:130] ! I0501 04:15:44.649971       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0501 04:16:59.846270    4352 command_runner.go:130] ! I0501 04:15:44.649999       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0501 04:16:59.846270    4352 command_runner.go:130] ! I0501 04:15:44.650094       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0501 04:16:59.846270    4352 command_runner.go:130] ! I0501 04:15:44.658545       1 controllermanager.go:759] "Started controller" controller="daemonset-controller"
	I0501 04:16:59.846270    4352 command_runner.go:130] ! I0501 04:15:44.664070       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0501 04:16:59.846270    4352 command_runner.go:130] ! I0501 04:15:44.664109       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0501 04:16:59.846270    4352 command_runner.go:130] ! I0501 04:15:44.672333       1 controllermanager.go:759] "Started controller" controller="replicaset-controller"
	I0501 04:16:59.846270    4352 command_runner.go:130] ! I0501 04:15:44.672648       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0501 04:16:59.846270    4352 command_runner.go:130] ! I0501 04:15:44.673224       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0501 04:16:59.846270    4352 command_runner.go:130] ! E0501 04:15:44.680086       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0501 04:16:59.846270    4352 command_runner.go:130] ! I0501 04:15:44.680207       1 controllermanager.go:737] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0501 04:16:59.846270    4352 command_runner.go:130] ! I0501 04:15:44.686271       1 controllermanager.go:759] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.687804       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.688087       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.691064       1 controllermanager.go:759] "Started controller" controller="ephemeral-volume-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.694139       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.694154       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.697309       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.697808       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.698725       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.709020       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.709557       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.718572       1 controllermanager.go:759] "Started controller" controller="bootstrap-signer-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.718866       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.731386       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.731502       1 controllermanager.go:759] "Started controller" controller="node-lifecycle-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.731520       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.731794       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.732008       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.732024       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.732060       1 controllermanager.go:737] "Warning: skipping controller" controller="node-route-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.739601       1 controllermanager.go:759] "Started controller" controller="clusterrole-aggregation-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.741937       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.742091       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.751335       1 controllermanager.go:759] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.758177       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.767021       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.776399       1 controllermanager.go:759] "Started controller" controller="ttl-after-finished-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.777830       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.780031       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.783346       1 controllermanager.go:759] "Started controller" controller="endpointslice-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.784386       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.784668       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.790586       1 controllermanager.go:759] "Started controller" controller="job-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.791028       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.791148       1 shared_informer.go:313] Waiting for caches to sync for job
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.795072       1 controllermanager.go:759] "Started controller" controller="deployment-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.795486       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.796321       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.806964       1 controllermanager.go:759] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.807399       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.808302       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.810677       1 controllermanager.go:759] "Started controller" controller="cronjob-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.811276       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.812128       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.814338       1 controllermanager.go:759] "Started controller" controller="persistentvolume-expander-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.814699       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.815465       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.818437       1 controllermanager.go:759] "Started controller" controller="taint-eviction-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.819004       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.818976       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.820305       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.820518       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.822359       1 controllermanager.go:759] "Started controller" controller="pod-garbage-collector-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.824878       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.825167       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.835687       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:44.835705       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:44.835739       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:44.836623       1 controllermanager.go:759] "Started controller" controller="garbage-collector-controller"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! E0501 04:15:44.845522       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:44.845590       1 controllermanager.go:737] "Warning: skipping controller" controller="service-lb-controller"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:44.975590       1 controllermanager.go:759] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:44.975737       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:45.026863       1 controllermanager.go:759] "Started controller" controller="endpointslice-mirroring-controller"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:45.026966       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:45.026980       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:45.188029       1 controllermanager.go:759] "Started controller" controller="namespace-controller"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:45.191154       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:45.191606       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:45.234916       1 controllermanager.go:759] "Started controller" controller="ttl-controller"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:45.235592       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:45.235855       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:45.275946       1 controllermanager.go:759] "Started controller" controller="persistentvolume-binder-controller"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:45.276219       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:45.277151       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:45.277668       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.347039       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.347226       1 controllermanager.go:759] "Started controller" controller="node-ipam-controller"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.347657       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.347697       1 shared_informer.go:313] Waiting for caches to sync for node
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.351170       1 controllermanager.go:759] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.351453       1 controllermanager.go:737] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.351701       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.352658       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.355868       1 controllermanager.go:759] "Started controller" controller="endpoints-controller"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.356195       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.356581       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.373530       1 controllermanager.go:759] "Started controller" controller="disruption-controller"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.375966       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.376087       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.376099       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.381581       1 controllermanager.go:759] "Started controller" controller="statefulset-controller"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.387752       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.398512       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.398855       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.433745       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.433841       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.434861       1 shared_informer.go:320] Caches are synced for PV protection
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.437855       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-289800\" does not exist"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.438225       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-289800-m02\" does not exist"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.438314       1 shared_informer.go:320] Caches are synced for TTL
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.438445       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-289800-m03\" does not exist"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.438531       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.441880       1 shared_informer.go:320] Caches are synced for crt configmap
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.442281       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.448289       1 shared_informer.go:320] Caches are synced for node
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.448378       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.448532       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.448564       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.448615       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.452662       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.453060       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.453136       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.459094       1 shared_informer.go:320] Caches are synced for service account
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.465378       1 shared_informer.go:320] Caches are synced for daemon sets
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.468998       1 shared_informer.go:320] Caches are synced for PVC protection
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.476103       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.479405       1 shared_informer.go:320] Caches are synced for persistent volume
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.480400       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.485347       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.485423       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.485459       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.488987       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.489270       1 shared_informer.go:320] Caches are synced for attach detach
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.492066       1 shared_informer.go:320] Caches are synced for namespace
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.492447       1 shared_informer.go:320] Caches are synced for job
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.494972       1 shared_informer.go:320] Caches are synced for ephemeral
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.497059       1 shared_informer.go:320] Caches are synced for deployment
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.499153       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.499594       1 shared_informer.go:320] Caches are synced for stateful set
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.509506       1 shared_informer.go:320] Caches are synced for HPA
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.513444       1 shared_informer.go:320] Caches are synced for cronjob
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.517356       1 shared_informer.go:320] Caches are synced for expand
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.519269       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.521379       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.527109       1 shared_informer.go:320] Caches are synced for GC
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.533712       1 shared_informer.go:320] Caches are synced for taint
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.534052       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.562220       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-289800"
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.562294       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-289800-m02"
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.562374       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-289800-m03"
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.562434       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.574228       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.576283       1 shared_informer.go:320] Caches are synced for disruption
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.610948       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.488314ms"
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.611568       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="71.799µs"
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.619708       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="45.171745ms"
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.620238       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="472.596µs"
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.628824       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.650837       1 shared_informer.go:320] Caches are synced for resource quota
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.657374       1 shared_informer.go:320] Caches are synced for endpoint
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.685503       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.700006       1 shared_informer.go:320] Caches are synced for resource quota
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:56.136638       1 shared_informer.go:320] Caches are synced for garbage collector
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:56.136685       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:56.152886       1 shared_informer.go:320] Caches are synced for garbage collector
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:16:16.638494       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:16:35.670965       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.004646ms"
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:16:35.674472       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.702µs"
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:16:49.079199       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="127.703µs"
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:16:49.148697       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="24.735082ms"
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:16:49.149307       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="110.503µs"
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:16:49.187683       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.244247ms"
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:16:49.188221       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.9µs"
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:16:49.221273       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="25.255693ms"
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:16:49.221694       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="88.902µs"
	I0501 04:16:59.865835    4352 logs.go:123] Gathering logs for kube-controller-manager [4b62556f40be] ...
	I0501 04:16:59.865835    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b62556f40be"
	I0501 04:16:59.904905    4352 command_runner.go:130] ! I0501 03:52:09.899238       1 serving.go:380] Generated self-signed cert in-memory
	I0501 04:16:59.904905    4352 command_runner.go:130] ! I0501 03:52:10.399398       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0501 04:16:59.905408    4352 command_runner.go:130] ! I0501 03:52:10.399463       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:16:59.905408    4352 command_runner.go:130] ! I0501 03:52:10.408364       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0501 04:16:59.905408    4352 command_runner.go:130] ! I0501 03:52:10.409326       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0501 04:16:59.905408    4352 command_runner.go:130] ! I0501 03:52:10.409600       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0501 04:16:59.905408    4352 command_runner.go:130] ! I0501 03:52:10.409803       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0501 04:16:59.905491    4352 command_runner.go:130] ! I0501 03:52:15.177592       1 controllermanager.go:759] "Started controller" controller="serviceaccount-token-controller"
	I0501 04:16:59.905491    4352 command_runner.go:130] ! I0501 03:52:15.177638       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0501 04:16:59.905491    4352 command_runner.go:130] ! I0501 03:52:15.223373       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0501 04:16:59.905491    4352 command_runner.go:130] ! I0501 03:52:15.223482       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0501 04:16:59.905491    4352 command_runner.go:130] ! I0501 03:52:15.224504       1 controllermanager.go:759] "Started controller" controller="endpointslice-controller"
	I0501 04:16:59.905491    4352 command_runner.go:130] ! I0501 03:52:15.255847       1 controllermanager.go:759] "Started controller" controller="endpointslice-mirroring-controller"
	I0501 04:16:59.905491    4352 command_runner.go:130] ! I0501 03:52:15.268264       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0501 04:16:59.905491    4352 command_runner.go:130] ! I0501 03:52:15.268388       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0501 04:16:59.905491    4352 command_runner.go:130] ! I0501 03:52:15.282022       1 shared_informer.go:320] Caches are synced for tokens
	I0501 04:16:59.905491    4352 command_runner.go:130] ! I0501 03:52:15.318646       1 controllermanager.go:759] "Started controller" controller="ttl-controller"
	I0501 04:16:59.905491    4352 command_runner.go:130] ! I0501 03:52:15.318861       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0501 04:16:59.905491    4352 command_runner.go:130] ! I0501 03:52:15.319086       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0501 04:16:59.905491    4352 command_runner.go:130] ! I0501 03:52:15.319104       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0501 04:16:59.905491    4352 command_runner.go:130] ! I0501 03:52:15.319092       1 controllermanager.go:737] "Warning: skipping controller" controller="node-route-controller"
	I0501 04:16:59.905491    4352 command_runner.go:130] ! I0501 03:52:15.340327       1 controllermanager.go:759] "Started controller" controller="ttl-after-finished-controller"
	I0501 04:16:59.905491    4352 command_runner.go:130] ! I0501 03:52:15.340404       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0501 04:16:59.905491    4352 command_runner.go:130] ! I0501 03:52:15.340939       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0501 04:16:59.905491    4352 command_runner.go:130] ! I0501 03:52:15.388809       1 controllermanager.go:759] "Started controller" controller="namespace-controller"
	I0501 04:16:59.905491    4352 command_runner.go:130] ! I0501 03:52:15.389274       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0501 04:16:59.905491    4352 command_runner.go:130] ! I0501 03:52:15.389544       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0501 04:16:59.905491    4352 command_runner.go:130] ! I0501 03:52:15.409254       1 controllermanager.go:759] "Started controller" controller="disruption-controller"
	I0501 04:16:59.905491    4352 command_runner.go:130] ! I0501 03:52:15.409799       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0501 04:16:59.905491    4352 command_runner.go:130] ! I0501 03:52:15.410052       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0501 04:16:59.905491    4352 command_runner.go:130] ! I0501 03:52:15.410231       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0501 04:16:59.906108    4352 command_runner.go:130] ! I0501 03:52:15.430420       1 controllermanager.go:759] "Started controller" controller="token-cleaner-controller"
	I0501 04:16:59.906164    4352 command_runner.go:130] ! I0501 03:52:15.432551       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0501 04:16:59.906164    4352 command_runner.go:130] ! I0501 03:52:15.432922       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0501 04:16:59.906164    4352 command_runner.go:130] ! I0501 03:52:15.433117       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0501 04:16:59.906224    4352 command_runner.go:130] ! E0501 03:52:15.460293       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0501 04:16:59.906294    4352 command_runner.go:130] ! I0501 03:52:15.460569       1 controllermanager.go:737] "Warning: skipping controller" controller="service-lb-controller"
	I0501 04:16:59.906294    4352 command_runner.go:130] ! I0501 03:52:15.483810       1 controllermanager.go:759] "Started controller" controller="persistentvolume-binder-controller"
	I0501 04:16:59.906294    4352 command_runner.go:130] ! I0501 03:52:15.484552       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0501 04:16:59.906294    4352 command_runner.go:130] ! I0501 03:52:15.487659       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0501 04:16:59.906366    4352 command_runner.go:130] ! I0501 03:52:15.507112       1 controllermanager.go:759] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0501 04:16:59.906366    4352 command_runner.go:130] ! I0501 03:52:15.507311       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0501 04:16:59.906366    4352 command_runner.go:130] ! I0501 03:52:15.507323       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0501 04:16:59.906366    4352 command_runner.go:130] ! I0501 03:52:15.547225       1 controllermanager.go:759] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0501 04:16:59.906366    4352 command_runner.go:130] ! I0501 03:52:15.547300       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0501 04:16:59.906366    4352 command_runner.go:130] ! I0501 03:52:15.547313       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0501 04:16:59.906366    4352 command_runner.go:130] ! I0501 03:52:15.547413       1 controllermanager.go:737] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0501 04:16:59.906366    4352 command_runner.go:130] ! I0501 03:52:15.652954       1 controllermanager.go:759] "Started controller" controller="pod-garbage-collector-controller"
	I0501 04:16:59.906366    4352 command_runner.go:130] ! I0501 03:52:15.653222       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0501 04:16:59.906366    4352 command_runner.go:130] ! I0501 03:52:15.653240       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0501 04:16:59.906366    4352 command_runner.go:130] ! I0501 03:52:15.940199       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0501 04:16:59.906366    4352 command_runner.go:130] ! I0501 03:52:15.940364       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0501 04:16:59.906366    4352 command_runner.go:130] ! I0501 03:52:15.940714       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0501 04:16:59.906366    4352 command_runner.go:130] ! I0501 03:52:15.940771       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0501 04:16:59.906366    4352 command_runner.go:130] ! I0501 03:52:15.940787       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0501 04:16:59.906366    4352 command_runner.go:130] ! I0501 03:52:15.941029       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0501 04:16:59.906366    4352 command_runner.go:130] ! I0501 03:52:15.941118       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0501 04:16:59.906366    4352 command_runner.go:130] ! I0501 03:52:15.941275       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0501 04:16:59.906366    4352 command_runner.go:130] ! I0501 03:52:15.941300       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0501 04:16:59.906366    4352 command_runner.go:130] ! I0501 03:52:15.941320       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0501 04:16:59.906897    4352 command_runner.go:130] ! I0501 03:52:15.941344       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0501 04:16:59.906897    4352 command_runner.go:130] ! I0501 03:52:15.941368       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0501 04:16:59.906951    4352 command_runner.go:130] ! I0501 03:52:15.941386       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0501 04:16:59.906951    4352 command_runner.go:130] ! I0501 03:52:15.941421       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0501 04:16:59.907011    4352 command_runner.go:130] ! I0501 03:52:15.941561       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0501 04:16:59.907011    4352 command_runner.go:130] ! I0501 03:52:15.941606       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0501 04:16:59.907011    4352 command_runner.go:130] ! I0501 03:52:15.941627       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0501 04:16:59.907079    4352 command_runner.go:130] ! I0501 03:52:15.941813       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0501 04:16:59.907079    4352 command_runner.go:130] ! I0501 03:52:15.942150       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0501 04:16:59.907137    4352 command_runner.go:130] ! I0501 03:52:15.942270       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0501 04:16:59.907137    4352 command_runner.go:130] ! I0501 03:52:15.942319       1 controllermanager.go:759] "Started controller" controller="resourcequota-controller"
	I0501 04:16:59.907137    4352 command_runner.go:130] ! I0501 03:52:15.942400       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0501 04:16:59.907137    4352 command_runner.go:130] ! I0501 03:52:15.942767       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0501 04:16:59.907204    4352 command_runner.go:130] ! I0501 03:52:15.942791       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0501 04:16:59.907204    4352 command_runner.go:130] ! I0501 03:52:16.183841       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0501 04:16:59.907204    4352 command_runner.go:130] ! I0501 03:52:16.184178       1 controllermanager.go:759] "Started controller" controller="garbage-collector-controller"
	I0501 04:16:59.907204    4352 command_runner.go:130] ! I0501 03:52:16.187151       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0501 04:16:59.907204    4352 command_runner.go:130] ! I0501 03:52:16.187185       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0501 04:16:59.907276    4352 command_runner.go:130] ! I0501 03:52:16.436175       1 controllermanager.go:759] "Started controller" controller="replicaset-controller"
	I0501 04:16:59.907276    4352 command_runner.go:130] ! I0501 03:52:16.436331       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0501 04:16:59.907357    4352 command_runner.go:130] ! I0501 03:52:16.436346       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0501 04:16:59.907357    4352 command_runner.go:130] ! I0501 03:52:16.586198       1 controllermanager.go:759] "Started controller" controller="statefulset-controller"
	I0501 04:16:59.907357    4352 command_runner.go:130] ! I0501 03:52:16.586602       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0501 04:16:59.907357    4352 command_runner.go:130] ! I0501 03:52:16.586642       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0501 04:16:59.907357    4352 command_runner.go:130] ! I0501 03:52:16.736534       1 controllermanager.go:759] "Started controller" controller="clusterrole-aggregation-controller"
	I0501 04:16:59.907434    4352 command_runner.go:130] ! I0501 03:52:16.736573       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0501 04:16:59.907434    4352 command_runner.go:130] ! I0501 03:52:16.736609       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0501 04:16:59.907504    4352 command_runner.go:130] ! I0501 03:52:16.736694       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0501 04:16:59.907504    4352 command_runner.go:130] ! I0501 03:52:16.736706       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0501 04:16:59.907504    4352 command_runner.go:130] ! I0501 03:52:16.891482       1 controllermanager.go:759] "Started controller" controller="serviceaccount-controller"
	I0501 04:16:59.907575    4352 command_runner.go:130] ! I0501 03:52:16.891648       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0501 04:16:59.907575    4352 command_runner.go:130] ! I0501 03:52:16.891663       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0501 04:16:59.907575    4352 command_runner.go:130] ! I0501 03:52:17.047956       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0501 04:16:59.907643    4352 command_runner.go:130] ! I0501 03:52:17.050852       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0501 04:16:59.907643    4352 command_runner.go:130] ! I0501 03:52:17.050877       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0501 04:16:59.907643    4352 command_runner.go:130] ! I0501 03:52:17.050942       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0501 04:16:59.907643    4352 command_runner.go:130] ! I0501 03:52:17.050952       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0501 04:16:59.907717    4352 command_runner.go:130] ! I0501 03:52:17.051046       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0501 04:16:59.907717    4352 command_runner.go:130] ! I0501 03:52:17.051073       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0501 04:16:59.907717    4352 command_runner.go:130] ! I0501 03:52:17.051107       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0501 04:16:59.907781    4352 command_runner.go:130] ! I0501 03:52:17.051130       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0501 04:16:59.907781    4352 command_runner.go:130] ! I0501 03:52:17.051145       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0501 04:16:59.907781    4352 command_runner.go:130] ! I0501 03:52:17.051309       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0501 04:16:59.907840    4352 command_runner.go:130] ! I0501 03:52:17.051548       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0501 04:16:59.907840    4352 command_runner.go:130] ! I0501 03:52:17.051654       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0501 04:16:59.907840    4352 command_runner.go:130] ! I0501 03:52:17.186932       1 controllermanager.go:759] "Started controller" controller="bootstrap-signer-controller"
	I0501 04:16:59.907840    4352 command_runner.go:130] ! I0501 03:52:17.187092       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0501 04:16:59.908020    4352 command_runner.go:130] ! I0501 03:52:27.350786       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0501 04:16:59.908085    4352 command_runner.go:130] ! I0501 03:52:27.351166       1 controllermanager.go:759] "Started controller" controller="node-ipam-controller"
	I0501 04:16:59.908142    4352 command_runner.go:130] ! I0501 03:52:27.352026       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0501 04:16:59.908142    4352 command_runner.go:130] ! I0501 03:52:27.353715       1 shared_informer.go:313] Waiting for caches to sync for node
	I0501 04:16:59.908142    4352 command_runner.go:130] ! I0501 03:52:27.368884       1 controllermanager.go:759] "Started controller" controller="ephemeral-volume-controller"
	I0501 04:16:59.908194    4352 command_runner.go:130] ! I0501 03:52:27.369241       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0501 04:16:59.908194    4352 command_runner.go:130] ! I0501 03:52:27.369602       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0501 04:16:59.908194    4352 command_runner.go:130] ! I0501 03:52:27.424182       1 controllermanager.go:759] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0501 04:16:59.908244    4352 command_runner.go:130] ! I0501 03:52:27.424472       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0501 04:16:59.908244    4352 command_runner.go:130] ! I0501 03:52:27.436663       1 controllermanager.go:759] "Started controller" controller="replicationcontroller-controller"
	I0501 04:16:59.908244    4352 command_runner.go:130] ! I0501 03:52:27.437080       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0501 04:16:59.908244    4352 command_runner.go:130] ! I0501 03:52:27.437177       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0501 04:16:59.908244    4352 command_runner.go:130] ! I0501 03:52:27.448635       1 controllermanager.go:759] "Started controller" controller="job-controller"
	I0501 04:16:59.908244    4352 command_runner.go:130] ! I0501 03:52:27.449170       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0501 04:16:59.908325    4352 command_runner.go:130] ! I0501 03:52:27.449409       1 shared_informer.go:313] Waiting for caches to sync for job
	I0501 04:16:59.908325    4352 command_runner.go:130] ! I0501 03:52:27.475565       1 controllermanager.go:759] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0501 04:16:59.908357    4352 command_runner.go:130] ! I0501 03:52:27.476051       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0501 04:16:59.908357    4352 command_runner.go:130] ! I0501 03:52:27.476166       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0501 04:16:59.908357    4352 command_runner.go:130] ! I0501 03:52:27.479486       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0501 04:16:59.908433    4352 command_runner.go:130] ! I0501 03:52:27.479596       1 controllermanager.go:759] "Started controller" controller="node-lifecycle-controller"
	I0501 04:16:59.908464    4352 command_runner.go:130] ! I0501 03:52:27.479975       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0501 04:16:59.908464    4352 command_runner.go:130] ! I0501 03:52:27.480750       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0501 04:16:59.908464    4352 command_runner.go:130] ! I0501 03:52:27.480823       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0501 04:16:59.908464    4352 command_runner.go:130] ! E0501 03:52:27.482546       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0501 04:16:59.908534    4352 command_runner.go:130] ! I0501 03:52:27.483210       1 controllermanager.go:737] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0501 04:16:59.908534    4352 command_runner.go:130] ! I0501 03:52:27.495640       1 controllermanager.go:759] "Started controller" controller="persistentvolume-expander-controller"
	I0501 04:16:59.908534    4352 command_runner.go:130] ! I0501 03:52:27.495973       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0501 04:16:59.908534    4352 command_runner.go:130] ! I0501 03:52:27.496212       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0501 04:16:59.908534    4352 command_runner.go:130] ! I0501 03:52:27.512223       1 controllermanager.go:759] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0501 04:16:59.908625    4352 command_runner.go:130] ! I0501 03:52:27.512895       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0501 04:16:59.908625    4352 command_runner.go:130] ! I0501 03:52:27.513075       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0501 04:16:59.908666    4352 command_runner.go:130] ! I0501 03:52:27.514982       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0501 04:16:59.908666    4352 command_runner.go:130] ! I0501 03:52:27.515311       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0501 04:16:59.908666    4352 command_runner.go:130] ! I0501 03:52:27.515499       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0501 04:16:59.908739    4352 command_runner.go:130] ! I0501 03:52:27.526940       1 controllermanager.go:759] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0501 04:16:59.908739    4352 command_runner.go:130] ! I0501 03:52:27.527318       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0501 04:16:59.908770    4352 command_runner.go:130] ! I0501 03:52:27.527351       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0501 04:16:59.908770    4352 command_runner.go:130] ! I0501 03:52:27.647646       1 controllermanager.go:759] "Started controller" controller="cronjob-controller"
	I0501 04:16:59.908770    4352 command_runner.go:130] ! I0501 03:52:27.647752       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0501 04:16:59.908838    4352 command_runner.go:130] ! I0501 03:52:27.647825       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0501 04:16:59.908838    4352 command_runner.go:130] ! I0501 03:52:27.647836       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0501 04:16:59.908876    4352 command_runner.go:130] ! I0501 03:52:27.692531       1 controllermanager.go:759] "Started controller" controller="taint-eviction-controller"
	I0501 04:16:59.908876    4352 command_runner.go:130] ! I0501 03:52:27.692762       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0501 04:16:59.908957    4352 command_runner.go:130] ! I0501 03:52:27.693221       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0501 04:16:59.908982    4352 command_runner.go:130] ! I0501 03:52:27.693310       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0501 04:16:59.908982    4352 command_runner.go:130] ! I0501 03:52:27.846904       1 controllermanager.go:759] "Started controller" controller="endpoints-controller"
	I0501 04:16:59.909026    4352 command_runner.go:130] ! I0501 03:52:27.847065       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0501 04:16:59.909026    4352 command_runner.go:130] ! I0501 03:52:27.847083       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0501 04:16:59.909065    4352 command_runner.go:130] ! I0501 03:52:27.996304       1 controllermanager.go:759] "Started controller" controller="daemonset-controller"
	I0501 04:16:59.909065    4352 command_runner.go:130] ! I0501 03:52:27.996661       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0501 04:16:59.909065    4352 command_runner.go:130] ! I0501 03:52:27.996720       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0501 04:16:59.909065    4352 command_runner.go:130] ! I0501 03:52:28.149439       1 controllermanager.go:759] "Started controller" controller="deployment-controller"
	I0501 04:16:59.909065    4352 command_runner.go:130] ! I0501 03:52:28.149690       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0501 04:16:59.909152    4352 command_runner.go:130] ! I0501 03:52:28.149796       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0501 04:16:59.909183    4352 command_runner.go:130] ! I0501 03:52:28.194448       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0501 04:16:59.909183    4352 command_runner.go:130] ! I0501 03:52:28.194582       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0501 04:16:59.909183    4352 command_runner.go:130] ! I0501 03:52:28.346263       1 controllermanager.go:759] "Started controller" controller="persistentvolume-protection-controller"
	I0501 04:16:59.909183    4352 command_runner.go:130] ! I0501 03:52:28.351074       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0501 04:16:59.909262    4352 command_runner.go:130] ! I0501 03:52:28.351267       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0501 04:16:59.909262    4352 command_runner.go:130] ! I0501 03:52:28.389327       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0501 04:16:59.909262    4352 command_runner.go:130] ! I0501 03:52:28.399508       1 shared_informer.go:320] Caches are synced for expand
	I0501 04:16:59.909301    4352 command_runner.go:130] ! I0501 03:52:28.401911       1 shared_informer.go:320] Caches are synced for namespace
	I0501 04:16:59.909301    4352 command_runner.go:130] ! I0501 03:52:28.402772       1 shared_informer.go:320] Caches are synced for service account
	I0501 04:16:59.909301    4352 command_runner.go:130] ! I0501 03:52:28.414043       1 shared_informer.go:320] Caches are synced for crt configmap
	I0501 04:16:59.909351    4352 command_runner.go:130] ! I0501 03:52:28.415874       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0501 04:16:59.909391    4352 command_runner.go:130] ! I0501 03:52:28.427291       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0501 04:16:59.909391    4352 command_runner.go:130] ! I0501 03:52:28.436570       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0501 04:16:59.909415    4352 command_runner.go:130] ! I0501 03:52:28.437221       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0501 04:16:59.909415    4352 command_runner.go:130] ! I0501 03:52:28.437315       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0501 04:16:59.909415    4352 command_runner.go:130] ! I0501 03:52:28.440984       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0501 04:16:59.909415    4352 command_runner.go:130] ! I0501 03:52:28.447483       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0501 04:16:59.909486    4352 command_runner.go:130] ! I0501 03:52:28.447500       1 shared_informer.go:320] Caches are synced for endpoint
	I0501 04:16:59.909486    4352 command_runner.go:130] ! I0501 03:52:28.448218       1 shared_informer.go:320] Caches are synced for cronjob
	I0501 04:16:59.909523    4352 command_runner.go:130] ! I0501 03:52:28.451115       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0501 04:16:59.909523    4352 command_runner.go:130] ! I0501 03:52:28.451167       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0501 04:16:59.909562    4352 command_runner.go:130] ! I0501 03:52:28.451224       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0501 04:16:59.909562    4352 command_runner.go:130] ! I0501 03:52:28.451346       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0501 04:16:59.909562    4352 command_runner.go:130] ! I0501 03:52:28.451726       1 shared_informer.go:320] Caches are synced for deployment
	I0501 04:16:59.909562    4352 command_runner.go:130] ! I0501 03:52:28.451933       1 shared_informer.go:320] Caches are synced for job
	I0501 04:16:59.909562    4352 command_runner.go:130] ! I0501 03:52:28.451734       1 shared_informer.go:320] Caches are synced for PV protection
	I0501 04:16:59.909634    4352 command_runner.go:130] ! I0501 03:52:28.470928       1 shared_informer.go:320] Caches are synced for ephemeral
	I0501 04:16:59.909634    4352 command_runner.go:130] ! I0501 03:52:28.476835       1 shared_informer.go:320] Caches are synced for HPA
	I0501 04:16:59.909674    4352 command_runner.go:130] ! I0501 03:52:28.486851       1 shared_informer.go:320] Caches are synced for stateful set
	I0501 04:16:59.909674    4352 command_runner.go:130] ! I0501 03:52:28.487294       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0501 04:16:59.909674    4352 command_runner.go:130] ! I0501 03:52:28.507418       1 shared_informer.go:320] Caches are synced for PVC protection
	I0501 04:16:59.909719    4352 command_runner.go:130] ! I0501 03:52:28.510921       1 shared_informer.go:320] Caches are synced for disruption
	I0501 04:16:59.909719    4352 command_runner.go:130] ! I0501 03:52:28.537591       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0501 04:16:59.909719    4352 command_runner.go:130] ! I0501 03:52:28.575135       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0501 04:16:59.909719    4352 command_runner.go:130] ! I0501 03:52:28.595083       1 shared_informer.go:320] Caches are synced for resource quota
	I0501 04:16:59.909788    4352 command_runner.go:130] ! I0501 03:52:28.609954       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-289800\" does not exist"
	I0501 04:16:59.909825    4352 command_runner.go:130] ! I0501 03:52:28.621070       1 shared_informer.go:320] Caches are synced for TTL
	I0501 04:16:59.909825    4352 command_runner.go:130] ! I0501 03:52:28.625042       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0501 04:16:59.909863    4352 command_runner.go:130] ! I0501 03:52:28.628085       1 shared_informer.go:320] Caches are synced for attach detach
	I0501 04:16:59.909863    4352 command_runner.go:130] ! I0501 03:52:28.643871       1 shared_informer.go:320] Caches are synced for resource quota
	I0501 04:16:59.909863    4352 command_runner.go:130] ! I0501 03:52:28.653497       1 shared_informer.go:320] Caches are synced for GC
	I0501 04:16:59.909863    4352 command_runner.go:130] ! I0501 03:52:28.654871       1 shared_informer.go:320] Caches are synced for node
	I0501 04:16:59.909863    4352 command_runner.go:130] ! I0501 03:52:28.654996       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0501 04:16:59.909951    4352 command_runner.go:130] ! I0501 03:52:28.655710       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0501 04:16:59.909951    4352 command_runner.go:130] ! I0501 03:52:28.655972       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0501 04:16:59.909951    4352 command_runner.go:130] ! I0501 03:52:28.656192       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0501 04:16:59.909951    4352 command_runner.go:130] ! I0501 03:52:28.675109       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-289800" podCIDRs=["10.244.0.0/24"]
	I0501 04:16:59.909951    4352 command_runner.go:130] ! I0501 03:52:28.682120       1 shared_informer.go:320] Caches are synced for taint
	I0501 04:16:59.910028    4352 command_runner.go:130] ! I0501 03:52:28.682644       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0501 04:16:59.910028    4352 command_runner.go:130] ! I0501 03:52:28.682782       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-289800"
	I0501 04:16:59.910028    4352 command_runner.go:130] ! I0501 03:52:28.682855       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0501 04:16:59.910104    4352 command_runner.go:130] ! I0501 03:52:28.688787       1 shared_informer.go:320] Caches are synced for persistent volume
	I0501 04:16:59.910104    4352 command_runner.go:130] ! I0501 03:52:28.693874       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0501 04:16:59.910104    4352 command_runner.go:130] ! I0501 03:52:28.697526       1 shared_informer.go:320] Caches are synced for daemon sets
	I0501 04:16:59.910104    4352 command_runner.go:130] ! I0501 03:52:29.088696       1 shared_informer.go:320] Caches are synced for garbage collector
	I0501 04:16:59.910104    4352 command_runner.go:130] ! I0501 03:52:29.088746       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0501 04:16:59.910104    4352 command_runner.go:130] ! I0501 03:52:29.139257       1 shared_informer.go:320] Caches are synced for garbage collector
	I0501 04:16:59.910104    4352 command_runner.go:130] ! I0501 03:52:29.739066       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="528.452632ms"
	I0501 04:16:59.910104    4352 command_runner.go:130] ! I0501 03:52:29.796611       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="57.235573ms"
	I0501 04:16:59.910104    4352 command_runner.go:130] ! I0501 03:52:29.797135       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="429.196µs"
	I0501 04:16:59.910104    4352 command_runner.go:130] ! I0501 03:52:29.797745       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="61.4µs"
	I0501 04:16:59.910104    4352 command_runner.go:130] ! I0501 03:52:39.341653       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="93.1µs"
	I0501 04:16:59.910104    4352 command_runner.go:130] ! I0501 03:52:39.358462       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="77.3µs"
	I0501 04:16:59.910104    4352 command_runner.go:130] ! I0501 03:52:39.377150       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="79.9µs"
	I0501 04:16:59.910104    4352 command_runner.go:130] ! I0501 03:52:39.403208       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="64.2µs"
	I0501 04:16:59.910104    4352 command_runner.go:130] ! I0501 03:52:41.593793       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="63.7µs"
	I0501 04:16:59.910104    4352 command_runner.go:130] ! I0501 03:52:41.686793       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="54.969221ms"
	I0501 04:16:59.910104    4352 command_runner.go:130] ! I0501 03:52:41.713891       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="26.932914ms"
	I0501 04:16:59.910104    4352 command_runner.go:130] ! I0501 03:52:41.714840       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="58.4µs"
	I0501 04:16:59.910104    4352 command_runner.go:130] ! I0501 03:52:43.686562       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0501 04:16:59.910104    4352 command_runner.go:130] ! I0501 03:55:27.159233       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-289800-m02\" does not exist"
	I0501 04:16:59.910104    4352 command_runner.go:130] ! I0501 03:55:27.216693       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-289800-m02" podCIDRs=["10.244.1.0/24"]
	I0501 04:16:59.910104    4352 command_runner.go:130] ! I0501 03:55:28.718620       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-289800-m02"
	I0501 04:16:59.910104    4352 command_runner.go:130] ! I0501 03:55:50.611680       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:59.910104    4352 command_runner.go:130] ! I0501 03:56:17.356814       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="84.46504ms"
	I0501 04:16:59.910104    4352 command_runner.go:130] ! I0501 03:56:17.371366       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.143719ms"
	I0501 04:16:59.910646    4352 command_runner.go:130] ! I0501 03:56:17.372124       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="142.3µs"
	I0501 04:16:59.910646    4352 command_runner.go:130] ! I0501 03:56:17.379164       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.7µs"
	I0501 04:16:59.910646    4352 command_runner.go:130] ! I0501 03:56:19.725403       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.097702ms"
	I0501 04:16:59.910646    4352 command_runner.go:130] ! I0501 03:56:19.728196       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="2.611719ms"
	I0501 04:16:59.910646    4352 command_runner.go:130] ! I0501 03:56:19.839218       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.233167ms"
	I0501 04:16:59.910646    4352 command_runner.go:130] ! I0501 03:56:19.839355       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.1µs"
	I0501 04:16:59.910646    4352 command_runner.go:130] ! I0501 04:00:13.644614       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-289800-m03\" does not exist"
	I0501 04:16:59.910646    4352 command_runner.go:130] ! I0501 04:00:13.644755       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:59.910786    4352 command_runner.go:130] ! I0501 04:00:13.661934       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-289800-m03" podCIDRs=["10.244.2.0/24"]
	I0501 04:16:59.910786    4352 command_runner.go:130] ! I0501 04:00:13.802230       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-289800-m03"
	I0501 04:16:59.910841    4352 command_runner.go:130] ! I0501 04:00:36.640421       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:59.910841    4352 command_runner.go:130] ! I0501 04:08:13.948279       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:59.910841    4352 command_runner.go:130] ! I0501 04:10:57.898286       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:59.910952    4352 command_runner.go:130] ! I0501 04:11:04.117706       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:59.910952    4352 command_runner.go:130] ! I0501 04:11:04.120427       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-289800-m03\" does not exist"
	I0501 04:16:59.911015    4352 command_runner.go:130] ! I0501 04:11:04.128942       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-289800-m03" podCIDRs=["10.244.3.0/24"]
	I0501 04:16:59.911015    4352 command_runner.go:130] ! I0501 04:11:11.358226       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:59.911054    4352 command_runner.go:130] ! I0501 04:12:49.097072       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
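The kube-controller-manager lines above follow client-go's standard shared-informer startup sequence: each controller logs "Waiting for caches to sync" and later "Caches are synced" once its watch cache is primed, and only then starts its workers. A minimal sketch of that same pattern (the pod informer and kubeconfig path here are illustrative, not taken from the test):

    package main

    import (
    	"fmt"
    	"time"

    	"k8s.io/client-go/informers"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/cache"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Load ~/.kube/config; an in-cluster controller would use rest.InClusterConfig instead.
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	factory := informers.NewSharedInformerFactory(client, 10*time.Minute)
    	podInformer := factory.Core().V1().Pods().Informer()

    	stop := make(chan struct{})
    	defer close(stop)
    	factory.Start(stop)

    	// Mirrors the "Waiting for caches to sync" / "Caches are synced" lines above.
    	if !cache.WaitForCacheSync(stop, podInformer.HasSynced) {
    		panic("timed out waiting for caches to sync")
    	}
    	fmt.Println("caches are synced; controller workers can start")
    }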
	I0501 04:16:59.930817    4352 logs.go:123] Gathering logs for kindnet [b7cae3f6b88b] ...
	I0501 04:16:59.930817    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7cae3f6b88b"
	I0501 04:16:59.961646    4352 command_runner.go:130] ! I0501 04:15:45.341459       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0501 04:16:59.961646    4352 command_runner.go:130] ! I0501 04:15:45.342196       1 main.go:107] hostIP = 172.28.209.199
	I0501 04:16:59.962058    4352 command_runner.go:130] ! podIP = 172.28.209.199
	I0501 04:16:59.962058    4352 command_runner.go:130] ! I0501 04:15:45.343348       1 main.go:116] setting mtu 1500 for CNI 
	I0501 04:16:59.962058    4352 command_runner.go:130] ! I0501 04:15:45.343391       1 main.go:146] kindnetd IP family: "ipv4"
	I0501 04:16:59.962058    4352 command_runner.go:130] ! I0501 04:15:45.343412       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0501 04:16:59.962115    4352 command_runner.go:130] ! I0501 04:16:15.765193       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0501 04:16:59.962115    4352 command_runner.go:130] ! I0501 04:16:15.817499       1 main.go:223] Handling node with IPs: map[172.28.209.199:{}]
	I0501 04:16:59.962115    4352 command_runner.go:130] ! I0501 04:16:15.817549       1 main.go:227] handling current node
	I0501 04:16:59.962115    4352 command_runner.go:130] ! I0501 04:16:15.818026       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:59.962115    4352 command_runner.go:130] ! I0501 04:16:15.818042       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:59.962226    4352 command_runner.go:130] ! I0501 04:16:15.818289       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.28.219.162 Flags: [] Table: 0} 
	I0501 04:16:59.962226    4352 command_runner.go:130] ! I0501 04:16:15.818416       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:59.962270    4352 command_runner.go:130] ! I0501 04:16:15.818477       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:59.962270    4352 command_runner.go:130] ! I0501 04:16:15.818548       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.28.223.145 Flags: [] Table: 0} 
	I0501 04:16:59.962270    4352 command_runner.go:130] ! I0501 04:16:25.834949       1 main.go:223] Handling node with IPs: map[172.28.209.199:{}]
	I0501 04:16:59.962325    4352 command_runner.go:130] ! I0501 04:16:25.834995       1 main.go:227] handling current node
	I0501 04:16:59.962325    4352 command_runner.go:130] ! I0501 04:16:25.835008       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:59.962366    4352 command_runner.go:130] ! I0501 04:16:25.835016       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:59.962622    4352 command_runner.go:130] ! I0501 04:16:25.835192       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:59.962675    4352 command_runner.go:130] ! I0501 04:16:25.835220       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:59.962718    4352 command_runner.go:130] ! I0501 04:16:35.845752       1 main.go:223] Handling node with IPs: map[172.28.209.199:{}]
	I0501 04:16:59.962718    4352 command_runner.go:130] ! I0501 04:16:35.845835       1 main.go:227] handling current node
	I0501 04:16:59.962718    4352 command_runner.go:130] ! I0501 04:16:35.845848       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:59.962775    4352 command_runner.go:130] ! I0501 04:16:35.845856       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:59.962775    4352 command_runner.go:130] ! I0501 04:16:35.846322       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:59.962775    4352 command_runner.go:130] ! I0501 04:16:35.846423       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:59.962827    4352 command_runner.go:130] ! I0501 04:16:45.855212       1 main.go:223] Handling node with IPs: map[172.28.209.199:{}]
	I0501 04:16:59.962827    4352 command_runner.go:130] ! I0501 04:16:45.855323       1 main.go:227] handling current node
	I0501 04:16:59.962827    4352 command_runner.go:130] ! I0501 04:16:45.855339       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:59.962827    4352 command_runner.go:130] ! I0501 04:16:45.855347       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:59.962827    4352 command_runner.go:130] ! I0501 04:16:45.856266       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:59.962889    4352 command_runner.go:130] ! I0501 04:16:45.856305       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:59.962889    4352 command_runner.go:130] ! I0501 04:16:55.872191       1 main.go:223] Handling node with IPs: map[172.28.209.199:{}]
	I0501 04:16:59.962932    4352 command_runner.go:130] ! I0501 04:16:55.872239       1 main.go:227] handling current node
	I0501 04:16:59.962932    4352 command_runner.go:130] ! I0501 04:16:55.872253       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:59.962932    4352 command_runner.go:130] ! I0501 04:16:55.872260       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:59.963000    4352 command_runner.go:130] ! I0501 04:16:55.872517       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:59.963000    4352 command_runner.go:130] ! I0501 04:16:55.872553       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
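The kindnet lines above show its reconcile loop: every ten seconds it lists nodes, handles the current node, and for each remote node programs a host route that sends the node's pod CIDR via the node IP (the "Adding route ... Dst: 10.244.1.0/24 ... Gw: 172.28.219.162" entries). A minimal sketch of that route programming, assuming the widely used github.com/vishvananda/netlink package (the helper name and the RouteReplace choice are illustrative):

    package main

    import (
    	"log"
    	"net"

    	"github.com/vishvananda/netlink"
    )

    // addPodCIDRRoute programs the same kind of route kindnet logs above:
    // traffic for a remote node's pod CIDR is sent via that node's IP.
    // Linux-only; requires CAP_NET_ADMIN.
    func addPodCIDRRoute(podCIDR, nodeIP string) error {
    	_, dst, err := net.ParseCIDR(podCIDR)
    	if err != nil {
    		return err
    	}
    	route := &netlink.Route{Dst: dst, Gw: net.ParseIP(nodeIP)}
    	// RouteReplace is idempotent, which suits a periodic reconcile loop.
    	return netlink.RouteReplace(route)
    }

    func main() {
    	// Values copied from the log: multinode-289800-m02's pod CIDR and node IP.
    	if err := addPodCIDRRoute("10.244.1.0/24", "172.28.219.162"); err != nil {
    		log.Fatal(err)
    	}
    }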
	I0501 04:16:59.965772    4352 logs.go:123] Gathering logs for Docker ...
	I0501 04:16:59.965772    4352 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0501 04:17:00.007260    4352 command_runner.go:130] > May 01 04:14:08 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0501 04:17:00.007260    4352 command_runner.go:130] > May 01 04:14:08 minikube cri-dockerd[222]: time="2024-05-01T04:14:08Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0501 04:17:00.007260    4352 command_runner.go:130] > May 01 04:14:08 minikube cri-dockerd[222]: time="2024-05-01T04:14:08Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0501 04:17:00.007260    4352 command_runner.go:130] > May 01 04:14:08 minikube cri-dockerd[222]: time="2024-05-01T04:14:08Z" level=info msg="Start docker client with request timeout 0s"
	I0501 04:17:00.007385    4352 command_runner.go:130] > May 01 04:14:08 minikube cri-dockerd[222]: time="2024-05-01T04:14:08Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0501 04:17:00.007385    4352 command_runner.go:130] > May 01 04:14:09 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0501 04:17:00.007385    4352 command_runner.go:130] > May 01 04:14:09 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0501 04:17:00.007385    4352 command_runner.go:130] > May 01 04:14:09 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0501 04:17:00.007385    4352 command_runner.go:130] > May 01 04:14:11 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0501 04:17:00.007385    4352 command_runner.go:130] > May 01 04:14:11 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0501 04:17:00.007481    4352 command_runner.go:130] > May 01 04:14:11 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0501 04:17:00.007481    4352 command_runner.go:130] > May 01 04:14:11 minikube cri-dockerd[414]: time="2024-05-01T04:14:11Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0501 04:17:00.007481    4352 command_runner.go:130] > May 01 04:14:11 minikube cri-dockerd[414]: time="2024-05-01T04:14:11Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0501 04:17:00.007481    4352 command_runner.go:130] > May 01 04:14:11 minikube cri-dockerd[414]: time="2024-05-01T04:14:11Z" level=info msg="Start docker client with request timeout 0s"
	I0501 04:17:00.007546    4352 command_runner.go:130] > May 01 04:14:11 minikube cri-dockerd[414]: time="2024-05-01T04:14:11Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0501 04:17:00.007546    4352 command_runner.go:130] > May 01 04:14:11 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0501 04:17:00.007546    4352 command_runner.go:130] > May 01 04:14:11 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0501 04:17:00.007546    4352 command_runner.go:130] > May 01 04:14:11 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0501 04:17:00.007546    4352 command_runner.go:130] > May 01 04:14:13 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0501 04:17:00.007637    4352 command_runner.go:130] > May 01 04:14:13 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0501 04:17:00.007667    4352 command_runner.go:130] > May 01 04:14:13 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0501 04:17:00.007667    4352 command_runner.go:130] > May 01 04:14:13 minikube cri-dockerd[423]: time="2024-05-01T04:14:13Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0501 04:17:00.007667    4352 command_runner.go:130] > May 01 04:14:13 minikube cri-dockerd[423]: time="2024-05-01T04:14:13Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0501 04:17:00.007667    4352 command_runner.go:130] > May 01 04:14:13 minikube cri-dockerd[423]: time="2024-05-01T04:14:13Z" level=info msg="Start docker client with request timeout 0s"
	I0501 04:17:00.007758    4352 command_runner.go:130] > May 01 04:14:13 minikube cri-dockerd[423]: time="2024-05-01T04:14:13Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0501 04:17:00.007790    4352 command_runner.go:130] > May 01 04:14:13 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0501 04:17:00.007790    4352 command_runner.go:130] > May 01 04:14:13 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0501 04:17:00.007790    4352 command_runner.go:130] > May 01 04:14:13 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0501 04:17:00.007790    4352 command_runner.go:130] > May 01 04:14:16 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0501 04:17:00.007790    4352 command_runner.go:130] > May 01 04:14:16 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0501 04:17:00.007790    4352 command_runner.go:130] > May 01 04:14:16 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0501 04:17:00.007790    4352 command_runner.go:130] > May 01 04:14:16 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0501 04:17:00.007790    4352 command_runner.go:130] > May 01 04:14:16 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
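The cri-docker.service failures above are a startup-ordering issue: cri-dockerd came up before dockerd was listening on /var/run/docker.sock, failed three times in a row, and then hit systemd's restart rate limit ("Start request repeated too quickly") until the Docker engine itself started at 04:14:59. A quick way to probe the same condition from Go (a diagnostic sketch, not part of the test suite):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // Probes the condition cri-dockerd kept failing on above: whether anything
    // is accepting connections on the Docker daemon's unix socket yet.
    func main() {
    	conn, err := net.DialTimeout("unix", "/var/run/docker.sock", 2*time.Second)
    	if err != nil {
    		fmt.Println("dockerd not reachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("dockerd socket is accepting connections")
    }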
	I0501 04:17:00.007790    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 systemd[1]: Starting Docker Application Container Engine...
	I0501 04:17:00.007790    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[651]: time="2024-05-01T04:14:59.653438562Z" level=info msg="Starting up"
	I0501 04:17:00.007790    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[651]: time="2024-05-01T04:14:59.657791992Z" level=info msg="containerd not running, starting managed containerd"
	I0501 04:17:00.007790    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[651]: time="2024-05-01T04:14:59.663198880Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=657
	I0501 04:17:00.007790    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.702542137Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0501 04:17:00.007790    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.732549261Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0501 04:17:00.007790    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.732711054Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0501 04:17:00.007790    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.732864148Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0501 04:17:00.007790    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.732947945Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0501 04:17:00.007790    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.734019203Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0501 04:17:00.007790    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.734463486Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0501 04:17:00.007790    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.735002764Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0501 04:17:00.007790    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.735178358Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0501 04:17:00.007790    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.735234755Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0501 04:17:00.007790    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.735254555Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0501 04:17:00.007790    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.735695937Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0501 04:17:00.007790    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.736590002Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0501 04:17:00.007790    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.739236298Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0501 04:17:00.007790    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.739286896Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0501 04:17:00.008356    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.739479489Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0501 04:17:00.008356    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.739575785Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0501 04:17:00.008356    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.740111064Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0501 04:17:00.008466    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.740186861Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0501 04:17:00.008466    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.740203361Z" level=info msg="metadata content store policy set" policy=shared
	I0501 04:17:00.008466    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.747848861Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0501 04:17:00.008466    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.747973456Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0501 04:17:00.008466    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748003155Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0501 04:17:00.008466    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748021254Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0501 04:17:00.008616    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748087351Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0501 04:17:00.008616    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748176348Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0501 04:17:00.008616    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748553033Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0501 04:17:00.008682    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748726426Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0501 04:17:00.008682    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748830822Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0501 04:17:00.008745    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748853521Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0501 04:17:00.008745    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748872121Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0501 04:17:00.008745    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748887020Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0501 04:17:00.008807    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748901420Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0501 04:17:00.008807    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748916819Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0501 04:17:00.008807    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748932318Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0501 04:17:00.008872    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748946618Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0501 04:17:00.008872    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748960717Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0501 04:17:00.008872    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748974817Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0501 04:17:00.008941    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748996916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.008941    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749013215Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.008941    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749071613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.008941    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749094412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.008941    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749109411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.008941    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749127511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.008941    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749141410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.008941    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749156310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.008941    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749171209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.008941    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749188008Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.009107    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749210407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.009107    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749227507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.009107    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749241106Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.009179    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749261705Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0501 04:17:00.009179    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749287004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.009179    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749377501Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.009245    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749401900Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0501 04:17:00.009245    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749458198Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0501 04:17:00.009309    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749553894Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0501 04:17:00.009309    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749626691Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0501 04:17:00.009478    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749759886Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0501 04:17:00.009543    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749839283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749953278Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749974077Z" level=info msg="NRI interface is disabled by configuration."
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.750421860Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.750811045Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.751024636Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.751103833Z" level=info msg="containerd successfully booted in 0.052926s"
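
The "containerd successfully booted in 0.052926s" line above closes the plugin-loading phase. Every entry carries an RFC 3339 timestamp with nanosecond precision, so durations between any two log lines can be recomputed directly. A minimal stdlib-only Go sketch, using the first and last containerd[657] timestamps visible in this excerpt (the delta is only ~2ms because booting started before the first line shown here; the daemon's own 0.052926s figure covers the full boot):

	package main

	import (
		"fmt"
		"time"
	)

	func mustParse(s string) time.Time {
		t, err := time.Parse(time.RFC3339Nano, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	func main() {
		// First and last containerd[657] timestamps from the excerpt above.
		first := mustParse("2024-05-01T04:14:59.748946618Z")
		last := mustParse("2024-05-01T04:14:59.751103833Z")
		fmt.Println(last.Sub(first)) // ~2.16ms of plugin loading shown here
	}
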
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:15:00 multinode-289800 dockerd[651]: time="2024-05-01T04:15:00.725111442Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:15:00 multinode-289800 dockerd[651]: time="2024-05-01T04:15:00.993003995Z" level=info msg="Loading containers: start."
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:15:01 multinode-289800 dockerd[651]: time="2024-05-01T04:15:01.418709237Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:15:01 multinode-289800 dockerd[651]: time="2024-05-01T04:15:01.511990518Z" level=info msg="Loading containers: done."
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:15:01 multinode-289800 dockerd[651]: time="2024-05-01T04:15:01.539659513Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:15:01 multinode-289800 dockerd[651]: time="2024-05-01T04:15:01.540534438Z" level=info msg="Daemon has completed initialization"
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:15:01 multinode-289800 dockerd[651]: time="2024-05-01T04:15:01.598935417Z" level=info msg="API listen on [::]:2376"
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:15:01 multinode-289800 systemd[1]: Started Docker Application Container Engine.
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:15:01 multinode-289800 dockerd[651]: time="2024-05-01T04:15:01.599463032Z" level=info msg="API listen on /var/run/docker.sock"
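
The "Default bridge (docker0) is assigned with an IP address 172.17.0.0/16" notice above is informational: --bip only matters if that range collides with the host network. A quick stdlib check (assumes Go 1.18+ for net/netip; the 172.28.208.1 address is the Hyper-V gateway seen in the resolv.conf rewrites further below):

	package main

	import (
		"fmt"
		"net/netip"
	)

	func main() {
		// docker0's default range from the log vs. the Hyper-V host gateway.
		bridge := netip.MustParsePrefix("172.17.0.0/16")
		host := netip.MustParseAddr("172.28.208.1")
		fmt.Println(bridge.Contains(host)) // false: no overlap, so --bip is not needed here
	}
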
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:15:27 multinode-289800 dockerd[651]: time="2024-05-01T04:15:27.764446334Z" level=info msg="Processing signal 'terminated'"
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:15:27 multinode-289800 systemd[1]: Stopping Docker Application Container Engine...
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:15:27 multinode-289800 dockerd[651]: time="2024-05-01T04:15:27.766325752Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:15:27 multinode-289800 dockerd[651]: time="2024-05-01T04:15:27.766547266Z" level=info msg="Daemon shutdown complete"
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:15:27 multinode-289800 dockerd[651]: time="2024-05-01T04:15:27.766599570Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:15:27 multinode-289800 dockerd[651]: time="2024-05-01T04:15:27.766627071Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 systemd[1]: docker.service: Deactivated successfully.
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 systemd[1]: Stopped Docker Application Container Engine.
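
The stop sequence above ("Processing signal 'terminated'" through "Daemon shutdown complete" and systemd's "Deactivated successfully") is an ordinary SIGTERM-driven graceful stop before minikube restarts dockerd with its final configuration. A minimal Go sketch of the same pattern, stdlib only (the shutdown body is a placeholder, not dockerd's actual code):

	package main

	import (
		"context"
		"log"
		"os/signal"
		"syscall"
	)

	func main() {
		// Run until SIGTERM ("Processing signal 'terminated'" in the log),
		// then shut down cleanly, mirroring the daemon's sequence above.
		ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM)
		defer stop()

		log.Println("daemon running; send SIGTERM to stop")
		<-ctx.Done()
		// ... stop event streams, close listeners, flush state ...
		log.Println("Daemon shutdown complete")
	}
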
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 systemd[1]: Starting Docker Application Container Engine...
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:28.848356633Z" level=info msg="Starting up"
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:28.852105170Z" level=info msg="containerd not running, starting managed containerd"
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:28.856097222Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1051
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.886653253Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.918280652Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.918435561Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0501 04:17:00.010124    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.918674977Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0501 04:17:00.010124    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.918835587Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0501 04:17:00.010188    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.918914392Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0501 04:17:00.010188    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.919007298Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0501 04:17:00.010188    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.919224411Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0501 04:17:00.010188    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.919342019Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0501 04:17:00.010188    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.919363920Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0501 04:17:00.010188    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.919374921Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0501 04:17:00.010328    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.919401422Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0501 04:17:00.010328    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.919522430Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0501 04:17:00.010328    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.922355909Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0501 04:17:00.010417    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.922472116Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0501 04:17:00.010417    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.922606725Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0501 04:17:00.010476    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.922701131Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0501 04:17:00.010476    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.922740333Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0501 04:17:00.010476    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.922844740Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0501 04:17:00.010476    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.922863441Z" level=info msg="metadata content store policy set" policy=shared
	I0501 04:17:00.010558    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.923199662Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0501 04:17:00.010558    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.923345572Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0501 04:17:00.010558    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.923371973Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0501 04:17:00.010625    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.923387074Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0501 04:17:00.010625    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.923416076Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0501 04:17:00.010625    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.923482380Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0501 04:17:00.010693    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.923717595Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0501 04:17:00.010732    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.923914208Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0501 04:17:00.010756    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924012314Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0501 04:17:00.010756    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924084218Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0501 04:17:00.010802    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924103120Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0501 04:17:00.010802    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924116520Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0501 04:17:00.010802    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924137922Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0501 04:17:00.010802    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924154823Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0501 04:17:00.010802    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924172824Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0501 04:17:00.010919    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924195925Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0501 04:17:00.010919    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924208026Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0501 04:17:00.010985    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924219327Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0501 04:17:00.010985    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924257229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.010985    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924272330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.011053    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924285031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.011053    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924297632Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.011053    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924325534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.011120    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924337534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.011120    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924348235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.011187    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924360536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.011187    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924373137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.011187    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924390538Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.011255    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924403039Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.011255    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924414139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.011315    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924426140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.011315    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924440741Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0501 04:17:00.011315    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924459642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.011382    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924475143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.014508    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924504745Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0501 04:17:00.014508    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924545247Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0501 04:17:00.014508    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924640554Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0501 04:17:00.014508    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924658655Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0501 04:17:00.014508    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924671555Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0501 04:17:00.014508    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924736560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.014508    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924890569Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0501 04:17:00.014508    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924908370Z" level=info msg="NRI interface is disabled by configuration."
	I0501 04:17:00.014508    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.925252392Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0501 04:17:00.014508    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.925540810Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0501 04:17:00.014508    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.925606615Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0501 04:17:00.014508    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.925720522Z" level=info msg="containerd successfully booted in 0.040328s"
	I0501 04:17:00.014508    4352 command_runner.go:130] > May 01 04:15:29 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:29.902259635Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0501 04:17:00.014508    4352 command_runner.go:130] > May 01 04:15:29 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:29.938734241Z" level=info msg="Loading containers: start."
	I0501 04:17:00.015064    4352 command_runner.go:130] > May 01 04:15:30 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:30.252276255Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0501 04:17:00.015064    4352 command_runner.go:130] > May 01 04:15:30 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:30.346319398Z" level=info msg="Loading containers: done."
	I0501 04:17:00.015112    4352 command_runner.go:130] > May 01 04:15:30 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:30.374198460Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0501 04:17:00.015112    4352 command_runner.go:130] > May 01 04:15:30 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:30.374439776Z" level=info msg="Daemon has completed initialization"
	I0501 04:17:00.015154    4352 command_runner.go:130] > May 01 04:15:30 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:30.424572544Z" level=info msg="API listen on [::]:2376"
	I0501 04:17:00.015154    4352 command_runner.go:130] > May 01 04:15:30 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:30.424740154Z" level=info msg="API listen on /var/run/docker.sock"
	I0501 04:17:00.015154    4352 command_runner.go:130] > May 01 04:15:30 multinode-289800 systemd[1]: Started Docker Application Container Engine.
	I0501 04:17:00.015238    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0501 04:17:00.015238    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0501 04:17:00.015238    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0501 04:17:00.015238    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Start docker client with request timeout 0s"
	I0501 04:17:00.015238    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0501 04:17:00.015238    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Loaded network plugin cni"
	I0501 04:17:00.015360    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0501 04:17:00.015360    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0501 04:17:00.015402    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0501 04:17:00.015402    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0501 04:17:00.015402    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Start cri-dockerd grpc backend"
	I0501 04:17:00.015402    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 systemd[1]: Started CRI Interface for Docker Application Container Engine.
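
"Start cri-dockerd grpc backend" above means the CRI endpoint is now a unix socket the kubelet dials. A minimal reachability probe, stdlib only; the socket path is an assumption (cri-dockerd's documented default is /var/run/cri-dockerd.sock, but a unit file can override it):

	package main

	import (
		"log"
		"net"
		"time"
	)

	func main() {
		// Assumed default cri-dockerd endpoint; adjust if overridden.
		const sock = "/var/run/cri-dockerd.sock"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			log.Fatalf("CRI socket not reachable: %v", err)
		}
		defer conn.Close()
		log.Printf("CRI socket %s accepts connections", sock)
	}
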
	I0501 04:17:00.015402    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:37Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-8w9hq_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"9d509d032dc607c6f771d62e39b125d9ec4ef121fdbac0798c929fe3f1662c88\""
	I0501 04:17:00.015402    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:37Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-fc5497c4f-cc6mk_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"79bf9ebb58e36ddfba4654e8de212598f75bb256849f4fa384c80d54954f68f5\""
	I0501 04:17:00.015402    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:37Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-x9zrw_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"baf9e690eb533d1d1d65dee3905f907946c145ab490fd4e62c3d724a0ba12193\""
	I0501 04:17:00.015402    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:37.812954162Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:17:00.015402    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:37.813140474Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:17:00.015402    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:37.813251281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.015402    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:37.813750813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.015402    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:37.908552604Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:17:00.015402    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:37.908932028Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:17:00.015402    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:37.908977330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.015402    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:37.909354354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.015402    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a8e27176eab83655d3f2a52c63326669ef8c796c68155930f53f421789d826f1/resolv.conf as [nameserver 172.28.208.1]"
	I0501 04:17:00.015402    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.022633513Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:17:00.015402    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.022720619Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:17:00.015402    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.022735220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.015402    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.024008700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.015402    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.032046108Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:17:00.015402    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.032104212Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:17:00.015937    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.032117713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.015984    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.032205718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.015984    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3fd53aa8d8f5d6402b604adf1c8c8ae2b5a8c80b90e94152f45e7cb16a71fe46/resolv.conf as [nameserver 172.28.208.1]"
	I0501 04:17:00.015984    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/51e331e75da779107616d5efa0d497152d9c85407f1c172c9ae536bcc2b22bad/resolv.conf as [nameserver 172.28.208.1]"
	I0501 04:17:00.015984    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6e076eed49263cec5b0b06bbaa425cab2bf4a4b0a05e6dfa37993b20dff5ed93/resolv.conf as [nameserver 172.28.208.1]"
	I0501 04:17:00.015984    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.361204210Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:17:00.015984    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.366294631Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:17:00.015984    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.366382437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.015984    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.366929671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.015984    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.427356590Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:17:00.015984    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.427966129Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:17:00.015984    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.428178542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.015984    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.428971092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.015984    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.563334483Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:17:00.015984    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.563717708Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:17:00.015984    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.568278296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.015984    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.568462908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.015984    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.619028803Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:17:00.015984    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.619423228Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:17:00.015984    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.619676644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.015984    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.620258481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.015984    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:42Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0501 04:17:00.016635    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.647452681Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:17:00.016635    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.648388440Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:17:00.016635    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.648417242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.016635    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.648703160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.016635    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.650660084Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:17:00.016635    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.650945902Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:17:00.016635    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.652733715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.016635    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.653556567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.016635    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.703188303Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:17:00.016635    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.703325612Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:17:00.016635    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.703348713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.016635    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.704951615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.016635    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/65bff4b6a8ae020fee0da9e1a818c4bac4d9a43a831eb7b5550b254c1f181ec7/resolv.conf as [nameserver 172.28.208.1]"
	I0501 04:17:00.016635    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9055d30512df38a5bce19ed5afcfdc450a7bd87a1eb169342c8bc7a42e81666f/resolv.conf as [nameserver 172.28.208.1]"
	I0501 04:17:00.016635    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.160153282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:17:00.016635    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.160628512Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:17:00.016635    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.160751120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.016635    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.161166246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.017174    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f79e484da66a15667f79326d8bae0a570ba551fd2e02926fd663a292f6b15752/resolv.conf as [nameserver 172.28.208.1]"
	I0501 04:17:00.017221    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.303671652Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:17:00.017221    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.303759357Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:17:00.017292    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.304597710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.017292    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.304856126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.017343    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.623383256Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:17:00.017343    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.623630372Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:17:00.017343    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.623719877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.017343    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.624154405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.017343    4352 command_runner.go:130] > May 01 04:16:15 multinode-289800 dockerd[1045]: time="2024-05-01T04:16:15.086534690Z" level=info msg="ignoring event" container=01deddefba52af094ad6ca083f5c2bee336ace05959dec5d34b15a9ba6cc2539 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0501 04:17:00.017343    4352 command_runner.go:130] > May 01 04:16:15 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:15.087315924Z" level=info msg="shim disconnected" id=01deddefba52af094ad6ca083f5c2bee336ace05959dec5d34b15a9ba6cc2539 namespace=moby
	I0501 04:17:00.017343    4352 command_runner.go:130] > May 01 04:16:15 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:15.087789544Z" level=warning msg="cleaning up after shim disconnected" id=01deddefba52af094ad6ca083f5c2bee336ace05959dec5d34b15a9ba6cc2539 namespace=moby
	I0501 04:17:00.017343    4352 command_runner.go:130] > May 01 04:16:15 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:15.089400515Z" level=info msg="cleaning up dead shim" namespace=moby
	I0501 04:17:00.017343    4352 command_runner.go:130] > May 01 04:16:29 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:29.233206077Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:17:00.017343    4352 command_runner.go:130] > May 01 04:16:29 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:29.233350185Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:17:00.017343    4352 command_runner.go:130] > May 01 04:16:29 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:29.233373086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.017343    4352 command_runner.go:130] > May 01 04:16:29 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:29.235465402Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.017343    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.458837761Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:17:00.017343    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.459864323Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:17:00.017343    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.464281891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.017343    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.464897329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.017343    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.543149980Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:17:00.017343    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.543283788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:17:00.017343    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.543320690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.017343    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.543548404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.017343    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.598181021Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:17:00.017343    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.598854262Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:17:00.017881    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.599065375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.017881    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.600816581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:16:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ba9a40d190b009b916e22db66996ed829a6cc973db25f55dae89d747629a546b/resolv.conf as [nameserver 172.28.208.1]"
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:16:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2c1e1e1d13f303dcd2ce93f0a883ff4415e684c864a3974a393b2aaba3328348/resolv.conf as [nameserver 172.28.208.1]"
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:16:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b85f507755ab5fd65a5328f5567d969dd5f974c01ee4c5d8e38f03dc6ec900a2/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
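
The "Will attempt to re-write config file .../resolv.conf" lines show cri-dockerd replacing each container's DNS config: the host gateway (172.28.208.1) for pods resolving via the host, and, as in the line just above, the cluster DNS service IP plus search domains for regular pods. An illustrative stdlib-only sketch of the file contents that log line describes (the path is illustrative; the real rewrite happens inside cri-dockerd against the sandbox's files):

	package main

	import (
		"log"
		"os"
	)

	func main() {
		// Contents copied from the log line above; written to a scratch path.
		const conf = "nameserver 10.96.0.10\n" +
			"search default.svc.cluster.local svc.cluster.local cluster.local\n" +
			"options ndots:5\n"
		if err := os.WriteFile("/tmp/resolv.conf.example", []byte(conf), 0o644); err != nil {
			log.Fatal(err)
		}
	}
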
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.282921443Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.283150129Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.283743193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.291296831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.360201124Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.360588900Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.360677995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.361100969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.575166498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.575320589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.575446381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.576248232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:51 multinode-289800 dockerd[1045]: 2024/05/01 04:16:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:51 multinode-289800 dockerd[1045]: 2024/05/01 04:16:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:51 multinode-289800 dockerd[1045]: 2024/05/01 04:16:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:51 multinode-289800 dockerd[1045]: 2024/05/01 04:16:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:51 multinode-289800 dockerd[1045]: 2024/05/01 04:16:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:51 multinode-289800 dockerd[1045]: 2024/05/01 04:16:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:51 multinode-289800 dockerd[1045]: 2024/05/01 04:16:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:51 multinode-289800 dockerd[1045]: 2024/05/01 04:16:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:51 multinode-289800 dockerd[1045]: 2024/05/01 04:16:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:51 multinode-289800 dockerd[1045]: 2024/05/01 04:16:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:51 multinode-289800 dockerd[1045]: 2024/05/01 04:16:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:52 multinode-289800 dockerd[1045]: 2024/05/01 04:16:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:52 multinode-289800 dockerd[1045]: 2024/05/01 04:16:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:52 multinode-289800 dockerd[1045]: 2024/05/01 04:16:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:55 multinode-289800 dockerd[1045]: 2024/05/01 04:16:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:55 multinode-289800 dockerd[1045]: 2024/05/01 04:16:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:55 multinode-289800 dockerd[1045]: 2024/05/01 04:16:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:55 multinode-289800 dockerd[1045]: 2024/05/01 04:16:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:55 multinode-289800 dockerd[1045]: 2024/05/01 04:16:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:55 multinode-289800 dockerd[1045]: 2024/05/01 04:16:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:55 multinode-289800 dockerd[1045]: 2024/05/01 04:16:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:55 multinode-289800 dockerd[1045]: 2024/05/01 04:16:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:55 multinode-289800 dockerd[1045]: 2024/05/01 04:16:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:55 multinode-289800 dockerd[1045]: 2024/05/01 04:16:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:56 multinode-289800 dockerd[1045]: 2024/05/01 04:16:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:56 multinode-289800 dockerd[1045]: 2024/05/01 04:16:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:56 multinode-289800 dockerd[1045]: 2024/05/01 04:16:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:56 multinode-289800 dockerd[1045]: 2024/05/01 04:16:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:59 multinode-289800 dockerd[1045]: 2024/05/01 04:16:59 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:59 multinode-289800 dockerd[1045]: 2024/05/01 04:16:59 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:59 multinode-289800 dockerd[1045]: 2024/05/01 04:16:59 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:59 multinode-289800 dockerd[1045]: 2024/05/01 04:16:59 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:59 multinode-289800 dockerd[1045]: 2024/05/01 04:16:59 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:59 multinode-289800 dockerd[1045]: 2024/05/01 04:16:59 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:59 multinode-289800 dockerd[1045]: 2024/05/01 04:16:59 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.019404    4352 command_runner.go:130] > May 01 04:16:59 multinode-289800 dockerd[1045]: 2024/05/01 04:16:59 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.019404    4352 command_runner.go:130] > May 01 04:16:59 multinode-289800 dockerd[1045]: 2024/05/01 04:16:59 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.055426    4352 logs.go:123] Gathering logs for container status ...
	I0501 04:17:00.056434    4352 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 04:17:00.124376    4352 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0501 04:17:00.124376    4352 command_runner.go:130] > 1efd236274eb6       8c811b4aec35f                                                                                         12 seconds ago       Running             busybox                   1                   b85f507755ab5       busybox-fc5497c4f-cc6mk
	I0501 04:17:00.124499    4352 command_runner.go:130] > b8a9b405d76be       cbb01a7bd410d                                                                                         12 seconds ago       Running             coredns                   1                   2c1e1e1d13f30       coredns-7db6d8ff4d-8w9hq
	I0501 04:17:00.124499    4352 command_runner.go:130] > 8a0208aeafcfe       cbb01a7bd410d                                                                                         12 seconds ago       Running             coredns                   1                   ba9a40d190b00       coredns-7db6d8ff4d-x9zrw
	I0501 04:17:00.124499    4352 command_runner.go:130] > 239a5dfd3ae52       6e38f40d628db                                                                                         31 seconds ago       Running             storage-provisioner       2                   9055d30512df3       storage-provisioner
	I0501 04:17:00.124499    4352 command_runner.go:130] > b7cae3f6b88bc       4950bb10b3f87                                                                                         About a minute ago   Running             kindnet-cni               1                   f79e484da66a1       kindnet-vcxkr
	I0501 04:17:00.124605    4352 command_runner.go:130] > 01deddefba52a       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   9055d30512df3       storage-provisioner
	I0501 04:17:00.124605    4352 command_runner.go:130] > 3efcc92f817ee       a0bf559e280cf                                                                                         About a minute ago   Running             kube-proxy                1                   65bff4b6a8ae0       kube-proxy-bp9zx
	I0501 04:17:00.124669    4352 command_runner.go:130] > 34892fdb68983       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   6e076eed49263       etcd-multinode-289800
	I0501 04:17:00.124750    4352 command_runner.go:130] > 18cd30f3ad28f       c42f13656d0b2                                                                                         About a minute ago   Running             kube-apiserver            0                   51e331e75da77       kube-apiserver-multinode-289800
	I0501 04:17:00.124750    4352 command_runner.go:130] > 66a1b89e6733f       c7aad43836fa5                                                                                         About a minute ago   Running             kube-controller-manager   1                   3fd53aa8d8f5d       kube-controller-manager-multinode-289800
	I0501 04:17:00.124810    4352 command_runner.go:130] > eaf69fce5ee36       259c8277fcbbc                                                                                         About a minute ago   Running             kube-scheduler            1                   a8e27176eab83       kube-scheduler-multinode-289800
	I0501 04:17:00.124844    4352 command_runner.go:130] > 237d3dab2c4e1       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago       Exited              busybox                   0                   79bf9ebb58e36       busybox-fc5497c4f-cc6mk
	I0501 04:17:00.124874    4352 command_runner.go:130] > 15c4496e3a9f0       cbb01a7bd410d                                                                                         24 minutes ago       Exited              coredns                   0                   baf9e690eb533       coredns-7db6d8ff4d-x9zrw
	I0501 04:17:00.124874    4352 command_runner.go:130] > 3e8d5ff9a9e4a       cbb01a7bd410d                                                                                         24 minutes ago       Exited              coredns                   0                   9d509d032dc60       coredns-7db6d8ff4d-8w9hq
	I0501 04:17:00.124874    4352 command_runner.go:130] > 6d5f881ef3987       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              24 minutes ago       Exited              kindnet-cni               0                   4df6ba73bcf68       kindnet-vcxkr
	I0501 04:17:00.124981    4352 command_runner.go:130] > 502684407b0cf       a0bf559e280cf                                                                                         24 minutes ago       Exited              kube-proxy                0                   79bb6a06ed527       kube-proxy-bp9zx
	I0501 04:17:00.124981    4352 command_runner.go:130] > 4b62556f40bec       c7aad43836fa5                                                                                         24 minutes ago       Exited              kube-controller-manager   0                   f72a1c5b5cdd6       kube-controller-manager-multinode-289800
	I0501 04:17:00.124981    4352 command_runner.go:130] > 06f1f84bfde17       259c8277fcbbc                                                                                         24 minutes ago       Exited              kube-scheduler            0                   479b3ec741bef       kube-scheduler-multinode-289800
	I0501 04:17:00.127772    4352 logs.go:123] Gathering logs for kubelet ...
	I0501 04:17:00.127772    4352 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 04:17:00.160771    4352 command_runner.go:130] > May 01 04:15:32 multinode-289800 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0501 04:17:00.161436    4352 command_runner.go:130] > May 01 04:15:32 multinode-289800 kubelet[1383]: I0501 04:15:32.875075    1383 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0501 04:17:00.161436    4352 command_runner.go:130] > May 01 04:15:32 multinode-289800 kubelet[1383]: I0501 04:15:32.875223    1383 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:17:00.161436    4352 command_runner.go:130] > May 01 04:15:32 multinode-289800 kubelet[1383]: I0501 04:15:32.876800    1383 server.go:927] "Client rotation is on, will bootstrap in background"
	I0501 04:17:00.161532    4352 command_runner.go:130] > May 01 04:15:32 multinode-289800 kubelet[1383]: E0501 04:15:32.877636    1383 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0501 04:17:00.161532    4352 command_runner.go:130] > May 01 04:15:32 multinode-289800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0501 04:17:00.161565    4352 command_runner.go:130] > May 01 04:15:32 multinode-289800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0501 04:17:00.161565    4352 command_runner.go:130] > May 01 04:15:33 multinode-289800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0501 04:17:00.161603    4352 command_runner.go:130] > May 01 04:15:33 multinode-289800 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0501 04:17:00.161635    4352 command_runner.go:130] > May 01 04:15:33 multinode-289800 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0501 04:17:00.161635    4352 command_runner.go:130] > May 01 04:15:33 multinode-289800 kubelet[1424]: I0501 04:15:33.593311    1424 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0501 04:17:00.161635    4352 command_runner.go:130] > May 01 04:15:33 multinode-289800 kubelet[1424]: I0501 04:15:33.595065    1424 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:17:00.161635    4352 command_runner.go:130] > May 01 04:15:33 multinode-289800 kubelet[1424]: I0501 04:15:33.597316    1424 server.go:927] "Client rotation is on, will bootstrap in background"
	I0501 04:17:00.161635    4352 command_runner.go:130] > May 01 04:15:33 multinode-289800 kubelet[1424]: E0501 04:15:33.597441    1424 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0501 04:17:00.161635    4352 command_runner.go:130] > May 01 04:15:33 multinode-289800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0501 04:17:00.161635    4352 command_runner.go:130] > May 01 04:15:33 multinode-289800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0501 04:17:00.161635    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
	I0501 04:17:00.161635    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0501 04:17:00.161635    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0501 04:17:00.161635    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 kubelet[1461]: I0501 04:15:34.327211    1461 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0501 04:17:00.161635    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 kubelet[1461]: I0501 04:15:34.327674    1461 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:17:00.161635    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 kubelet[1461]: I0501 04:15:34.328505    1461 server.go:927] "Client rotation is on, will bootstrap in background"
	I0501 04:17:00.161635    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 kubelet[1461]: E0501 04:15:34.328669    1461 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0501 04:17:00.161635    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0501 04:17:00.161635    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0501 04:17:00.161635    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0501 04:17:00.161635    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0501 04:17:00.161635    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.796836    1525 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0501 04:17:00.161635    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.797219    1525 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:17:00.161635    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.797640    1525 server.go:927] "Client rotation is on, will bootstrap in background"
	I0501 04:17:00.161635    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.799493    1525 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0501 04:17:00.161635    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.812278    1525 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0501 04:17:00.161635    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.846443    1525 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0501 04:17:00.161635    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.846668    1525 server.go:810] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0501 04:17:00.161635    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.847577    1525 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0501 04:17:00.161635    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.847671    1525 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-289800","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"Top
ologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0501 04:17:00.162180    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.848600    1525 topology_manager.go:138] "Creating topology manager with none policy"
	I0501 04:17:00.162180    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.848674    1525 container_manager_linux.go:301] "Creating device plugin manager"
	I0501 04:17:00.162180    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.849347    1525 state_mem.go:36] "Initialized new in-memory state store"
	I0501 04:17:00.162180    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.851250    1525 kubelet.go:400] "Attempting to sync node with API server"
	I0501 04:17:00.162249    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.851388    1525 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0501 04:17:00.162249    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.851480    1525 kubelet.go:312] "Adding apiserver pod source"
	I0501 04:17:00.162249    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.852014    1525 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0501 04:17:00.162249    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.863109    1525 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="docker" version="26.0.2" apiVersion="v1"
	I0501 04:17:00.162249    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.868847    1525 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0501 04:17:00.162249    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: W0501 04:15:36.869729    1525 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0501 04:17:00.162249    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: W0501 04:15:36.870640    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-289800&limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:17:00.162249    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: E0501 04:15:36.871055    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-289800&limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:17:00.162249    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: W0501 04:15:36.869620    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:17:00.162249    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: E0501 04:15:36.872992    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:17:00.162249    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.872208    1525 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0501 04:17:00.162249    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.874268    1525 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0501 04:17:00.162249    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.872162    1525 server.go:1264] "Started kubelet"
	I0501 04:17:00.162249    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.876600    1525 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0501 04:17:00.162249    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.878390    1525 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
	I0501 04:17:00.162249    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.882899    1525 server.go:455] "Adding debug handlers to kubelet server"
	I0501 04:17:00.162249    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: E0501 04:15:36.888275    1525 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.28.209.199:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-289800.17cb4242948ce646  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-289800,UID:multinode-289800,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-289800,},FirstTimestamp:2024-05-01 04:15:36.872142406 +0000 UTC m=+0.158641226,LastTimestamp:2024-05-01 04:15:36.872142406 +0000 UTC m=+0.158641226,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-289800,}"
	I0501 04:17:00.162249    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.894478    1525 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0501 04:17:00.162249    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: E0501 04:15:36.899264    1525 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-289800?timeout=10s\": dial tcp 172.28.209.199:8443: connect: connection refused" interval="200ms"
	I0501 04:17:00.162249    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.900556    1525 factory.go:221] Registration of the systemd container factory successfully
	I0501 04:17:00.162249    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.900703    1525 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0501 04:17:00.162249    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.900931    1525 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0501 04:17:00.162810    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.909390    1525 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0501 04:17:00.162810    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: W0501 04:15:36.922744    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:17:00.162810    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: E0501 04:15:36.923300    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:17:00.162810    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.961054    1525 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0501 04:17:00.162810    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.961177    1525 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0501 04:17:00.162810    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.961311    1525 state_mem.go:36] "Initialized new in-memory state store"
	I0501 04:17:00.162960    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.962539    1525 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0501 04:17:00.162960    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.962613    1525 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0501 04:17:00.162960    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.962649    1525 policy_none.go:49] "None policy: Start"
	I0501 04:17:00.162960    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.965264    1525 reconciler.go:26] "Reconciler: start to sync state"
	I0501 04:17:00.162960    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.981258    1525 memory_manager.go:170] "Starting memorymanager" policy="None"
	I0501 04:17:00.162960    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.991286    1525 state_mem.go:35] "Initializing new in-memory state store"
	I0501 04:17:00.162960    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.994410    1525 state_mem.go:75] "Updated machine memory state"
	I0501 04:17:00.162960    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.001037    1525 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0501 04:17:00.163094    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.005977    1525 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0501 04:17:00.163094    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.012301    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-289800"
	I0501 04:17:00.163154    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.018582    1525 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0501 04:17:00.163154    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.020477    1525 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0501 04:17:00.163202    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.020620    1525 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.28.209.199:8443: connect: connection refused" node="multinode-289800"
	I0501 04:17:00.163202    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.021548    1525 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-289800\" not found"
	I0501 04:17:00.163287    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.022495    1525 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0501 04:17:00.163287    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.022690    1525 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0501 04:17:00.163287    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.022715    1525 kubelet.go:2337] "Starting kubelet main sync loop"
	I0501 04:17:00.163335    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.022919    1525 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
	I0501 04:17:00.163425    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: W0501 04:15:37.028696    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:17:00.163425    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.028755    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:17:00.163425    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.045316    1525 iptables.go:577] "Could not set up iptables canary" err=<
	I0501 04:17:00.163425    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0501 04:17:00.163516    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0501 04:17:00.163516    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0501 04:17:00.163598    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0501 04:17:00.163644    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.102048    1525 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-289800?timeout=10s\": dial tcp 172.28.209.199:8443: connect: connection refused" interval="400ms"
	I0501 04:17:00.163644    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.124062    1525 topology_manager.go:215] "Topology Admit Handler" podUID="44d7830a7c97b8c7e460c0508d02be4e" podNamespace="kube-system" podName="kube-scheduler-multinode-289800"
	I0501 04:17:00.163644    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.125237    1525 topology_manager.go:215] "Topology Admit Handler" podUID="8b70cd8d31103a1cfca45e9856766786" podNamespace="kube-system" podName="kube-apiserver-multinode-289800"
	I0501 04:17:00.163644    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.126693    1525 topology_manager.go:215] "Topology Admit Handler" podUID="a17001fd2508d58fea9b1ae465b65254" podNamespace="kube-system" podName="kube-controller-manager-multinode-289800"
	I0501 04:17:00.163742    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.129279    1525 topology_manager.go:215] "Topology Admit Handler" podUID="b12e9024402f49cfac7440d6a2eaf42d" podNamespace="kube-system" podName="etcd-multinode-289800"
	I0501 04:17:00.163742    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.132159    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="479b3ec741befe4b1eddeb02949bcd198e18fa7dc4c196283e811e273e4edcbd"
	I0501 04:17:00.163742    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.132205    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d509d032dc607c6f771d62e39b125d9ec4ef121fdbac0798c929fe3f1662c88"
	I0501 04:17:00.163742    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.132217    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4df6ba73bcf683d21156e67827524b826f94059250b12cf08abd23da8345923a"
	I0501 04:17:00.163742    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.132236    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a338ea43bd9b03a0a56c5b614e36fd54cdd707fb4c2f5819a814e4ffd9bdcb65"
	I0501 04:17:00.163742    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.139102    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f72a1c5b5cdd65332e27f08445a684fc2d2f586ab1b8a2fb2c5c0dfc02b71165"
	I0501 04:17:00.163865    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.158602    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="976a9ff433ccbe7ec8d65d4f2797a7885430504d883ab65524674ad5ab989737"
	I0501 04:17:00.163865    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.174190    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79bb6a06ed527b42fe74673579e4a788915c66cd3717c52a344c73e0b7d12b34"
	I0501 04:17:00.163920    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.191042    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79bf9ebb58e36ddfba4654e8de212598f75bb256849f4fa384c80d54954f68f5"
	I0501 04:17:00.163920    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.208222    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="baf9e690eb533d1d1d65dee3905f907946c145ab490fd4e62c3d724a0ba12193"
	I0501 04:17:00.164008    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214646    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a17001fd2508d58fea9b1ae465b65254-ca-certs\") pod \"kube-controller-manager-multinode-289800\" (UID: \"a17001fd2508d58fea9b1ae465b65254\") " pod="kube-system/kube-controller-manager-multinode-289800"
	I0501 04:17:00.164031    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214710    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a17001fd2508d58fea9b1ae465b65254-k8s-certs\") pod \"kube-controller-manager-multinode-289800\" (UID: \"a17001fd2508d58fea9b1ae465b65254\") " pod="kube-system/kube-controller-manager-multinode-289800"
	I0501 04:17:00.164186    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214752    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a17001fd2508d58fea9b1ae465b65254-kubeconfig\") pod \"kube-controller-manager-multinode-289800\" (UID: \"a17001fd2508d58fea9b1ae465b65254\") " pod="kube-system/kube-controller-manager-multinode-289800"
	I0501 04:17:00.164238    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214812    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8b70cd8d31103a1cfca45e9856766786-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-289800\" (UID: \"8b70cd8d31103a1cfca45e9856766786\") " pod="kube-system/kube-apiserver-multinode-289800"
	I0501 04:17:00.164238    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214855    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/b12e9024402f49cfac7440d6a2eaf42d-etcd-data\") pod \"etcd-multinode-289800\" (UID: \"b12e9024402f49cfac7440d6a2eaf42d\") " pod="kube-system/etcd-multinode-289800"
	I0501 04:17:00.164238    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214875    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/44d7830a7c97b8c7e460c0508d02be4e-kubeconfig\") pod \"kube-scheduler-multinode-289800\" (UID: \"44d7830a7c97b8c7e460c0508d02be4e\") " pod="kube-system/kube-scheduler-multinode-289800"
	I0501 04:17:00.164346    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214899    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8b70cd8d31103a1cfca45e9856766786-ca-certs\") pod \"kube-apiserver-multinode-289800\" (UID: \"8b70cd8d31103a1cfca45e9856766786\") " pod="kube-system/kube-apiserver-multinode-289800"
	I0501 04:17:00.164346    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214925    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8b70cd8d31103a1cfca45e9856766786-k8s-certs\") pod \"kube-apiserver-multinode-289800\" (UID: \"8b70cd8d31103a1cfca45e9856766786\") " pod="kube-system/kube-apiserver-multinode-289800"
	I0501 04:17:00.164346    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214950    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a17001fd2508d58fea9b1ae465b65254-flexvolume-dir\") pod \"kube-controller-manager-multinode-289800\" (UID: \"a17001fd2508d58fea9b1ae465b65254\") " pod="kube-system/kube-controller-manager-multinode-289800"
	I0501 04:17:00.164466    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214973    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a17001fd2508d58fea9b1ae465b65254-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-289800\" (UID: \"a17001fd2508d58fea9b1ae465b65254\") " pod="kube-system/kube-controller-manager-multinode-289800"
	I0501 04:17:00.164466    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214994    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/b12e9024402f49cfac7440d6a2eaf42d-etcd-certs\") pod \"etcd-multinode-289800\" (UID: \"b12e9024402f49cfac7440d6a2eaf42d\") " pod="kube-system/etcd-multinode-289800"
	I0501 04:17:00.164562    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.222614    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-289800"
	I0501 04:17:00.164562    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.223837    1525 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.28.209.199:8443: connect: connection refused" node="multinode-289800"
	I0501 04:17:00.164562    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.227891    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9971ef577f2f8634ce17f0dd1b9640fcf2695833e8dc85607abd2a82571746b8"
	I0501 04:17:00.164562    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.504248    1525 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-289800?timeout=10s\": dial tcp 172.28.209.199:8443: connect: connection refused" interval="800ms"
	I0501 04:17:00.164714    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.625269    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-289800"
	I0501 04:17:00.164714    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.625998    1525 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.28.209.199:8443: connect: connection refused" node="multinode-289800"
	I0501 04:17:00.164714    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: W0501 04:15:37.852634    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:17:00.164842    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.852740    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:17:00.164890    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: W0501 04:15:38.063749    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:17:00.164890    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: E0501 04:15:38.063859    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:17:00.164890    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: I0501 04:15:38.260487    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e076eed49263cec5b0b06bbaa425cab2bf4a4b0a05e6dfa37993b20dff5ed93"
	I0501 04:17:00.164992    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: E0501 04:15:38.306204    1525 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-289800?timeout=10s\": dial tcp 172.28.209.199:8443: connect: connection refused" interval="1.6s"
	I0501 04:17:00.164992    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: W0501 04:15:38.357883    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-289800&limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:17:00.164992    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: E0501 04:15:38.357983    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-289800&limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:17:00.164992    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: W0501 04:15:38.424248    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:17:00.165107    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: E0501 04:15:38.424377    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:17:00.165164    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: I0501 04:15:38.428960    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-289800"
	I0501 04:17:00.165164    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: E0501 04:15:38.431040    1525 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.28.209.199:8443: connect: connection refused" node="multinode-289800"
	I0501 04:17:00.165164    4352 command_runner.go:130] > May 01 04:15:40 multinode-289800 kubelet[1525]: I0501 04:15:40.032371    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-289800"
	I0501 04:17:00.165164    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.639150    1525 kubelet_node_status.go:112] "Node was previously registered" node="multinode-289800"
	I0501 04:17:00.165164    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.640030    1525 kubelet_node_status.go:76] "Successfully registered node" node="multinode-289800"
	I0501 04:17:00.165264    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.642970    1525 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0501 04:17:00.165264    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.644297    1525 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0501 04:17:00.165264    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.646032    1525 setters.go:580] "Node became not ready" node="multinode-289800" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-05-01T04:15:42Z","lastTransitionTime":"2024-05-01T04:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0501 04:17:00.165264    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.869832    1525 apiserver.go:52] "Watching apiserver"
	I0501 04:17:00.165403    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.875356    1525 topology_manager.go:215] "Topology Admit Handler" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-8w9hq"
	I0501 04:17:00.165403    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.875613    1525 topology_manager.go:215] "Topology Admit Handler" podUID="aba82e50-b8f8-40b4-b08a-6d045314d6b6" podNamespace="kube-system" podName="kube-proxy-bp9zx"
	I0501 04:17:00.165403    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.875753    1525 topology_manager.go:215] "Topology Admit Handler" podUID="0b91b14d-bed3-4889-b193-db53daccd395" podNamespace="kube-system" podName="coredns-7db6d8ff4d-x9zrw"
	I0501 04:17:00.165536    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.875936    1525 topology_manager.go:215] "Topology Admit Handler" podUID="72ef61d4-4437-40da-86e7-4d7eb386b6de" podNamespace="kube-system" podName="kindnet-vcxkr"
	I0501 04:17:00.165536    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.876061    1525 topology_manager.go:215] "Topology Admit Handler" podUID="b8d2a827-d9a6-419a-a076-c7695a16a2b5" podNamespace="kube-system" podName="storage-provisioner"
	I0501 04:17:00.165536    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.876192    1525 topology_manager.go:215] "Topology Admit Handler" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f" podNamespace="default" podName="busybox-fc5497c4f-cc6mk"
	I0501 04:17:00.165536    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.876527    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:17:00.165536    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.877384    1525 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-289800" podUID="96a8cf0b-45bc-4636-9264-a0da579b5fa8"
	I0501 04:17:00.165670    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.878678    1525 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-289800" podUID="a1b99f2b-8aed-4037-956a-13bde4551a72"
	I0501 04:17:00.165670    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.879595    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:17:00.165753    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.884364    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:17:00.165753    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.910944    1525 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0501 04:17:00.165796    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.938877    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/72ef61d4-4437-40da-86e7-4d7eb386b6de-xtables-lock\") pod \"kindnet-vcxkr\" (UID: \"72ef61d4-4437-40da-86e7-4d7eb386b6de\") " pod="kube-system/kindnet-vcxkr"
	I0501 04:17:00.165796    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.939029    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b8d2a827-d9a6-419a-a076-c7695a16a2b5-tmp\") pod \"storage-provisioner\" (UID: \"b8d2a827-d9a6-419a-a076-c7695a16a2b5\") " pod="kube-system/storage-provisioner"
	I0501 04:17:00.165796    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.939149    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aba82e50-b8f8-40b4-b08a-6d045314d6b6-xtables-lock\") pod \"kube-proxy-bp9zx\" (UID: \"aba82e50-b8f8-40b4-b08a-6d045314d6b6\") " pod="kube-system/kube-proxy-bp9zx"
	I0501 04:17:00.165930    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.939242    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/72ef61d4-4437-40da-86e7-4d7eb386b6de-cni-cfg\") pod \"kindnet-vcxkr\" (UID: \"72ef61d4-4437-40da-86e7-4d7eb386b6de\") " pod="kube-system/kindnet-vcxkr"
	I0501 04:17:00.166010    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.939318    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/72ef61d4-4437-40da-86e7-4d7eb386b6de-lib-modules\") pod \"kindnet-vcxkr\" (UID: \"72ef61d4-4437-40da-86e7-4d7eb386b6de\") " pod="kube-system/kindnet-vcxkr"
	I0501 04:17:00.166010    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.939427    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aba82e50-b8f8-40b4-b08a-6d045314d6b6-lib-modules\") pod \"kube-proxy-bp9zx\" (UID: \"aba82e50-b8f8-40b4-b08a-6d045314d6b6\") " pod="kube-system/kube-proxy-bp9zx"
	I0501 04:17:00.166075    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.940207    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:17:00.166119    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.940401    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume podName:e3a349e9-97d8-4bba-8eac-deff1948600a nodeName:}" failed. No retries permitted until 2024-05-01 04:15:43.440364296 +0000 UTC m=+6.726863016 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume") pod "coredns-7db6d8ff4d-8w9hq" (UID: "e3a349e9-97d8-4bba-8eac-deff1948600a") : object "kube-system"/"coredns" not registered
	I0501 04:17:00.166119    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.940680    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:17:00.166119    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.940822    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume podName:0b91b14d-bed3-4889-b193-db53daccd395 nodeName:}" failed. No retries permitted until 2024-05-01 04:15:43.440808324 +0000 UTC m=+6.727307144 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume") pod "coredns-7db6d8ff4d-x9zrw" (UID: "0b91b14d-bed3-4889-b193-db53daccd395") : object "kube-system"/"coredns" not registered
	I0501 04:17:00.166216    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.948736    1525 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-289800"
	I0501 04:17:00.166216    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.958916    1525 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-289800"
	I0501 04:17:00.166216    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.975690    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0501 04:17:00.166216    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.975737    1525 projected.go:200] Error preparing data for projected volume kube-api-access-4r64v for pod default/busybox-fc5497c4f-cc6mk: object "default"/"kube-root-ca.crt" not registered
	I0501 04:17:00.166348    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.975832    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v podName:7f61e6ee-cf9a-4903-ba51-2a3b6804717f nodeName:}" failed. No retries permitted until 2024-05-01 04:15:43.475811436 +0000 UTC m=+6.762310156 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-4r64v" (UniqueName: "kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v") pod "busybox-fc5497c4f-cc6mk" (UID: "7f61e6ee-cf9a-4903-ba51-2a3b6804717f") : object "default"/"kube-root-ca.crt" not registered
	I0501 04:17:00.166348    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: I0501 04:15:43.052812    1525 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c17e9f88f256f5527a6565eb2da75f63" path="/var/lib/kubelet/pods/c17e9f88f256f5527a6565eb2da75f63/volumes"
	I0501 04:17:00.166348    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: I0501 04:15:43.054400    1525 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc7b6f2a7c826774b66af910f598e965" path="/var/lib/kubelet/pods/fc7b6f2a7c826774b66af910f598e965/volumes"
	I0501 04:17:00.166467    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: I0501 04:15:43.170146    1525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-289800" podStartSLOduration=1.170112215 podStartE2EDuration="1.170112215s" podCreationTimestamp="2024-05-01 04:15:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-01 04:15:43.140058816 +0000 UTC m=+6.426557536" watchObservedRunningTime="2024-05-01 04:15:43.170112215 +0000 UTC m=+6.456610935"
	I0501 04:17:00.166467    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: I0501 04:15:43.170304    1525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-289800" podStartSLOduration=1.170298327 podStartE2EDuration="1.170298327s" podCreationTimestamp="2024-05-01 04:15:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-01 04:15:43.16893474 +0000 UTC m=+6.455433460" watchObservedRunningTime="2024-05-01 04:15:43.170298327 +0000 UTC m=+6.456797147"
	I0501 04:17:00.166467    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: E0501 04:15:43.444132    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:17:00.166574    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: E0501 04:15:43.444229    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume podName:e3a349e9-97d8-4bba-8eac-deff1948600a nodeName:}" failed. No retries permitted until 2024-05-01 04:15:44.444209637 +0000 UTC m=+7.730708457 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume") pod "coredns-7db6d8ff4d-8w9hq" (UID: "e3a349e9-97d8-4bba-8eac-deff1948600a") : object "kube-system"/"coredns" not registered
	I0501 04:17:00.166574    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: E0501 04:15:43.444591    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:17:00.166574    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: E0501 04:15:43.444633    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume podName:0b91b14d-bed3-4889-b193-db53daccd395 nodeName:}" failed. No retries permitted until 2024-05-01 04:15:44.444622763 +0000 UTC m=+7.731121483 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume") pod "coredns-7db6d8ff4d-x9zrw" (UID: "0b91b14d-bed3-4889-b193-db53daccd395") : object "kube-system"/"coredns" not registered
	I0501 04:17:00.166726    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: E0501 04:15:43.544921    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0501 04:17:00.166726    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: E0501 04:15:43.545047    1525 projected.go:200] Error preparing data for projected volume kube-api-access-4r64v for pod default/busybox-fc5497c4f-cc6mk: object "default"/"kube-root-ca.crt" not registered
	I0501 04:17:00.166812    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: E0501 04:15:43.545141    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v podName:7f61e6ee-cf9a-4903-ba51-2a3b6804717f nodeName:}" failed. No retries permitted until 2024-05-01 04:15:44.545110913 +0000 UTC m=+7.831609633 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-4r64v" (UniqueName: "kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v") pod "busybox-fc5497c4f-cc6mk" (UID: "7f61e6ee-cf9a-4903-ba51-2a3b6804717f") : object "default"/"kube-root-ca.crt" not registered
	I0501 04:17:00.166851    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: I0501 04:15:44.039213    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9055d30512df38a5bce19ed5afcfdc450a7bd87a1eb169342c8bc7a42e81666f"
	I0501 04:17:00.166851    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: I0501 04:15:44.378804    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65bff4b6a8ae020fee0da9e1a818c4bac4d9a43a831eb7b5550b254c1f181ec7"
	I0501 04:17:00.166851    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: E0501 04:15:44.401946    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:17:00.166953    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: I0501 04:15:44.402229    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f79e484da66a15667f79326d8bae0a570ba551fd2e02926fd663a292f6b15752"
	I0501 04:17:00.166953    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: I0501 04:15:44.402476    1525 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-289800" podUID="96a8cf0b-45bc-4636-9264-a0da579b5fa8"
	I0501 04:17:00.166953    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: I0501 04:15:44.403391    1525 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-289800" podUID="a1b99f2b-8aed-4037-956a-13bde4551a72"
	I0501 04:17:00.166953    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: E0501 04:15:44.454688    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:17:00.167068    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: E0501 04:15:44.454983    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume podName:0b91b14d-bed3-4889-b193-db53daccd395 nodeName:}" failed. No retries permitted until 2024-05-01 04:15:46.454902809 +0000 UTC m=+9.741401629 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume") pod "coredns-7db6d8ff4d-x9zrw" (UID: "0b91b14d-bed3-4889-b193-db53daccd395") : object "kube-system"/"coredns" not registered
	I0501 04:17:00.167068    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: E0501 04:15:44.455515    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:17:00.167068    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: E0501 04:15:44.455560    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume podName:e3a349e9-97d8-4bba-8eac-deff1948600a nodeName:}" failed. No retries permitted until 2024-05-01 04:15:46.45554895 +0000 UTC m=+9.742047670 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume") pod "coredns-7db6d8ff4d-8w9hq" (UID: "e3a349e9-97d8-4bba-8eac-deff1948600a") : object "kube-system"/"coredns" not registered
	I0501 04:17:00.167194    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: E0501 04:15:44.555732    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0501 04:17:00.167194    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: E0501 04:15:44.555836    1525 projected.go:200] Error preparing data for projected volume kube-api-access-4r64v for pod default/busybox-fc5497c4f-cc6mk: object "default"/"kube-root-ca.crt" not registered
	I0501 04:17:00.167194    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: E0501 04:15:44.555920    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v podName:7f61e6ee-cf9a-4903-ba51-2a3b6804717f nodeName:}" failed. No retries permitted until 2024-05-01 04:15:46.55587479 +0000 UTC m=+9.842373510 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-4r64v" (UniqueName: "kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v") pod "busybox-fc5497c4f-cc6mk" (UID: "7f61e6ee-cf9a-4903-ba51-2a3b6804717f") : object "default"/"kube-root-ca.crt" not registered
	I0501 04:17:00.167326    4352 command_runner.go:130] > May 01 04:15:45 multinode-289800 kubelet[1525]: E0501 04:15:45.028227    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:17:00.167326    4352 command_runner.go:130] > May 01 04:15:45 multinode-289800 kubelet[1525]: E0501 04:15:45.028491    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:17:00.167392    4352 command_runner.go:130] > May 01 04:15:46 multinode-289800 kubelet[1525]: E0501 04:15:46.023829    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:17:00.167392    4352 command_runner.go:130] > May 01 04:15:46 multinode-289800 kubelet[1525]: E0501 04:15:46.486637    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:17:00.167432    4352 command_runner.go:130] > May 01 04:15:46 multinode-289800 kubelet[1525]: E0501 04:15:46.486963    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume podName:e3a349e9-97d8-4bba-8eac-deff1948600a nodeName:}" failed. No retries permitted until 2024-05-01 04:15:50.486942526 +0000 UTC m=+13.773441346 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume") pod "coredns-7db6d8ff4d-8w9hq" (UID: "e3a349e9-97d8-4bba-8eac-deff1948600a") : object "kube-system"/"coredns" not registered
	I0501 04:17:00.167432    4352 command_runner.go:130] > May 01 04:15:46 multinode-289800 kubelet[1525]: E0501 04:15:46.488686    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:17:00.167572    4352 command_runner.go:130] > May 01 04:15:46 multinode-289800 kubelet[1525]: E0501 04:15:46.489077    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume podName:0b91b14d-bed3-4889-b193-db53daccd395 nodeName:}" failed. No retries permitted until 2024-05-01 04:15:50.488847647 +0000 UTC m=+13.775346467 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume") pod "coredns-7db6d8ff4d-x9zrw" (UID: "0b91b14d-bed3-4889-b193-db53daccd395") : object "kube-system"/"coredns" not registered
	I0501 04:17:00.167572    4352 command_runner.go:130] > May 01 04:15:46 multinode-289800 kubelet[1525]: E0501 04:15:46.587833    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0501 04:17:00.167572    4352 command_runner.go:130] > May 01 04:15:46 multinode-289800 kubelet[1525]: E0501 04:15:46.587977    1525 projected.go:200] Error preparing data for projected volume kube-api-access-4r64v for pod default/busybox-fc5497c4f-cc6mk: object "default"/"kube-root-ca.crt" not registered
	I0501 04:17:00.167653    4352 command_runner.go:130] > May 01 04:15:46 multinode-289800 kubelet[1525]: E0501 04:15:46.588185    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v podName:7f61e6ee-cf9a-4903-ba51-2a3b6804717f nodeName:}" failed. No retries permitted until 2024-05-01 04:15:50.588160623 +0000 UTC m=+13.874659443 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-4r64v" (UniqueName: "kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v") pod "busybox-fc5497c4f-cc6mk" (UID: "7f61e6ee-cf9a-4903-ba51-2a3b6804717f") : object "default"/"kube-root-ca.crt" not registered
	I0501 04:17:00.167716    4352 command_runner.go:130] > May 01 04:15:47 multinode-289800 kubelet[1525]: E0501 04:15:47.027084    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:17:00.167716    4352 command_runner.go:130] > May 01 04:15:47 multinode-289800 kubelet[1525]: E0501 04:15:47.028397    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:17:00.167814    4352 command_runner.go:130] > May 01 04:15:48 multinode-289800 kubelet[1525]: E0501 04:15:48.022969    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:17:00.167814    4352 command_runner.go:130] > May 01 04:15:49 multinode-289800 kubelet[1525]: E0501 04:15:49.024347    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:17:00.167814    4352 command_runner.go:130] > May 01 04:15:49 multinode-289800 kubelet[1525]: E0501 04:15:49.025248    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:17:00.167814    4352 command_runner.go:130] > May 01 04:15:50 multinode-289800 kubelet[1525]: E0501 04:15:50.024175    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:17:00.167950    4352 command_runner.go:130] > May 01 04:15:50 multinode-289800 kubelet[1525]: E0501 04:15:50.523387    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:17:00.167950    4352 command_runner.go:130] > May 01 04:15:50 multinode-289800 kubelet[1525]: E0501 04:15:50.523508    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume podName:0b91b14d-bed3-4889-b193-db53daccd395 nodeName:}" failed. No retries permitted until 2024-05-01 04:15:58.523488538 +0000 UTC m=+21.809987358 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume") pod "coredns-7db6d8ff4d-x9zrw" (UID: "0b91b14d-bed3-4889-b193-db53daccd395") : object "kube-system"/"coredns" not registered
	I0501 04:17:00.167950    4352 command_runner.go:130] > May 01 04:15:50 multinode-289800 kubelet[1525]: E0501 04:15:50.524104    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:17:00.168079    4352 command_runner.go:130] > May 01 04:15:50 multinode-289800 kubelet[1525]: E0501 04:15:50.524150    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume podName:e3a349e9-97d8-4bba-8eac-deff1948600a nodeName:}" failed. No retries permitted until 2024-05-01 04:15:58.524137716 +0000 UTC m=+21.810636436 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume") pod "coredns-7db6d8ff4d-8w9hq" (UID: "e3a349e9-97d8-4bba-8eac-deff1948600a") : object "kube-system"/"coredns" not registered
	I0501 04:17:00.168079    4352 command_runner.go:130] > May 01 04:15:50 multinode-289800 kubelet[1525]: E0501 04:15:50.624897    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0501 04:17:00.168079    4352 command_runner.go:130] > May 01 04:15:50 multinode-289800 kubelet[1525]: E0501 04:15:50.625357    1525 projected.go:200] Error preparing data for projected volume kube-api-access-4r64v for pod default/busybox-fc5497c4f-cc6mk: object "default"/"kube-root-ca.crt" not registered
	I0501 04:17:00.168171    4352 command_runner.go:130] > May 01 04:15:50 multinode-289800 kubelet[1525]: E0501 04:15:50.625742    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v podName:7f61e6ee-cf9a-4903-ba51-2a3b6804717f nodeName:}" failed. No retries permitted until 2024-05-01 04:15:58.625719971 +0000 UTC m=+21.912218691 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-4r64v" (UniqueName: "kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v") pod "busybox-fc5497c4f-cc6mk" (UID: "7f61e6ee-cf9a-4903-ba51-2a3b6804717f") : object "default"/"kube-root-ca.crt" not registered
	I0501 04:17:00.168215    4352 command_runner.go:130] > May 01 04:15:51 multinode-289800 kubelet[1525]: E0501 04:15:51.024464    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:17:00.168215    4352 command_runner.go:130] > May 01 04:15:51 multinode-289800 kubelet[1525]: E0501 04:15:51.024959    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:17:00.168215    4352 command_runner.go:130] > May 01 04:15:52 multinode-289800 kubelet[1525]: E0501 04:15:52.024016    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:17:00.168306    4352 command_runner.go:130] > May 01 04:15:53 multinode-289800 kubelet[1525]: E0501 04:15:53.023669    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:17:00.168306    4352 command_runner.go:130] > May 01 04:15:53 multinode-289800 kubelet[1525]: E0501 04:15:53.024381    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:17:00.168306    4352 command_runner.go:130] > May 01 04:15:54 multinode-289800 kubelet[1525]: E0501 04:15:54.023529    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:17:00.168433    4352 command_runner.go:130] > May 01 04:15:55 multinode-289800 kubelet[1525]: E0501 04:15:55.023399    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:17:00.168433    4352 command_runner.go:130] > May 01 04:15:55 multinode-289800 kubelet[1525]: E0501 04:15:55.024039    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:17:00.168433    4352 command_runner.go:130] > May 01 04:15:56 multinode-289800 kubelet[1525]: E0501 04:15:56.023961    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:17:00.168545    4352 command_runner.go:130] > May 01 04:15:57 multinode-289800 kubelet[1525]: E0501 04:15:57.024583    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:17:00.168545    4352 command_runner.go:130] > May 01 04:15:57 multinode-289800 kubelet[1525]: E0501 04:15:57.025562    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:17:00.168545    4352 command_runner.go:130] > May 01 04:15:58 multinode-289800 kubelet[1525]: E0501 04:15:58.024494    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:17:00.168545    4352 command_runner.go:130] > May 01 04:15:58 multinode-289800 kubelet[1525]: E0501 04:15:58.606520    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:17:00.168670    4352 command_runner.go:130] > May 01 04:15:58 multinode-289800 kubelet[1525]: E0501 04:15:58.606584    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume podName:e3a349e9-97d8-4bba-8eac-deff1948600a nodeName:}" failed. No retries permitted until 2024-05-01 04:16:14.606569125 +0000 UTC m=+37.893067945 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume") pod "coredns-7db6d8ff4d-8w9hq" (UID: "e3a349e9-97d8-4bba-8eac-deff1948600a") : object "kube-system"/"coredns" not registered
	I0501 04:17:00.168670    4352 command_runner.go:130] > May 01 04:15:58 multinode-289800 kubelet[1525]: E0501 04:15:58.607052    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:17:00.168883    4352 command_runner.go:130] > May 01 04:15:58 multinode-289800 kubelet[1525]: E0501 04:15:58.607095    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume podName:0b91b14d-bed3-4889-b193-db53daccd395 nodeName:}" failed. No retries permitted until 2024-05-01 04:16:14.607084827 +0000 UTC m=+37.893583547 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume") pod "coredns-7db6d8ff4d-x9zrw" (UID: "0b91b14d-bed3-4889-b193-db53daccd395") : object "kube-system"/"coredns" not registered
	I0501 04:17:00.168925    4352 command_runner.go:130] > May 01 04:15:58 multinode-289800 kubelet[1525]: E0501 04:15:58.707959    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0501 04:17:00.168925    4352 command_runner.go:130] > May 01 04:15:58 multinode-289800 kubelet[1525]: E0501 04:15:58.708171    1525 projected.go:200] Error preparing data for projected volume kube-api-access-4r64v for pod default/busybox-fc5497c4f-cc6mk: object "default"/"kube-root-ca.crt" not registered
	I0501 04:17:00.169016    4352 command_runner.go:130] > May 01 04:15:58 multinode-289800 kubelet[1525]: E0501 04:15:58.708240    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v podName:7f61e6ee-cf9a-4903-ba51-2a3b6804717f nodeName:}" failed. No retries permitted until 2024-05-01 04:16:14.708221599 +0000 UTC m=+37.994720419 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-4r64v" (UniqueName: "kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v") pod "busybox-fc5497c4f-cc6mk" (UID: "7f61e6ee-cf9a-4903-ba51-2a3b6804717f") : object "default"/"kube-root-ca.crt" not registered
	I0501 04:17:00.169074    4352 command_runner.go:130] > May 01 04:15:59 multinode-289800 kubelet[1525]: E0501 04:15:59.024158    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:17:00.169074    4352 command_runner.go:130] > May 01 04:15:59 multinode-289800 kubelet[1525]: E0501 04:15:59.025055    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:17:00.169131    4352 command_runner.go:130] > May 01 04:16:00 multinode-289800 kubelet[1525]: E0501 04:16:00.023216    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:17:00.169189    4352 command_runner.go:130] > May 01 04:16:01 multinode-289800 kubelet[1525]: E0501 04:16:01.024905    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:17:00.169229    4352 command_runner.go:130] > May 01 04:16:01 multinode-289800 kubelet[1525]: E0501 04:16:01.025585    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:17:00.169229    4352 command_runner.go:130] > May 01 04:16:02 multinode-289800 kubelet[1525]: E0501 04:16:02.024143    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:17:00.169229    4352 command_runner.go:130] > May 01 04:16:03 multinode-289800 kubelet[1525]: E0501 04:16:03.023409    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:17:00.169348    4352 command_runner.go:130] > May 01 04:16:03 multinode-289800 kubelet[1525]: E0501 04:16:03.024062    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:17:00.169402    4352 command_runner.go:130] > May 01 04:16:04 multinode-289800 kubelet[1525]: E0501 04:16:04.023182    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:17:00.169441    4352 command_runner.go:130] > May 01 04:16:05 multinode-289800 kubelet[1525]: E0501 04:16:05.028055    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:17:00.169441    4352 command_runner.go:130] > May 01 04:16:05 multinode-289800 kubelet[1525]: E0501 04:16:05.029254    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:17:00.169531    4352 command_runner.go:130] > May 01 04:16:06 multinode-289800 kubelet[1525]: E0501 04:16:06.024522    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:17:00.169587    4352 command_runner.go:130] > May 01 04:16:07 multinode-289800 kubelet[1525]: E0501 04:16:07.024384    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:17:00.169587    4352 command_runner.go:130] > May 01 04:16:07 multinode-289800 kubelet[1525]: E0501 04:16:07.025431    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:17:00.169652    4352 command_runner.go:130] > May 01 04:16:08 multinode-289800 kubelet[1525]: E0501 04:16:08.024168    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:17:00.169708    4352 command_runner.go:130] > May 01 04:16:09 multinode-289800 kubelet[1525]: E0501 04:16:09.024117    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:17:00.169750    4352 command_runner.go:130] > May 01 04:16:09 multinode-289800 kubelet[1525]: E0501 04:16:09.025560    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:17:00.169750    4352 command_runner.go:130] > May 01 04:16:10 multinode-289800 kubelet[1525]: E0501 04:16:10.023881    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:17:00.169750    4352 command_runner.go:130] > May 01 04:16:11 multinode-289800 kubelet[1525]: E0501 04:16:11.023619    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:17:00.169843    4352 command_runner.go:130] > May 01 04:16:11 multinode-289800 kubelet[1525]: E0501 04:16:11.024277    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:17:00.169843    4352 command_runner.go:130] > May 01 04:16:12 multinode-289800 kubelet[1525]: E0501 04:16:12.024236    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:17:00.169919    4352 command_runner.go:130] > May 01 04:16:13 multinode-289800 kubelet[1525]: E0501 04:16:13.023153    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:17:00.169964    4352 command_runner.go:130] > May 01 04:16:13 multinode-289800 kubelet[1525]: E0501 04:16:13.023926    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:17:00.170025    4352 command_runner.go:130] > May 01 04:16:14 multinode-289800 kubelet[1525]: E0501 04:16:14.023335    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:17:00.170025    4352 command_runner.go:130] > May 01 04:16:14 multinode-289800 kubelet[1525]: E0501 04:16:14.657138    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:17:00.170089    4352 command_runner.go:130] > May 01 04:16:14 multinode-289800 kubelet[1525]: E0501 04:16:14.657461    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume podName:e3a349e9-97d8-4bba-8eac-deff1948600a nodeName:}" failed. No retries permitted until 2024-05-01 04:16:46.657440103 +0000 UTC m=+69.943938823 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume") pod "coredns-7db6d8ff4d-8w9hq" (UID: "e3a349e9-97d8-4bba-8eac-deff1948600a") : object "kube-system"/"coredns" not registered
	I0501 04:17:00.170148    4352 command_runner.go:130] > May 01 04:16:14 multinode-289800 kubelet[1525]: E0501 04:16:14.657218    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:17:00.170148    4352 command_runner.go:130] > May 01 04:16:14 multinode-289800 kubelet[1525]: E0501 04:16:14.657858    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume podName:0b91b14d-bed3-4889-b193-db53daccd395 nodeName:}" failed. No retries permitted until 2024-05-01 04:16:46.65783162 +0000 UTC m=+69.944330440 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume") pod "coredns-7db6d8ff4d-x9zrw" (UID: "0b91b14d-bed3-4889-b193-db53daccd395") : object "kube-system"/"coredns" not registered
	I0501 04:17:00.170210    4352 command_runner.go:130] > May 01 04:16:14 multinode-289800 kubelet[1525]: E0501 04:16:14.758303    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0501 04:17:00.170210    4352 command_runner.go:130] > May 01 04:16:14 multinode-289800 kubelet[1525]: E0501 04:16:14.758421    1525 projected.go:200] Error preparing data for projected volume kube-api-access-4r64v for pod default/busybox-fc5497c4f-cc6mk: object "default"/"kube-root-ca.crt" not registered
	I0501 04:17:00.170275    4352 command_runner.go:130] > May 01 04:16:14 multinode-289800 kubelet[1525]: E0501 04:16:14.758487    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v podName:7f61e6ee-cf9a-4903-ba51-2a3b6804717f nodeName:}" failed. No retries permitted until 2024-05-01 04:16:46.758469083 +0000 UTC m=+70.044967903 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-4r64v" (UniqueName: "kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v") pod "busybox-fc5497c4f-cc6mk" (UID: "7f61e6ee-cf9a-4903-ba51-2a3b6804717f") : object "default"/"kube-root-ca.crt" not registered
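[annotation] Note the retry cadence across this stretch of the log: the volume manager's durationBeforeRetry doubles on every failed MountVolume.SetUp attempt, 500ms, 1s, 2s, 4s, 8s, 16s, and now 32s. A minimal sketch of that schedule (the two-minute cap is an assumption, not taken from this log):

        package main

        import (
        	"fmt"
        	"time"
        )

        func main() {
        	const maxDelay = 2 * time.Minute // assumed cap
        	for d := 500 * time.Millisecond; d <= maxDelay; d *= 2 {
        		fmt.Println(d) // 500ms 1s 2s 4s 8s 16s 32s 1m4s
        	}
        }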
	I0501 04:17:00.170337    4352 command_runner.go:130] > May 01 04:16:15 multinode-289800 kubelet[1525]: E0501 04:16:15.023369    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:17:00.170398    4352 command_runner.go:130] > May 01 04:16:15 multinode-289800 kubelet[1525]: E0501 04:16:15.024797    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:17:00.170398    4352 command_runner.go:130] > May 01 04:16:15 multinode-289800 kubelet[1525]: I0501 04:16:15.886834    1525 scope.go:117] "RemoveContainer" containerID="ee2238f98e350e8d80528b60fc5b614ce6048d8b34af2034a9947e26d8e6beab"
	I0501 04:17:00.170460    4352 command_runner.go:130] > May 01 04:16:15 multinode-289800 kubelet[1525]: I0501 04:16:15.887225    1525 scope.go:117] "RemoveContainer" containerID="01deddefba52af094ad6ca083f5c2bee336ace05959dec5d34b15a9ba6cc2539"
	I0501 04:17:00.170572    4352 command_runner.go:130] > May 01 04:16:15 multinode-289800 kubelet[1525]: E0501 04:16:15.887510    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b8d2a827-d9a6-419a-a076-c7695a16a2b5)\"" pod="kube-system/storage-provisioner" podUID="b8d2a827-d9a6-419a-a076-c7695a16a2b5"
	I0501 04:17:00.170572    4352 command_runner.go:130] > May 01 04:16:16 multinode-289800 kubelet[1525]: E0501 04:16:16.024360    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:17:00.170572    4352 command_runner.go:130] > May 01 04:16:16 multinode-289800 kubelet[1525]: I0501 04:16:16.618138    1525 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
	I0501 04:17:00.170572    4352 command_runner.go:130] > May 01 04:16:29 multinode-289800 kubelet[1525]: I0501 04:16:29.024408    1525 scope.go:117] "RemoveContainer" containerID="01deddefba52af094ad6ca083f5c2bee336ace05959dec5d34b15a9ba6cc2539"
	I0501 04:17:00.170572    4352 command_runner.go:130] > May 01 04:16:37 multinode-289800 kubelet[1525]: I0501 04:16:37.040204    1525 scope.go:117] "RemoveContainer" containerID="3244d1ee5ab428faf09a962609f2c940c36a998727a01b873d382eb5ee600ca3"
	I0501 04:17:00.170715    4352 command_runner.go:130] > May 01 04:16:37 multinode-289800 kubelet[1525]: E0501 04:16:37.057362    1525 iptables.go:577] "Could not set up iptables canary" err=<
	I0501 04:17:00.170715    4352 command_runner.go:130] > May 01 04:16:37 multinode-289800 kubelet[1525]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0501 04:17:00.170715    4352 command_runner.go:130] > May 01 04:16:37 multinode-289800 kubelet[1525]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0501 04:17:00.170780    4352 command_runner.go:130] > May 01 04:16:37 multinode-289800 kubelet[1525]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0501 04:17:00.170780    4352 command_runner.go:130] > May 01 04:16:37 multinode-289800 kubelet[1525]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0501 04:17:00.170780    4352 command_runner.go:130] > May 01 04:16:37 multinode-289800 kubelet[1525]: I0501 04:16:37.089866    1525 scope.go:117] "RemoveContainer" containerID="bbbe9bf276852c1e75b7b472a87e95dcf9a0871f6273a4c312d445eb91dfe06d"
	I0501 04:17:00.170848    4352 command_runner.go:130] > May 01 04:16:37 multinode-289800 kubelet[1525]: E0501 04:16:37.204127    1525 kuberuntime_manager.go:1450] "PodSandboxStatus of sandbox for pod" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 976a9ff433ccbe7ec8d65d4f2797a7885430504d883ab65524674ad5ab989737" podSandboxID="976a9ff433ccbe7ec8d65d4f2797a7885430504d883ab65524674ad5ab989737" pod="kube-system/kube-apiserver-multinode-289800"
	I0501 04:17:00.170848    4352 command_runner.go:130] > May 01 04:16:37 multinode-289800 kubelet[1525]: E0501 04:16:37.204257    1525 generic.go:453] "PLEG: Write status" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 976a9ff433ccbe7ec8d65d4f2797a7885430504d883ab65524674ad5ab989737" pod="kube-system/kube-apiserver-multinode-289800"
	I0501 04:17:00.170913    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 kubelet[1525]: I0501 04:16:47.967198    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c1e1e1d13f303dcd2ce93f0a883ff4415e684c864a3974a393b2aaba3328348"
	I0501 04:17:00.170913    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 kubelet[1525]: I0501 04:16:48.001452    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba9a40d190b009b916e22db66996ed829a6cc973db25f55dae89d747629a546b"
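
The kubelet entries above show the volume manager rescheduling a failed MountVolume.SetUp with a growing delay ("durationBeforeRetry 32s", with the next retry pinned to a wall-clock time). A minimal sketch of that doubling-with-cap backoff follows; the 500ms starting delay and the 2-minute cap are assumptions for illustration, not values taken from this log or from kubelet source.

// Sketch only (not kubelet's implementation): exponential backoff of the
// kind visible in the "No retries permitted until ... (durationBeforeRetry 32s)"
// lines above. The delay doubles per failure up to a cap.
package main

import (
	"fmt"
	"time"
)

const (
	initialBackoff = 500 * time.Millisecond // assumed starting delay
	maxBackoff     = 2 * time.Minute        // assumed cap
)

func nextBackoff(cur time.Duration) time.Duration {
	if cur == 0 {
		return initialBackoff
	}
	next := cur * 2
	if next > maxBackoff {
		next = maxBackoff
	}
	return next
}

func main() {
	d := time.Duration(0)
	for attempt := 1; attempt <= 10; attempt++ {
		d = nextBackoff(d)
		fmt.Printf("attempt %d: wait %v before retrying mount\n", attempt, d)
	}
}
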
	I0501 04:17:00.226252    4352 logs.go:123] Gathering logs for kube-apiserver [18cd30f3ad28] ...
	I0501 04:17:00.226252    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cd30f3ad28"
	I0501 04:17:00.270142    4352 command_runner.go:130] ! I0501 04:15:39.445795       1 options.go:221] external host was not specified, using 172.28.209.199
	I0501 04:17:00.271132    4352 command_runner.go:130] ! I0501 04:15:39.453956       1 server.go:148] Version: v1.30.0
	I0501 04:17:00.271132    4352 command_runner.go:130] ! I0501 04:15:39.454357       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:17:00.271132    4352 command_runner.go:130] ! I0501 04:15:40.258184       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0501 04:17:00.271132    4352 command_runner.go:130] ! I0501 04:15:40.258591       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0501 04:17:00.271261    4352 command_runner.go:130] ! I0501 04:15:40.260085       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0501 04:17:00.271337    4352 command_runner.go:130] ! I0501 04:15:40.260405       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0501 04:17:00.271337    4352 command_runner.go:130] ! I0501 04:15:40.261810       1 instance.go:299] Using reconciler: lease
	I0501 04:17:00.271337    4352 command_runner.go:130] ! I0501 04:15:40.801281       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0501 04:17:00.271337    4352 command_runner.go:130] ! W0501 04:15:40.801386       1 genericapiserver.go:733] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0501 04:17:00.271337    4352 command_runner.go:130] ! I0501 04:15:41.090803       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0501 04:17:00.271337    4352 command_runner.go:130] ! I0501 04:15:41.091252       1 instance.go:696] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0501 04:17:00.271337    4352 command_runner.go:130] ! I0501 04:15:41.359171       1 instance.go:696] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0501 04:17:00.271581    4352 command_runner.go:130] ! I0501 04:15:41.532740       1 instance.go:696] API group "resource.k8s.io" is not enabled, skipping.
	I0501 04:17:00.271581    4352 command_runner.go:130] ! I0501 04:15:41.570911       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0501 04:17:00.271581    4352 command_runner.go:130] ! W0501 04:15:41.571018       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0501 04:17:00.271581    4352 command_runner.go:130] ! W0501 04:15:41.571046       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0501 04:17:00.271581    4352 command_runner.go:130] ! I0501 04:15:41.571875       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0501 04:17:00.271581    4352 command_runner.go:130] ! W0501 04:15:41.572053       1 genericapiserver.go:733] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0501 04:17:00.271581    4352 command_runner.go:130] ! I0501 04:15:41.573317       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0501 04:17:00.271581    4352 command_runner.go:130] ! I0501 04:15:41.574692       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0501 04:17:00.271581    4352 command_runner.go:130] ! W0501 04:15:41.574726       1 genericapiserver.go:733] Skipping API autoscaling/v2beta1 because it has no resources.
	I0501 04:17:00.271581    4352 command_runner.go:130] ! W0501 04:15:41.574734       1 genericapiserver.go:733] Skipping API autoscaling/v2beta2 because it has no resources.
	I0501 04:17:00.271581    4352 command_runner.go:130] ! I0501 04:15:41.576633       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0501 04:17:00.271789    4352 command_runner.go:130] ! W0501 04:15:41.576726       1 genericapiserver.go:733] Skipping API batch/v1beta1 because it has no resources.
	I0501 04:17:00.271789    4352 command_runner.go:130] ! I0501 04:15:41.577645       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0501 04:17:00.271789    4352 command_runner.go:130] ! W0501 04:15:41.577739       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0501 04:17:00.271789    4352 command_runner.go:130] ! W0501 04:15:41.577748       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0501 04:17:00.271868    4352 command_runner.go:130] ! I0501 04:15:41.578543       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0501 04:17:00.271868    4352 command_runner.go:130] ! W0501 04:15:41.578618       1 genericapiserver.go:733] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0501 04:17:00.271868    4352 command_runner.go:130] ! W0501 04:15:41.578731       1 genericapiserver.go:733] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0501 04:17:00.271942    4352 command_runner.go:130] ! I0501 04:15:41.579623       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0501 04:17:00.271942    4352 command_runner.go:130] ! I0501 04:15:41.582482       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0501 04:17:00.271942    4352 command_runner.go:130] ! W0501 04:15:41.582572       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0501 04:17:00.271942    4352 command_runner.go:130] ! W0501 04:15:41.582581       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0501 04:17:00.272006    4352 command_runner.go:130] ! I0501 04:15:41.583284       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0501 04:17:00.272034    4352 command_runner.go:130] ! W0501 04:15:41.583417       1 genericapiserver.go:733] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0501 04:17:00.272034    4352 command_runner.go:130] ! W0501 04:15:41.583428       1 genericapiserver.go:733] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0501 04:17:00.272034    4352 command_runner.go:130] ! I0501 04:15:41.585084       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0501 04:17:00.272034    4352 command_runner.go:130] ! W0501 04:15:41.585203       1 genericapiserver.go:733] Skipping API policy/v1beta1 because it has no resources.
	I0501 04:17:00.272097    4352 command_runner.go:130] ! I0501 04:15:41.588956       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0501 04:17:00.272123    4352 command_runner.go:130] ! W0501 04:15:41.589055       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0501 04:17:00.272123    4352 command_runner.go:130] ! W0501 04:15:41.589067       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0501 04:17:00.272153    4352 command_runner.go:130] ! I0501 04:15:41.589951       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0501 04:17:00.272202    4352 command_runner.go:130] ! W0501 04:15:41.590056       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0501 04:17:00.272232    4352 command_runner.go:130] ! W0501 04:15:41.590066       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0501 04:17:00.272232    4352 command_runner.go:130] ! I0501 04:15:41.593577       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0501 04:17:00.272232    4352 command_runner.go:130] ! W0501 04:15:41.593674       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0501 04:17:00.272232    4352 command_runner.go:130] ! W0501 04:15:41.593684       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0501 04:17:00.272232    4352 command_runner.go:130] ! I0501 04:15:41.595694       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0501 04:17:00.272314    4352 command_runner.go:130] ! I0501 04:15:41.597680       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0501 04:17:00.272334    4352 command_runner.go:130] ! W0501 04:15:41.597864       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0501 04:17:00.272334    4352 command_runner.go:130] ! W0501 04:15:41.597875       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0501 04:17:00.272397    4352 command_runner.go:130] ! I0501 04:15:41.603955       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0501 04:17:00.272425    4352 command_runner.go:130] ! W0501 04:15:41.604059       1 genericapiserver.go:733] Skipping API apps/v1beta2 because it has no resources.
	I0501 04:17:00.272456    4352 command_runner.go:130] ! W0501 04:15:41.604069       1 genericapiserver.go:733] Skipping API apps/v1beta1 because it has no resources.
	I0501 04:17:00.272456    4352 command_runner.go:130] ! I0501 04:15:41.607445       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0501 04:17:00.272456    4352 command_runner.go:130] ! W0501 04:15:41.607533       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0501 04:17:00.272456    4352 command_runner.go:130] ! W0501 04:15:41.607543       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0501 04:17:00.272456    4352 command_runner.go:130] ! I0501 04:15:41.608797       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0501 04:17:00.272456    4352 command_runner.go:130] ! W0501 04:15:41.608817       1 genericapiserver.go:733] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0501 04:17:00.272456    4352 command_runner.go:130] ! I0501 04:15:41.625599       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0501 04:17:00.272456    4352 command_runner.go:130] ! W0501 04:15:41.625618       1 genericapiserver.go:733] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0501 04:17:00.272456    4352 command_runner.go:130] ! I0501 04:15:42.332139       1 secure_serving.go:213] Serving securely on [::]:8443
	I0501 04:17:00.272456    4352 command_runner.go:130] ! I0501 04:15:42.332337       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0501 04:17:00.272456    4352 command_runner.go:130] ! I0501 04:15:42.332595       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0501 04:17:00.272456    4352 command_runner.go:130] ! I0501 04:15:42.333006       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0501 04:17:00.272456    4352 command_runner.go:130] ! I0501 04:15:42.333577       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0501 04:17:00.272456    4352 command_runner.go:130] ! I0501 04:15:42.333909       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0501 04:17:00.272456    4352 command_runner.go:130] ! I0501 04:15:42.334990       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0501 04:17:00.272456    4352 command_runner.go:130] ! I0501 04:15:42.335027       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0501 04:17:00.272456    4352 command_runner.go:130] ! I0501 04:15:42.335107       1 aggregator.go:163] waiting for initial CRD sync...
	I0501 04:17:00.272456    4352 command_runner.go:130] ! I0501 04:15:42.335378       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0501 04:17:00.272456    4352 command_runner.go:130] ! I0501 04:15:42.335424       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0501 04:17:00.272456    4352 command_runner.go:130] ! I0501 04:15:42.335517       1 available_controller.go:423] Starting AvailableConditionController
	I0501 04:17:00.272456    4352 command_runner.go:130] ! I0501 04:15:42.335533       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0501 04:17:00.272456    4352 command_runner.go:130] ! I0501 04:15:42.335556       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0501 04:17:00.272456    4352 command_runner.go:130] ! I0501 04:15:42.337835       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0501 04:17:00.272456    4352 command_runner.go:130] ! I0501 04:15:42.338196       1 controller.go:116] Starting legacy_token_tracking_controller
	I0501 04:17:00.272456    4352 command_runner.go:130] ! I0501 04:15:42.338360       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0501 04:17:00.272456    4352 command_runner.go:130] ! I0501 04:15:42.338519       1 controller.go:78] Starting OpenAPI AggregationController
	I0501 04:17:00.272456    4352 command_runner.go:130] ! I0501 04:15:42.339167       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0501 04:17:00.272456    4352 command_runner.go:130] ! I0501 04:15:42.339360       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0501 04:17:00.272456    4352 command_runner.go:130] ! I0501 04:15:42.339853       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0501 04:17:00.272456    4352 command_runner.go:130] ! I0501 04:15:42.361139       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0501 04:17:00.272456    4352 command_runner.go:130] ! I0501 04:15:42.361155       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0501 04:17:00.272456    4352 command_runner.go:130] ! I0501 04:15:42.361192       1 controller.go:139] Starting OpenAPI controller
	I0501 04:17:00.273005    4352 command_runner.go:130] ! I0501 04:15:42.361219       1 controller.go:87] Starting OpenAPI V3 controller
	I0501 04:17:00.273005    4352 command_runner.go:130] ! I0501 04:15:42.361233       1 naming_controller.go:291] Starting NamingConditionController
	I0501 04:17:00.273005    4352 command_runner.go:130] ! I0501 04:15:42.361253       1 establishing_controller.go:76] Starting EstablishingController
	I0501 04:17:00.273005    4352 command_runner.go:130] ! I0501 04:15:42.361274       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0501 04:17:00.273005    4352 command_runner.go:130] ! I0501 04:15:42.361288       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0501 04:17:00.273005    4352 command_runner.go:130] ! I0501 04:15:42.361301       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0501 04:17:00.273005    4352 command_runner.go:130] ! I0501 04:15:42.395816       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0501 04:17:00.273005    4352 command_runner.go:130] ! I0501 04:15:42.396242       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0501 04:17:00.273005    4352 command_runner.go:130] ! I0501 04:15:42.496145       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0501 04:17:00.273132    4352 command_runner.go:130] ! I0501 04:15:42.510644       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0501 04:17:00.273132    4352 command_runner.go:130] ! I0501 04:15:42.510702       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0501 04:17:00.273132    4352 command_runner.go:130] ! I0501 04:15:42.510859       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0501 04:17:00.273187    4352 command_runner.go:130] ! I0501 04:15:42.518082       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0501 04:17:00.273187    4352 command_runner.go:130] ! I0501 04:15:42.518718       1 aggregator.go:165] initial CRD sync complete...
	I0501 04:17:00.273187    4352 command_runner.go:130] ! I0501 04:15:42.518822       1 autoregister_controller.go:141] Starting autoregister controller
	I0501 04:17:00.273229    4352 command_runner.go:130] ! I0501 04:15:42.518833       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0501 04:17:00.273229    4352 command_runner.go:130] ! I0501 04:15:42.518839       1 cache.go:39] Caches are synced for autoregister controller
	I0501 04:17:00.273229    4352 command_runner.go:130] ! I0501 04:15:42.535654       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0501 04:17:00.273229    4352 command_runner.go:130] ! I0501 04:15:42.538744       1 shared_informer.go:320] Caches are synced for configmaps
	I0501 04:17:00.273229    4352 command_runner.go:130] ! I0501 04:15:42.553249       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0501 04:17:00.273229    4352 command_runner.go:130] ! I0501 04:15:42.558886       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0501 04:17:00.273229    4352 command_runner.go:130] ! I0501 04:15:42.560982       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0501 04:17:00.273363    4352 command_runner.go:130] ! I0501 04:15:42.561020       1 policy_source.go:224] refreshing policies
	I0501 04:17:00.273363    4352 command_runner.go:130] ! I0501 04:15:42.641630       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0501 04:17:00.273363    4352 command_runner.go:130] ! I0501 04:15:43.354880       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0501 04:17:00.273363    4352 command_runner.go:130] ! W0501 04:15:43.981051       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.28.209.199]
	I0501 04:17:00.273363    4352 command_runner.go:130] ! I0501 04:15:43.982709       1 controller.go:615] quota admission added evaluator for: endpoints
	I0501 04:17:00.273363    4352 command_runner.go:130] ! I0501 04:15:44.022518       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0501 04:17:00.273363    4352 command_runner.go:130] ! I0501 04:15:45.344677       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0501 04:17:00.273478    4352 command_runner.go:130] ! I0501 04:15:45.642753       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0501 04:17:00.273478    4352 command_runner.go:130] ! I0501 04:15:45.672938       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0501 04:17:00.273478    4352 command_runner.go:130] ! I0501 04:15:45.801984       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0501 04:17:00.273478    4352 command_runner.go:130] ! I0501 04:15:45.823813       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
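
The kube-apiserver block above registers each served group version and logs "Skipping API <group/version> because it has no resources" for the alpha/beta versions that are compiled in but serve nothing. When diffing behavior across Kubernetes versions, tallying those lines is a quick check; the throwaway Go sketch below parses that exact message format (the sample lines are copied from the log, the parser itself is illustrative and not part of minikube).

// Sketch: tally "Skipping API <group/version> because it has no resources"
// lines from an apiserver log of the format shown above.
package main

import (
	"bufio"
	"fmt"
	"regexp"
	"strings"
)

var skipRe = regexp.MustCompile(`Skipping API (\S+) because it has no resources`)

func main() {
	// Sample input copied verbatim from the log block above.
	logText := `W0501 04:15:41.571018       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
W0501 04:15:41.571046       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
W0501 04:15:41.572053       1 genericapiserver.go:733] Skipping API authorization.k8s.io/v1beta1 because it has no resources.`

	sc := bufio.NewScanner(strings.NewReader(logText))
	count := 0
	for sc.Scan() {
		if m := skipRe.FindStringSubmatch(sc.Text()); m != nil {
			count++
			fmt.Println("skipped:", m[1])
		}
	}
	fmt.Println("total skipped versions:", count)
}
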
	I0501 04:17:00.281481    4352 logs.go:123] Gathering logs for etcd [34892fdb6898] ...
	I0501 04:17:00.281481    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34892fdb6898"
	I0501 04:17:00.311627    4352 command_runner.go:130] ! {"level":"warn","ts":"2024-05-01T04:15:38.997417Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0501 04:17:00.312604    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:38.998475Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.28.209.199:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.28.209.199:2380","--initial-cluster=multinode-289800=https://172.28.209.199:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.28.209.199:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.28.209.199:2380","--name=multinode-289800","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0501 04:17:00.312604    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:38.998558Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0501 04:17:00.312688    4352 command_runner.go:130] ! {"level":"warn","ts":"2024-05-01T04:15:38.998588Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0501 04:17:00.312733    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:38.998599Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.28.209.199:2380"]}
	I0501 04:17:00.312733    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:38.998626Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0501 04:17:00.312833    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.006405Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.28.209.199:2379"]}
	I0501 04:17:00.312939    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.007658Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-289800","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.28.209.199:2380"],"listen-peer-urls":["https://172.28.209.199:2380"],"advertise-client-urls":["https://172.28.209.199:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.28.209.199:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0501 04:17:00.312939    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.030589Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"21.951987ms"}
	I0501 04:17:00.312939    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.081537Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0501 04:17:00.312939    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.104039Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"d720844a1e03b483","local-member-id":"fe483b81e7b7d166","commit-index":2020}
	I0501 04:17:00.313075    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.104878Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 switched to configuration voters=()"}
	I0501 04:17:00.313075    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.105251Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 became follower at term 2"}
	I0501 04:17:00.313075    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.105519Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft fe483b81e7b7d166 [peers: [], term: 2, commit: 2020, applied: 0, lastindex: 2020, lastterm: 2]"}
	I0501 04:17:00.313146    4352 command_runner.go:130] ! {"level":"warn","ts":"2024-05-01T04:15:39.121672Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0501 04:17:00.313146    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.127575Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1352}
	I0501 04:17:00.313146    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.132217Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1744}
	I0501 04:17:00.313146    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.144206Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0501 04:17:00.313243    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.15993Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"fe483b81e7b7d166","timeout":"7s"}
	I0501 04:17:00.313243    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.160468Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"fe483b81e7b7d166"}
	I0501 04:17:00.313243    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.160545Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"fe483b81e7b7d166","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	I0501 04:17:00.313243    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.16402Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	I0501 04:17:00.313243    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.165851Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0501 04:17:00.313243    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.166004Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0501 04:17:00.313243    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.166021Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0501 04:17:00.313243    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.169808Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 switched to configuration voters=(18322960513081266534)"}
	I0501 04:17:00.313243    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.1699Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d720844a1e03b483","local-member-id":"fe483b81e7b7d166","added-peer-id":"fe483b81e7b7d166","added-peer-peer-urls":["https://172.28.209.152:2380"]}
	I0501 04:17:00.313243    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.172064Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d720844a1e03b483","local-member-id":"fe483b81e7b7d166","cluster-version":"3.5"}
	I0501 04:17:00.313485    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.172365Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0501 04:17:00.313485    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.184058Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0501 04:17:00.313612    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.184564Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"fe483b81e7b7d166","initial-advertise-peer-urls":["https://172.28.209.199:2380"],"listen-peer-urls":["https://172.28.209.199:2380"],"advertise-client-urls":["https://172.28.209.199:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.28.209.199:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0501 04:17:00.313612    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.184741Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0501 04:17:00.313612    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.185843Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.28.209.199:2380"}
	I0501 04:17:00.313612    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.185973Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.28.209.199:2380"}
	I0501 04:17:00.313612    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.708419Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 is starting a new election at term 2"}
	I0501 04:17:00.313612    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.70848Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 became pre-candidate at term 2"}
	I0501 04:17:00.313612    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.708514Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 received MsgPreVoteResp from fe483b81e7b7d166 at term 2"}
	I0501 04:17:00.313612    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.70853Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 became candidate at term 3"}
	I0501 04:17:00.313612    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.708552Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 received MsgVoteResp from fe483b81e7b7d166 at term 3"}
	I0501 04:17:00.313612    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.708562Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 became leader at term 3"}
	I0501 04:17:00.313612    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.708576Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fe483b81e7b7d166 elected leader fe483b81e7b7d166 at term 3"}
	I0501 04:17:00.313612    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.716912Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"fe483b81e7b7d166","local-member-attributes":"{Name:multinode-289800 ClientURLs:[https://172.28.209.199:2379]}","request-path":"/0/members/fe483b81e7b7d166/attributes","cluster-id":"d720844a1e03b483","publish-timeout":"7s"}
	I0501 04:17:00.313612    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.717064Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0501 04:17:00.313612    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.724343Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0501 04:17:00.313612    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.729592Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.28.209.199:2379"}
	I0501 04:17:00.313612    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.730744Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0501 04:17:00.313612    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.731057Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0501 04:17:00.313612    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.732147Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
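
The etcd log above is structured JSON, which makes the raft election easy to trace mechanically: the member restarts as a follower at term 2, pre-votes, then becomes candidate and leader at term 3. A small sketch that decodes lines of this shape and prints only the election events (input lines copied from the log; the entry struct keeps just the fields needed here):

// Sketch: decode JSON-structured etcd log lines and pull out the raft
// election sequence logged above.
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

type entry struct {
	TS     string `json:"ts"`
	Logger string `json:"logger"`
	Msg    string `json:"msg"`
}

func main() {
	lines := []string{
		`{"level":"info","ts":"2024-05-01T04:15:40.708419Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 is starting a new election at term 2"}`,
		`{"level":"info","ts":"2024-05-01T04:15:40.70853Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 became candidate at term 3"}`,
		`{"level":"info","ts":"2024-05-01T04:15:40.708562Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 became leader at term 3"}`,
	}
	for _, l := range lines {
		var e entry
		if err := json.Unmarshal([]byte(l), &e); err != nil {
			continue // skip malformed lines
		}
		// Keep only raft messages that mention an election term.
		if e.Logger == "raft" && strings.Contains(e.Msg, "term") {
			fmt.Printf("%s  %s\n", e.TS, e.Msg)
		}
	}
}
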
	I0501 04:17:00.321344    4352 logs.go:123] Gathering logs for coredns [b8a9b405d76b] ...
	I0501 04:17:00.321344    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8a9b405d76b"
	I0501 04:17:00.350423    4352 command_runner.go:130] > .:53
	I0501 04:17:00.350423    4352 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 93f4fe825695a41d5751760d66496ca9593f0bf8b9ec4239a6fbc33c30b6f070cfe5a066c5977052ab0ea80abbafa6083758e385108cb741ae48dc4b780ff0f9
	I0501 04:17:00.350423    4352 command_runner.go:130] > CoreDNS-1.11.1
	I0501 04:17:00.350423    4352 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0501 04:17:00.350423    4352 command_runner.go:130] > [INFO] 127.0.0.1:40469 - 32708 "HINFO IN 1085250392681766432.1461243850492468212. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.135567722s
	I0501 04:17:00.351773    4352 logs.go:123] Gathering logs for kube-proxy [502684407b0c] ...
	I0501 04:17:00.351773    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 502684407b0c"
	I0501 04:17:00.379727    4352 command_runner.go:130] ! I0501 03:52:31.254714       1 server_linux.go:69] "Using iptables proxy"
	I0501 04:17:00.380527    4352 command_runner.go:130] ! I0501 03:52:31.309383       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.28.209.152"]
	I0501 04:17:00.380527    4352 command_runner.go:130] ! I0501 03:52:31.368810       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0501 04:17:00.380527    4352 command_runner.go:130] ! I0501 03:52:31.368955       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0501 04:17:00.380527    4352 command_runner.go:130] ! I0501 03:52:31.368982       1 server_linux.go:165] "Using iptables Proxier"
	I0501 04:17:00.382338    4352 command_runner.go:130] ! I0501 03:52:31.375383       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0501 04:17:00.383519    4352 command_runner.go:130] ! I0501 03:52:31.376367       1 server.go:872] "Version info" version="v1.30.0"
	I0501 04:17:00.383519    4352 command_runner.go:130] ! I0501 03:52:31.376406       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:17:00.383519    4352 command_runner.go:130] ! I0501 03:52:31.379637       1 config.go:192] "Starting service config controller"
	I0501 04:17:00.383519    4352 command_runner.go:130] ! I0501 03:52:31.380342       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0501 04:17:00.383519    4352 command_runner.go:130] ! I0501 03:52:31.380587       1 config.go:101] "Starting endpoint slice config controller"
	I0501 04:17:00.383519    4352 command_runner.go:130] ! I0501 03:52:31.380650       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0501 04:17:00.383519    4352 command_runner.go:130] ! I0501 03:52:31.383140       1 config.go:319] "Starting node config controller"
	I0501 04:17:00.383519    4352 command_runner.go:130] ! I0501 03:52:31.383173       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0501 04:17:00.383519    4352 command_runner.go:130] ! I0501 03:52:31.480698       1 shared_informer.go:320] Caches are synced for service config
	I0501 04:17:00.383519    4352 command_runner.go:130] ! I0501 03:52:31.481316       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0501 04:17:00.383519    4352 command_runner.go:130] ! I0501 03:52:31.483428       1 shared_informer.go:320] Caches are synced for node config
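
kube-proxy above starts its service, endpoint-slice, and node config controllers and gates each on an initial informer sync ("Waiting for caches to sync ..." followed by "Caches are synced ..."). The sketch below reproduces the shape of that gate with plain channels; it is an illustration of the pattern, not client-go's actual cache-sync machinery.

// Sketch: block start-up until every "cache" signals synced, or stop fires.
package main

import (
	"fmt"
	"time"
)

// waitForCacheSync returns true once every synced channel has closed,
// mirroring the gate whose log lines appear above.
func waitForCacheSync(stop <-chan struct{}, synced ...<-chan struct{}) bool {
	for _, ch := range synced {
		select {
		case <-ch:
		case <-stop:
			return false
		}
	}
	return true
}

func main() {
	stop := make(chan struct{})
	svc := make(chan struct{})
	eps := make(chan struct{})

	go func() { time.Sleep(50 * time.Millisecond); close(svc) }() // service config synced
	go func() { time.Sleep(80 * time.Millisecond); close(eps) }() // endpoint slice config synced

	fmt.Println("Waiting for caches to sync")
	if waitForCacheSync(stop, svc, eps) {
		fmt.Println("Caches are synced")
	}
}
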
	I0501 04:17:00.384947    4352 logs.go:123] Gathering logs for kindnet [6d5f881ef398] ...
	I0501 04:17:00.384947    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d5f881ef398"
	I0501 04:17:00.415306    4352 command_runner.go:130] ! I0501 04:01:59.122485       1 main.go:227] handling current node
	I0501 04:17:00.415306    4352 command_runner.go:130] ! I0501 04:01:59.122501       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.418139    4352 command_runner.go:130] ! I0501 04:01:59.122510       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:01:59.122690       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:01:59.122722       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:02:09.153658       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:02:09.153775       1 main.go:227] handling current node
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:02:09.153793       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:02:09.153803       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:02:09.153946       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:02:09.153980       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:02:19.161031       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:02:19.161061       1 main.go:227] handling current node
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:02:19.161073       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:02:19.161079       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:02:19.161177       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:02:19.161185       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:02:29.181653       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:02:29.181721       1 main.go:227] handling current node
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:02:29.181735       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:02:29.181742       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:02:29.182277       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:02:29.182369       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:02:39.195902       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:02:39.196079       1 main.go:227] handling current node
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:02:39.196095       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:02:39.196105       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:02:39.196558       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:02:39.196649       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:02:49.209858       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:02:49.209973       1 main.go:227] handling current node
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:02:49.210027       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:02:49.210041       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.419327    4352 command_runner.go:130] ! I0501 04:02:49.210461       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.419327    4352 command_runner.go:130] ! I0501 04:02:49.210617       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.419399    4352 command_runner.go:130] ! I0501 04:02:59.219550       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.419399    4352 command_runner.go:130] ! I0501 04:02:59.219615       1 main.go:227] handling current node
	I0501 04:17:00.419399    4352 command_runner.go:130] ! I0501 04:02:59.219631       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.419399    4352 command_runner.go:130] ! I0501 04:02:59.219638       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.419399    4352 command_runner.go:130] ! I0501 04:02:59.220333       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.419399    4352 command_runner.go:130] ! I0501 04:02:59.220436       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.419474    4352 command_runner.go:130] ! I0501 04:03:09.231302       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.419474    4352 command_runner.go:130] ! I0501 04:03:09.232437       1 main.go:227] handling current node
	I0501 04:17:00.419474    4352 command_runner.go:130] ! I0501 04:03:09.232648       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.419474    4352 command_runner.go:130] ! I0501 04:03:09.232851       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.419566    4352 command_runner.go:130] ! I0501 04:03:09.233578       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.419621    4352 command_runner.go:130] ! I0501 04:03:09.233631       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.419621    4352 command_runner.go:130] ! I0501 04:03:19.245975       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.419621    4352 command_runner.go:130] ! I0501 04:03:19.246060       1 main.go:227] handling current node
	I0501 04:17:00.419673    4352 command_runner.go:130] ! I0501 04:03:19.246073       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.419673    4352 command_runner.go:130] ! I0501 04:03:19.246081       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.419673    4352 command_runner.go:130] ! I0501 04:03:19.246386       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.419673    4352 command_runner.go:130] ! I0501 04:03:19.246423       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.419718    4352 command_runner.go:130] ! I0501 04:03:29.258941       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.419718    4352 command_runner.go:130] ! I0501 04:03:29.259020       1 main.go:227] handling current node
	I0501 04:17:00.419762    4352 command_runner.go:130] ! I0501 04:03:29.259036       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.419762    4352 command_runner.go:130] ! I0501 04:03:29.259044       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.419762    4352 command_runner.go:130] ! I0501 04:03:29.259485       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.419762    4352 command_runner.go:130] ! I0501 04:03:29.259520       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.419826    4352 command_runner.go:130] ! I0501 04:03:39.269941       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.419826    4352 command_runner.go:130] ! I0501 04:03:39.270129       1 main.go:227] handling current node
	I0501 04:17:00.419826    4352 command_runner.go:130] ! I0501 04:03:39.270152       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.419826    4352 command_runner.go:130] ! I0501 04:03:39.270161       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.419826    4352 command_runner.go:130] ! I0501 04:03:39.270403       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.419907    4352 command_runner.go:130] ! I0501 04:03:39.270438       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.419907    4352 command_runner.go:130] ! I0501 04:03:49.282880       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.419907    4352 command_runner.go:130] ! I0501 04:03:49.283025       1 main.go:227] handling current node
	I0501 04:17:00.419987    4352 command_runner.go:130] ! I0501 04:03:49.283045       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.419987    4352 command_runner.go:130] ! I0501 04:03:49.283054       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.419987    4352 command_runner.go:130] ! I0501 04:03:49.283773       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.419987    4352 command_runner.go:130] ! I0501 04:03:49.283792       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.419987    4352 command_runner.go:130] ! I0501 04:03:59.297110       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.420052    4352 command_runner.go:130] ! I0501 04:03:59.297155       1 main.go:227] handling current node
	I0501 04:17:00.420052    4352 command_runner.go:130] ! I0501 04:03:59.297169       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.420052    4352 command_runner.go:130] ! I0501 04:03:59.297177       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.420052    4352 command_runner.go:130] ! I0501 04:03:59.297656       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.420052    4352 command_runner.go:130] ! I0501 04:03:59.297688       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.420115    4352 command_runner.go:130] ! I0501 04:04:09.310638       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.420115    4352 command_runner.go:130] ! I0501 04:04:09.311476       1 main.go:227] handling current node
	I0501 04:17:00.420115    4352 command_runner.go:130] ! I0501 04:04:09.311969       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.420115    4352 command_runner.go:130] ! I0501 04:04:09.312340       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.420115    4352 command_runner.go:130] ! I0501 04:04:09.313291       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.420178    4352 command_runner.go:130] ! I0501 04:04:09.313332       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.420178    4352 command_runner.go:130] ! I0501 04:04:19.324939       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.420178    4352 command_runner.go:130] ! I0501 04:04:19.325084       1 main.go:227] handling current node
	I0501 04:17:00.420247    4352 command_runner.go:130] ! I0501 04:04:19.325480       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.420247    4352 command_runner.go:130] ! I0501 04:04:19.325493       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.420247    4352 command_runner.go:130] ! I0501 04:04:19.325923       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.420247    4352 command_runner.go:130] ! I0501 04:04:19.326083       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.420299    4352 command_runner.go:130] ! I0501 04:04:29.332468       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.420299    4352 command_runner.go:130] ! I0501 04:04:29.332576       1 main.go:227] handling current node
	I0501 04:17:00.420299    4352 command_runner.go:130] ! I0501 04:04:29.332619       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.420371    4352 command_runner.go:130] ! I0501 04:04:29.332645       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.420371    4352 command_runner.go:130] ! I0501 04:04:29.332818       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.420447    4352 command_runner.go:130] ! I0501 04:04:29.332831       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.420447    4352 command_runner.go:130] ! I0501 04:04:39.342867       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:04:39.342901       1 main.go:227] handling current node
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:04:39.342914       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:04:39.342921       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:04:39.343433       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:04:39.343593       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:04:49.364771       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:04:49.364905       1 main.go:227] handling current node
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:04:49.364921       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:04:49.364930       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:04:49.365166       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:04:49.365205       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:04:59.379243       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:04:59.379352       1 main.go:227] handling current node
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:04:59.379369       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:04:59.379377       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:04:59.379531       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:04:59.379564       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:05:09.389743       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:05:09.390518       1 main.go:227] handling current node
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:05:09.390622       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:05:09.390636       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:05:09.390894       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:05:09.391049       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:05:19.400837       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:05:19.401285       1 main.go:227] handling current node
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:05:19.401439       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:05:19.401572       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:05:19.401956       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:05:19.402136       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:05:29.422040       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:05:29.422249       1 main.go:227] handling current node
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:05:29.422285       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:05:29.422311       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:05:29.422521       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:05:29.422723       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:05:39.429807       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:05:39.429856       1 main.go:227] handling current node
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:05:39.429874       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:05:39.429881       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:05:39.430903       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:05:39.431340       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.421059    4352 command_runner.go:130] ! I0501 04:05:49.445455       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.421059    4352 command_runner.go:130] ! I0501 04:05:49.445594       1 main.go:227] handling current node
	I0501 04:17:00.421059    4352 command_runner.go:130] ! I0501 04:05:49.445610       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.421059    4352 command_runner.go:130] ! I0501 04:05:49.445619       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.421122    4352 command_runner.go:130] ! I0501 04:05:49.445751       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.421122    4352 command_runner.go:130] ! I0501 04:05:49.445765       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.421122    4352 command_runner.go:130] ! I0501 04:05:59.461135       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.421122    4352 command_runner.go:130] ! I0501 04:05:59.461248       1 main.go:227] handling current node
	I0501 04:17:00.421122    4352 command_runner.go:130] ! I0501 04:05:59.461264       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.421122    4352 command_runner.go:130] ! I0501 04:05:59.461273       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.421122    4352 command_runner.go:130] ! I0501 04:05:59.461947       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.421122    4352 command_runner.go:130] ! I0501 04:05:59.462094       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.421122    4352 command_runner.go:130] ! I0501 04:06:09.469509       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.421122    4352 command_runner.go:130] ! I0501 04:06:09.469615       1 main.go:227] handling current node
	I0501 04:17:00.421237    4352 command_runner.go:130] ! I0501 04:06:09.469636       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.421237    4352 command_runner.go:130] ! I0501 04:06:09.469646       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.421237    4352 command_runner.go:130] ! I0501 04:06:09.470218       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.421237    4352 command_runner.go:130] ! I0501 04:06:09.470387       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.421237    4352 command_runner.go:130] ! I0501 04:06:19.486501       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.421302    4352 command_runner.go:130] ! I0501 04:06:19.486605       1 main.go:227] handling current node
	I0501 04:17:00.421302    4352 command_runner.go:130] ! I0501 04:06:19.486621       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.421302    4352 command_runner.go:130] ! I0501 04:06:19.486629       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.421302    4352 command_runner.go:130] ! I0501 04:06:19.486864       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.421302    4352 command_runner.go:130] ! I0501 04:06:19.486946       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.421302    4352 command_runner.go:130] ! I0501 04:06:29.503311       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.421302    4352 command_runner.go:130] ! I0501 04:06:29.503476       1 main.go:227] handling current node
	I0501 04:17:00.421392    4352 command_runner.go:130] ! I0501 04:06:29.503492       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.421392    4352 command_runner.go:130] ! I0501 04:06:29.503503       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.421441    4352 command_runner.go:130] ! I0501 04:06:29.503633       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.421441    4352 command_runner.go:130] ! I0501 04:06:29.503843       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.421441    4352 command_runner.go:130] ! I0501 04:06:39.528749       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.421441    4352 command_runner.go:130] ! I0501 04:06:39.528837       1 main.go:227] handling current node
	I0501 04:17:00.421441    4352 command_runner.go:130] ! I0501 04:06:39.528853       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.421441    4352 command_runner.go:130] ! I0501 04:06:39.528861       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.421548    4352 command_runner.go:130] ! I0501 04:06:39.529235       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.421548    4352 command_runner.go:130] ! I0501 04:06:39.529373       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.421548    4352 command_runner.go:130] ! I0501 04:06:49.535984       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.421590    4352 command_runner.go:130] ! I0501 04:06:49.536067       1 main.go:227] handling current node
	I0501 04:17:00.421590    4352 command_runner.go:130] ! I0501 04:06:49.536082       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.421590    4352 command_runner.go:130] ! I0501 04:06:49.536092       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.421642    4352 command_runner.go:130] ! I0501 04:06:49.536689       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.421642    4352 command_runner.go:130] ! I0501 04:06:49.536802       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.421642    4352 command_runner.go:130] ! I0501 04:06:59.550480       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.421687    4352 command_runner.go:130] ! I0501 04:06:59.551072       1 main.go:227] handling current node
	I0501 04:17:00.421687    4352 command_runner.go:130] ! I0501 04:06:59.551257       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.421687    4352 command_runner.go:130] ! I0501 04:06:59.551358       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.421731    4352 command_runner.go:130] ! I0501 04:06:59.551696       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.421731    4352 command_runner.go:130] ! I0501 04:06:59.551781       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.421771    4352 command_runner.go:130] ! I0501 04:07:09.569460       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.421771    4352 command_runner.go:130] ! I0501 04:07:09.569627       1 main.go:227] handling current node
	I0501 04:17:00.421832    4352 command_runner.go:130] ! I0501 04:07:09.569642       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.421871    4352 command_runner.go:130] ! I0501 04:07:09.569651       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.421912    4352 command_runner.go:130] ! I0501 04:07:09.570296       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.421912    4352 command_runner.go:130] ! I0501 04:07:09.570434       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.422182    4352 command_runner.go:130] ! I0501 04:07:19.577507       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.422226    4352 command_runner.go:130] ! I0501 04:07:19.577599       1 main.go:227] handling current node
	I0501 04:17:00.422226    4352 command_runner.go:130] ! I0501 04:07:19.577615       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.422226    4352 command_runner.go:130] ! I0501 04:07:19.577730       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.422269    4352 command_runner.go:130] ! I0501 04:07:19.578102       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.422269    4352 command_runner.go:130] ! I0501 04:07:19.578208       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.422269    4352 command_runner.go:130] ! I0501 04:07:29.592703       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.422309    4352 command_runner.go:130] ! I0501 04:07:29.592845       1 main.go:227] handling current node
	I0501 04:17:00.422309    4352 command_runner.go:130] ! I0501 04:07:29.592861       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.427137    4352 command_runner.go:130] ! I0501 04:07:29.592869       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.427137    4352 command_runner.go:130] ! I0501 04:07:29.593139       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.427137    4352 command_runner.go:130] ! I0501 04:07:29.593174       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.427712    4352 command_runner.go:130] ! I0501 04:07:39.602034       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.427771    4352 command_runner.go:130] ! I0501 04:07:39.602064       1 main.go:227] handling current node
	I0501 04:17:00.427771    4352 command_runner.go:130] ! I0501 04:07:39.602077       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.427814    4352 command_runner.go:130] ! I0501 04:07:39.602084       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.427814    4352 command_runner.go:130] ! I0501 04:07:39.602283       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.427814    4352 command_runner.go:130] ! I0501 04:07:39.602300       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.427814    4352 command_runner.go:130] ! I0501 04:07:49.837563       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.427814    4352 command_runner.go:130] ! I0501 04:07:49.837638       1 main.go:227] handling current node
	I0501 04:17:00.427814    4352 command_runner.go:130] ! I0501 04:07:49.837652       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.428592    4352 command_runner.go:130] ! I0501 04:07:49.837660       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.428592    4352 command_runner.go:130] ! I0501 04:07:49.837875       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.428592    4352 command_runner.go:130] ! I0501 04:07:49.837955       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.428592    4352 command_runner.go:130] ! I0501 04:07:59.851818       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.428592    4352 command_runner.go:130] ! I0501 04:07:59.852109       1 main.go:227] handling current node
	I0501 04:17:00.428592    4352 command_runner.go:130] ! I0501 04:07:59.852127       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:07:59.852753       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:07:59.853129       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:07:59.853164       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:09.860338       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:09.860453       1 main.go:227] handling current node
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:09.860472       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:09.860482       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:09.860626       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:09.861316       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:19.877403       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:19.877515       1 main.go:227] handling current node
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:19.877530       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:19.877538       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:19.877838       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:19.877874       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:29.892899       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:29.892926       1 main.go:227] handling current node
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:29.892937       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:29.892944       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:29.893106       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:29.893180       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:39.901877       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:39.901929       1 main.go:227] handling current node
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:39.901943       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:39.901951       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:39.902578       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:39.902678       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:49.918941       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:49.919115       1 main.go:227] handling current node
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:49.919130       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:49.919139       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:49.919950       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:49.919968       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:59.933101       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:59.933154       1 main.go:227] handling current node
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:59.933648       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:59.933667       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:59.934094       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.429702    4352 command_runner.go:130] ! I0501 04:08:59.934127       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.429702    4352 command_runner.go:130] ! I0501 04:09:09.948569       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.429702    4352 command_runner.go:130] ! I0501 04:09:09.948615       1 main.go:227] handling current node
	I0501 04:17:00.429702    4352 command_runner.go:130] ! I0501 04:09:09.948629       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:09:09.948637       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:09:09.949057       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:09:09.949076       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:09:19.958099       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:09:19.958261       1 main.go:227] handling current node
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:09:19.958282       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:09:19.958294       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:09:19.958880       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:09:19.959055       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:09:29.975626       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:09:29.975765       1 main.go:227] handling current node
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:09:29.975790       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:09:29.975803       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:09:29.976360       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:09:29.976488       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:09:39.985296       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:09:39.985455       1 main.go:227] handling current node
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:09:39.985488       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:09:39.985497       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:09:39.986552       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:09:39.986590       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:09:49.995944       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:09:49.996021       1 main.go:227] handling current node
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:09:49.996036       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:09:49.996044       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:09:49.996649       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:09:49.996720       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:10:00.003190       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:10:00.003239       1 main.go:227] handling current node
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:10:00.003253       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:10:00.003261       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:10:00.003479       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:10:00.003516       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:10:10.023328       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:10:10.023430       1 main.go:227] handling current node
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:10:10.023445       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:10:10.023460       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:10:10.023613       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:10:10.023647       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:10:20.030526       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:10:20.030616       1 main.go:227] handling current node
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:10:20.030632       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:10:20.030641       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.430273    4352 command_runner.go:130] ! I0501 04:10:20.030856       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.430273    4352 command_runner.go:130] ! I0501 04:10:20.030980       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.430273    4352 command_runner.go:130] ! I0501 04:10:30.038164       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.430273    4352 command_runner.go:130] ! I0501 04:10:30.038263       1 main.go:227] handling current node
	I0501 04:17:00.430273    4352 command_runner.go:130] ! I0501 04:10:30.038278       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.430273    4352 command_runner.go:130] ! I0501 04:10:30.038287       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.430273    4352 command_runner.go:130] ! I0501 04:10:30.038931       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.430386    4352 command_runner.go:130] ! I0501 04:10:30.039072       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.430386    4352 command_runner.go:130] ! I0501 04:10:40.053866       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.430428    4352 command_runner.go:130] ! I0501 04:10:40.053915       1 main.go:227] handling current node
	I0501 04:17:00.430428    4352 command_runner.go:130] ! I0501 04:10:40.053929       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.430446    4352 command_runner.go:130] ! I0501 04:10:40.053936       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.430481    4352 command_runner.go:130] ! I0501 04:10:40.054259       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.430481    4352 command_runner.go:130] ! I0501 04:10:40.054295       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.430481    4352 command_runner.go:130] ! I0501 04:10:50.066490       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.430514    4352 command_runner.go:130] ! I0501 04:10:50.066542       1 main.go:227] handling current node
	I0501 04:17:00.430514    4352 command_runner.go:130] ! I0501 04:10:50.066560       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.430565    4352 command_runner.go:130] ! I0501 04:10:50.066567       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.430565    4352 command_runner.go:130] ! I0501 04:10:50.067066       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.430598    4352 command_runner.go:130] ! I0501 04:10:50.067210       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.430598    4352 command_runner.go:130] ! I0501 04:11:00.075901       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.430598    4352 command_runner.go:130] ! I0501 04:11:00.076052       1 main.go:227] handling current node
	I0501 04:17:00.430649    4352 command_runner.go:130] ! I0501 04:11:00.076069       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.430681    4352 command_runner.go:130] ! I0501 04:11:00.076078       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.430681    4352 command_runner.go:130] ! I0501 04:11:10.087907       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.430681    4352 command_runner.go:130] ! I0501 04:11:10.088124       1 main.go:227] handling current node
	I0501 04:17:00.430681    4352 command_runner.go:130] ! I0501 04:11:10.088140       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.430732    4352 command_runner.go:130] ! I0501 04:11:10.088148       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.430732    4352 command_runner.go:130] ! I0501 04:11:10.088875       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:17:00.430766    4352 command_runner.go:130] ! I0501 04:11:10.088954       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:17:00.430766    4352 command_runner.go:130] ! I0501 04:11:10.089178       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.28.223.145 Flags: [] Table: 0} 
	I0501 04:17:00.430766    4352 command_runner.go:130] ! I0501 04:11:20.103399       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.430817    4352 command_runner.go:130] ! I0501 04:11:20.103511       1 main.go:227] handling current node
	I0501 04:17:00.430817    4352 command_runner.go:130] ! I0501 04:11:20.103528       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.430849    4352 command_runner.go:130] ! I0501 04:11:20.103538       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.430849    4352 command_runner.go:130] ! I0501 04:11:20.103879       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:17:00.430896    4352 command_runner.go:130] ! I0501 04:11:20.103916       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:17:00.430896    4352 command_runner.go:130] ! I0501 04:11:30.114473       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.430896    4352 command_runner.go:130] ! I0501 04:11:30.115083       1 main.go:227] handling current node
	I0501 04:17:00.430926    4352 command_runner.go:130] ! I0501 04:11:30.115256       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.430974    4352 command_runner.go:130] ! I0501 04:11:30.115463       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.430974    4352 command_runner.go:130] ! I0501 04:11:30.116474       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:17:00.430974    4352 command_runner.go:130] ! I0501 04:11:30.116611       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:17:00.431007    4352 command_runner.go:130] ! I0501 04:11:40.124324       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.431007    4352 command_runner.go:130] ! I0501 04:11:40.124371       1 main.go:227] handling current node
	I0501 04:17:00.431057    4352 command_runner.go:130] ! I0501 04:11:40.124384       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.431057    4352 command_runner.go:130] ! I0501 04:11:40.124392       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.431090    4352 command_runner.go:130] ! I0501 04:11:40.124558       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:17:00.431090    4352 command_runner.go:130] ! I0501 04:11:40.124570       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:17:00.431090    4352 command_runner.go:130] ! I0501 04:11:50.138059       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.431137    4352 command_runner.go:130] ! I0501 04:11:50.138102       1 main.go:227] handling current node
	I0501 04:17:00.431137    4352 command_runner.go:130] ! I0501 04:11:50.138116       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.431137    4352 command_runner.go:130] ! I0501 04:11:50.138123       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:11:50.138826       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:11:50.138936       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:00.155704       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:00.155799       1 main.go:227] handling current node
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:00.155823       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:00.155832       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:00.156502       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:00.156549       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:10.164706       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:10.164754       1 main.go:227] handling current node
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:10.164767       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:10.164774       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:10.164887       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:10.165094       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:20.178957       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:20.179142       1 main.go:227] handling current node
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:20.179159       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:20.179178       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:20.179694       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:20.179871       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:30.195829       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:30.196251       1 main.go:227] handling current node
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:30.196390       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:30.196494       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:30.197097       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:30.197115       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:40.209828       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:40.210095       1 main.go:227] handling current node
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:40.210203       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:40.210235       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:40.210464       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:40.210571       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:50.223457       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:50.224132       1 main.go:227] handling current node
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:50.224156       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:50.224167       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:50.224602       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:50.224704       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:13:00.241709       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:13:00.241841       1 main.go:227] handling current node
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:13:00.242114       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:13:00.242393       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.431757    4352 command_runner.go:130] ! I0501 04:13:00.242840       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:17:00.431757    4352 command_runner.go:130] ! I0501 04:13:00.242886       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
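
The kindnet lines above are the CNI daemon's steady-state reconcile loop: roughly every ten seconds it walks the node list, prints "handling current node" for the node it runs on (no route needed locally), and ensures a host route to every peer node's pod CIDR via that peer's node IP. The one real transition in the run is at 04:11:10, where multinode-289800-m03 reappears with a new IP (172.28.223.145) and a new CIDR (10.244.3.0/24), so kindnet installs a fresh route (routes.go:62 "Adding route ... Gw: 172.28.223.145"). Below is a minimal Go sketch of that reconcile shape; the node type and the primary node's own CIDR are hypothetical stand-ins, and the real daemon does a netlink route replace where this sketch only records intent:

package main

import "fmt"

// node is a hypothetical, trimmed-down view of what the real daemon
// reads from the Kubernetes API: node name, node IP, and pod CIDR.
type node struct {
	name string
	ip   string
	cidr string
}

// syncRoutes mirrors one ~10s tick from the log: skip the node we run
// on, and make sure every peer's pod CIDR is routed via that peer's IP.
func syncRoutes(self string, nodes []node, routes map[string]string) {
	for _, n := range nodes {
		fmt.Printf("Handling node with IPs: map[%s:{}]\n", n.ip)
		if n.name == self {
			fmt.Println("handling current node")
			continue
		}
		fmt.Printf("Node %s has CIDR [%s]\n", n.name, n.cidr)
		if gw, ok := routes[n.cidr]; !ok || gw != n.ip {
			// The real daemon issues a netlink route add/replace here;
			// this sketch only records the desired gateway.
			fmt.Printf("Adding route {Dst: %s Gw: %s}\n", n.cidr, n.ip)
			routes[n.cidr] = n.ip
		}
	}
}

func main() {
	routes := map[string]string{}
	syncRoutes("multinode-289800", []node{
		{"multinode-289800", "172.28.209.152", "10.244.0.0/24"}, // own CIDR assumed
		{"multinode-289800-m02", "172.28.219.162", "10.244.1.0/24"},
		{"multinode-289800-m03", "172.28.223.145", "10.244.3.0/24"},
	}, routes)
}

Run twice with an unchanged node list, the second tick is a no-op, which is why the log repeats the same six lines for minutes at a time.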
	I0501 04:17:02.963180    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods
	I0501 04:17:02.963180    4352 round_trippers.go:469] Request Headers:
	I0501 04:17:02.963180    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:17:02.963180    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:17:02.969021    4352 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 04:17:02.969021    4352 round_trippers.go:577] Response Headers:
	I0501 04:17:02.969021    4352 round_trippers.go:580]     Audit-Id: c0612144-a145-4879-a876-258e0bbd60ed
	I0501 04:17:02.969021    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:17:02.969021    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:17:02.969021    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:17:02.969021    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:17:02.969021    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:17:02 GMT
	I0501 04:17:02.971183    4352 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1995"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1973","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 94403 chars]
	I0501 04:17:02.975122    4352 system_pods.go:59] 13 kube-system pods found
	I0501 04:17:02.975122    4352 system_pods.go:61] "coredns-7db6d8ff4d-8w9hq" [e3a349e9-97d8-4bba-8eac-deff1948600a] Running
	I0501 04:17:02.975122    4352 system_pods.go:61] "coredns-7db6d8ff4d-x9zrw" [0b91b14d-bed3-4889-b193-db53daccd395] Running
	I0501 04:17:02.975122    4352 system_pods.go:61] "etcd-multinode-289800" [aaf534b6-9f4c-445d-afb9-bd225e1a77fd] Running
	I0501 04:17:02.975122    4352 system_pods.go:61] "kindnet-4m5vg" [4d06e665-b4c1-40b9-bbb8-c35bfe35385e] Running
	I0501 04:17:02.975122    4352 system_pods.go:61] "kindnet-gzz7p" [576f33f3-f244-48f0-ae69-30c8f38ed871] Running
	I0501 04:17:02.975122    4352 system_pods.go:61] "kindnet-vcxkr" [72ef61d4-4437-40da-86e7-4d7eb386b6de] Running
	I0501 04:17:02.975122    4352 system_pods.go:61] "kube-apiserver-multinode-289800" [0ee77673-e4b3-4fba-a855-ef6876337257] Running
	I0501 04:17:02.975122    4352 system_pods.go:61] "kube-controller-manager-multinode-289800" [fd3e5c6f-55cb-47c8-b0bc-c9b0dbe3b318] Running
	I0501 04:17:02.975122    4352 system_pods.go:61] "kube-proxy-bp9zx" [aba82e50-b8f8-40b4-b08a-6d045314d6b6] Running
	I0501 04:17:02.975122    4352 system_pods.go:61] "kube-proxy-g8mbm" [ef0e1817-6682-4b8f-affa-c10021247006] Running
	I0501 04:17:02.975122    4352 system_pods.go:61] "kube-proxy-rlzp8" [b37d8d5d-a7cb-4848-a8a2-11d9761e08d6] Running
	I0501 04:17:02.975122    4352 system_pods.go:61] "kube-scheduler-multinode-289800" [c7518f03-993b-432f-b742-8805dd2167a7] Running
	I0501 04:17:02.975122    4352 system_pods.go:61] "storage-provisioner" [b8d2a827-d9a6-419a-a076-c7695a16a2b5] Running
	I0501 04:17:02.975122    4352 system_pods.go:74] duration metric: took 3.9379401s to wait for pod list to return data ...
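
The round_trippers lines are client-go's verbose HTTP tracing: a single GET to /api/v1/namespaces/kube-system/pods, the response headers, and a PodList body that minikube then reduces to the "13 kube-system pods found" summary with per-pod name, UID, and phase. A hedged sketch of the same check with client-go follows; the kubeconfig path is a placeholder assumption:

package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: a kubeconfig for this cluster at a placeholder path.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		// The log prints name, UID, and phase for each pod.
		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
		if p.Status.Phase != v1.PodRunning {
			fmt.Printf("pod %s is not Running yet\n", p.Name)
		}
	}
}

The second, byte-identical PodList fetch later in this excerpt (system_pods.go:86) is the same call issued again by the "waiting for k8s-apps to be running" step.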
	I0501 04:17:02.975791    4352 default_sa.go:34] waiting for default service account to be created ...
	I0501 04:17:02.975791    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/default/serviceaccounts
	I0501 04:17:02.975791    4352 round_trippers.go:469] Request Headers:
	I0501 04:17:02.975791    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:17:02.975791    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:17:02.979648    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:17:02.979648    4352 round_trippers.go:577] Response Headers:
	I0501 04:17:02.979648    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:17:02 GMT
	I0501 04:17:02.979648    4352 round_trippers.go:580]     Audit-Id: 6d591c6d-dd15-4103-bf92-e58d05b6d78b
	I0501 04:17:02.979648    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:17:02.979648    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:17:02.979648    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:17:02.979648    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:17:02.979648    4352 round_trippers.go:580]     Content-Length: 262
	I0501 04:17:02.979648    4352 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1995"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"b7dbf8d0-35c5-4373-a233-f0386cee7e97","resourceVersion":"307","creationTimestamp":"2024-05-01T03:52:28Z"}}]}
	I0501 04:17:02.980244    4352 default_sa.go:45] found service account: "default"
	I0501 04:17:02.980244    4352 default_sa.go:55] duration metric: took 4.4528ms for default service account to be created ...
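
default_sa.go polls /api/v1/namespaces/default/serviceaccounts until the "default" ServiceAccount exists; here it was already present, so the wait cost only 4.4528ms. A minimal, self-contained poll sketch under the same kubeconfig assumption as above (the log uses a List where this sketch uses an equivalent Get, and the retry interval and timeout are assumptions, not minikube's exact values):

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumption
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(2 * time.Minute) // timeout is an assumption
	for time.Now().Before(deadline) {
		// The controller-manager creates "default" shortly after the
		// namespace exists; poll until the Get stops returning an error.
		_, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
		if err == nil {
			fmt.Println(`found service account: "default"`)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	panic("timed out waiting for default service account")
}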
	I0501 04:17:02.980244    4352 system_pods.go:116] waiting for k8s-apps to be running ...
	I0501 04:17:02.980244    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods
	I0501 04:17:02.980244    4352 round_trippers.go:469] Request Headers:
	I0501 04:17:02.980803    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:17:02.980803    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:17:02.986165    4352 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 04:17:02.986165    4352 round_trippers.go:577] Response Headers:
	I0501 04:17:02.986680    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:17:02 GMT
	I0501 04:17:02.986680    4352 round_trippers.go:580]     Audit-Id: 30e3890d-b5ac-488d-b5ea-eb7f08c28637
	I0501 04:17:02.986680    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:17:02.986680    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:17:02.986680    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:17:02.986680    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:17:02.988321    4352 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1995"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1973","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 94403 chars]
	I0501 04:17:02.992887    4352 system_pods.go:86] 13 kube-system pods found
	I0501 04:17:02.992887    4352 system_pods.go:89] "coredns-7db6d8ff4d-8w9hq" [e3a349e9-97d8-4bba-8eac-deff1948600a] Running
	I0501 04:17:02.992887    4352 system_pods.go:89] "coredns-7db6d8ff4d-x9zrw" [0b91b14d-bed3-4889-b193-db53daccd395] Running
	I0501 04:17:02.992887    4352 system_pods.go:89] "etcd-multinode-289800" [aaf534b6-9f4c-445d-afb9-bd225e1a77fd] Running
	I0501 04:17:02.992887    4352 system_pods.go:89] "kindnet-4m5vg" [4d06e665-b4c1-40b9-bbb8-c35bfe35385e] Running
	I0501 04:17:02.992887    4352 system_pods.go:89] "kindnet-gzz7p" [576f33f3-f244-48f0-ae69-30c8f38ed871] Running
	I0501 04:17:02.992887    4352 system_pods.go:89] "kindnet-vcxkr" [72ef61d4-4437-40da-86e7-4d7eb386b6de] Running
	I0501 04:17:02.992887    4352 system_pods.go:89] "kube-apiserver-multinode-289800" [0ee77673-e4b3-4fba-a855-ef6876337257] Running
	I0501 04:17:02.992887    4352 system_pods.go:89] "kube-controller-manager-multinode-289800" [fd3e5c6f-55cb-47c8-b0bc-c9b0dbe3b318] Running
	I0501 04:17:02.992887    4352 system_pods.go:89] "kube-proxy-bp9zx" [aba82e50-b8f8-40b4-b08a-6d045314d6b6] Running
	I0501 04:17:02.992887    4352 system_pods.go:89] "kube-proxy-g8mbm" [ef0e1817-6682-4b8f-affa-c10021247006] Running
	I0501 04:17:02.992887    4352 system_pods.go:89] "kube-proxy-rlzp8" [b37d8d5d-a7cb-4848-a8a2-11d9761e08d6] Running
	I0501 04:17:02.992887    4352 system_pods.go:89] "kube-scheduler-multinode-289800" [c7518f03-993b-432f-b742-8805dd2167a7] Running
	I0501 04:17:02.992887    4352 system_pods.go:89] "storage-provisioner" [b8d2a827-d9a6-419a-a076-c7695a16a2b5] Running
	I0501 04:17:02.992887    4352 system_pods.go:126] duration metric: took 12.6433ms to wait for k8s-apps to be running ...
	I0501 04:17:02.992887    4352 system_svc.go:44] waiting for kubelet service to be running ....
	I0501 04:17:03.009569    4352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 04:17:03.035497    4352 system_svc.go:56] duration metric: took 42.6094ms WaitForService to wait for kubelet
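
system_svc.go verifies the kubelet by running `sudo systemctl is-active --quiet service kubelet` through minikube's SSH runner; with --quiet, systemctl prints nothing and the exit code alone answers the question (0 means active). A local stand-in for the same check, using exec.Command in place of the SSH runner (the log's literal command also includes the word "service", which this sketch omits):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet <unit>` exits 0 when the unit is
	// active and non-zero otherwise; --quiet suppresses stdout, so the
	// exit status carried in err is the whole result.
	cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}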
	I0501 04:17:03.035563    4352 kubeadm.go:576] duration metric: took 1m15.1371889s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 04:17:03.035563    4352 node_conditions.go:102] verifying NodePressure condition ...
	I0501 04:17:03.035739    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes
	I0501 04:17:03.035739    4352 round_trippers.go:469] Request Headers:
	I0501 04:17:03.035739    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:17:03.035739    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:17:03.043312    4352 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 04:17:03.043312    4352 round_trippers.go:577] Response Headers:
	I0501 04:17:03.043512    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:17:03 GMT
	I0501 04:17:03.043512    4352 round_trippers.go:580]     Audit-Id: 3cd0513f-9a98-436f-b810-e8270a9db104
	I0501 04:17:03.043512    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:17:03.043512    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:17:03.043583    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:17:03.043583    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:17:03.044517    4352 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1995"},"items":[{"metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16260 chars]
	I0501 04:17:03.045118    4352 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 04:17:03.045118    4352 node_conditions.go:123] node cpu capacity is 2
	I0501 04:17:03.045118    4352 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 04:17:03.045118    4352 node_conditions.go:123] node cpu capacity is 2
	I0501 04:17:03.045118    4352 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 04:17:03.045118    4352 node_conditions.go:123] node cpu capacity is 2
	I0501 04:17:03.045118    4352 node_conditions.go:105] duration metric: took 9.5544ms to run NodePressure ...
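
node_conditions.go lists /api/v1/nodes once and, for each of the three nodes, records ephemeral-storage capacity (17734596Ki) and CPU capacity (2) as part of the NodePressure verification. Reading the same fields with client-go looks roughly like the sketch below, under the same kubeconfig assumption as the earlier examples:

package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumption
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity is a map of resource name -> quantity; copy the
		// quantities to locals so the pointer-receiver String() works.
		storage := n.Status.Capacity[v1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[v1.ResourceCPU]
		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n",
			n.Name, storage.String(), cpu.String())
	}
}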
	I0501 04:17:03.045118    4352 start.go:240] waiting for startup goroutines ...
	I0501 04:17:03.045118    4352 start.go:245] waiting for cluster config update ...
	I0501 04:17:03.045118    4352 start.go:254] writing updated cluster config ...
	I0501 04:17:03.048946    4352 out.go:177] 
	I0501 04:17:03.064002    4352 config.go:182] Loaded profile config "multinode-289800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 04:17:03.064992    4352 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\config.json ...
	I0501 04:17:03.069994    4352 out.go:177] * Starting "multinode-289800-m02" worker node in "multinode-289800" cluster
	I0501 04:17:03.072962    4352 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0501 04:17:03.072962    4352 cache.go:56] Caching tarball of preloaded images
	I0501 04:17:03.073959    4352 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0501 04:17:03.073959    4352 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0501 04:17:03.073959    4352 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\config.json ...
	I0501 04:17:03.075948    4352 start.go:360] acquireMachinesLock for multinode-289800-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 04:17:03.075948    4352 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-289800-m02"
	I0501 04:17:03.076949    4352 start.go:96] Skipping create...Using existing machine configuration
	I0501 04:17:03.076949    4352 fix.go:54] fixHost starting: m02
	I0501 04:17:03.076949    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 04:17:05.266538    4352 main.go:141] libmachine: [stdout =====>] : Off
	
	I0501 04:17:05.266538    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:17:05.266661    4352 fix.go:112] recreateIfNeeded on multinode-289800-m02: state=Stopped err=<nil>
	W0501 04:17:05.266661    4352 fix.go:138] unexpected machine state, will restart: <nil>
	I0501 04:17:05.272891    4352 out.go:177] * Restarting existing hyperv VM for "multinode-289800-m02" ...
	I0501 04:17:05.274991    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-289800-m02
	I0501 04:17:08.356496    4352 main.go:141] libmachine: [stdout =====>] : 
	I0501 04:17:08.356541    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:17:08.356541    4352 main.go:141] libmachine: Waiting for host to start...
	I0501 04:17:08.356772    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 04:17:10.621715    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:17:10.622481    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:17:10.622481    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 04:17:13.137660    4352 main.go:141] libmachine: [stdout =====>] : 
	I0501 04:17:13.137660    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:17:14.150865    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 04:17:16.373399    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:17:16.373521    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:17:16.373521    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 04:17:18.956162    4352 main.go:141] libmachine: [stdout =====>] : 
	I0501 04:17:18.956208    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:17:19.968527    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 04:17:22.203157    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:17:22.203443    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:17:22.203443    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 04:17:24.783022    4352 main.go:141] libmachine: [stdout =====>] : 
	I0501 04:17:24.783370    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:17:25.784603    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 04:17:27.977192    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:17:27.977192    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:17:27.977192    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 04:17:30.528434    4352 main.go:141] libmachine: [stdout =====>] : 
	I0501 04:17:30.528434    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:17:31.528947    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 04:17:33.692945    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:17:33.692945    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:17:33.693188    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 04:17:36.307423    4352 main.go:141] libmachine: [stdout =====>] : 172.28.222.62
	
	I0501 04:17:36.307423    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:17:36.310413    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 04:17:38.423733    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:17:38.424218    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:17:38.424218    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 04:17:41.027609    4352 main.go:141] libmachine: [stdout =====>] : 172.28.222.62
	
	I0501 04:17:41.027885    4352 main.go:141] libmachine: [stderr =====>] : 
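
The block above is the hyperv driver's wait loop: after Start-VM it alternately polls ( Hyper-V\Get-VM ... ).state and the first network adapter's ipaddresses[0] until a non-empty address comes back (here 172.28.222.62, about 30 seconds after the VM was started). A minimal Go sketch of that pattern, using only the standard library; psOutput and waitForIP are illustrative names, not minikube's actual API:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// psOutput runs a PowerShell expression non-interactively and returns
	// its trimmed stdout, mirroring the [executing ==>] lines in the log.
	func psOutput(expr string) (string, error) {
		out, err := exec.Command("powershell.exe",
			"-NoProfile", "-NonInteractive", expr).Output()
		return strings.TrimSpace(string(out)), err
	}

	// waitForIP polls until the VM reports Running and its first adapter
	// has an address, as the driver does between 04:17:08 and 04:17:36.
	func waitForIP(vm string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			state, err := psOutput(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vm))
			if err != nil {
				return "", err
			}
			if state == "Running" {
				ip, err := psOutput(fmt.Sprintf(
					"(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
				if err != nil {
					return "", err
				}
				if ip != "" {
					return ip, nil
				}
			}
			time.Sleep(time.Second) // the log shows roughly 1s between retries
		}
		return "", fmt.Errorf("timed out waiting for %s", vm)
	}

	func main() {
		ip, err := waitForIP("multinode-289800-m02", 5*time.Minute)
		if err != nil {
			panic(err)
		}
		fmt.Println(ip)
	}
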
	I0501 04:17:41.027885    4352 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\config.json ...
	I0501 04:17:41.030463    4352 machine.go:94] provisionDockerMachine start ...
	I0501 04:17:41.030463    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 04:17:43.178682    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:17:43.178682    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:17:43.179526    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 04:17:45.755095    4352 main.go:141] libmachine: [stdout =====>] : 172.28.222.62
	
	I0501 04:17:45.755095    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:17:45.766496    4352 main.go:141] libmachine: Using SSH client type: native
	I0501 04:17:45.767074    4352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.222.62 22 <nil> <nil>}
	I0501 04:17:45.767074    4352 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 04:17:45.889237    4352 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0501 04:17:45.889237    4352 buildroot.go:166] provisioning hostname "multinode-289800-m02"
	I0501 04:17:45.889237    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 04:17:47.993353    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:17:47.994359    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:17:47.994359    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 04:17:50.632507    4352 main.go:141] libmachine: [stdout =====>] : 172.28.222.62
	
	I0501 04:17:50.632507    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:17:50.638929    4352 main.go:141] libmachine: Using SSH client type: native
	I0501 04:17:50.638929    4352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.222.62 22 <nil> <nil>}
	I0501 04:17:50.639461    4352 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-289800-m02 && echo "multinode-289800-m02" | sudo tee /etc/hostname
	I0501 04:17:50.804381    4352 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-289800-m02
	
	I0501 04:17:50.804455    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 04:17:52.999660    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:17:52.999660    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:17:52.999660    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 04:17:55.607366    4352 main.go:141] libmachine: [stdout =====>] : 172.28.222.62
	
	I0501 04:17:55.607366    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:17:55.614179    4352 main.go:141] libmachine: Using SSH client type: native
	I0501 04:17:55.614851    4352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.222.62 22 <nil> <nil>}
	I0501 04:17:55.614851    4352 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-289800-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-289800-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-289800-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 04:17:55.774126    4352 main.go:141] libmachine: SSH cmd err, output: <nil>: 
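
The SSH heredoc above is an idempotent /etc/hosts fixup: if no entry already ends in the new hostname, it either rewrites an existing 127.0.1.1 line in place or appends one, so rerunning it is a no-op. On this freshly restarted guest its net effect is a single entry:

	127.0.1.1 multinode-289800-m02
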
	I0501 04:17:55.774217    4352 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0501 04:17:55.774289    4352 buildroot.go:174] setting up certificates
	I0501 04:17:55.774289    4352 provision.go:84] configureAuth start
	I0501 04:17:55.774289    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 04:17:57.918796    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:17:57.919487    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:17:57.919487    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 04:18:00.502473    4352 main.go:141] libmachine: [stdout =====>] : 172.28.222.62
	
	I0501 04:18:00.502829    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:18:00.502892    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 04:18:02.590366    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:18:02.591070    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:18:02.591070    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 04:18:05.148233    4352 main.go:141] libmachine: [stdout =====>] : 172.28.222.62
	
	I0501 04:18:05.148889    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:18:05.148889    4352 provision.go:143] copyHostCerts
	I0501 04:18:05.148975    4352 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0501 04:18:05.149285    4352 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0501 04:18:05.149285    4352 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0501 04:18:05.149285    4352 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0501 04:18:05.150759    4352 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0501 04:18:05.150843    4352 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0501 04:18:05.150843    4352 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0501 04:18:05.151871    4352 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0501 04:18:05.152603    4352 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0501 04:18:05.153167    4352 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0501 04:18:05.153167    4352 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0501 04:18:05.153457    4352 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0501 04:18:05.154280    4352 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-289800-m02 san=[127.0.0.1 172.28.222.62 localhost minikube multinode-289800-m02]
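
provision.go:117 mints a per-node server certificate whose SANs cover every way the machine may be addressed: loopback, the VM's current IP, and its hostnames. A self-contained Go sketch of such a certificate, self-signed for brevity (an assumption for illustration; minikube instead signs with the shared ca.pem/ca-key.pem listed above):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.multinode-289800-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs match the san=[...] list in the provision.go log line.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.28.222.62")},
			DNSNames:    []string{"localhost", "minikube", "multinode-289800-m02"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
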
	I0501 04:18:05.311191    4352 provision.go:177] copyRemoteCerts
	I0501 04:18:05.326386    4352 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 04:18:05.326386    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 04:18:07.459388    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:18:07.459388    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:18:07.459610    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 04:18:10.046688    4352 main.go:141] libmachine: [stdout =====>] : 172.28.222.62
	
	I0501 04:18:10.046688    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:18:10.047450    4352 sshutil.go:53] new ssh client: &{IP:172.28.222.62 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-289800-m02\id_rsa Username:docker}
	I0501 04:18:10.153058    4352 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8266358s)
	I0501 04:18:10.153058    4352 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0501 04:18:10.153702    4352 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0501 04:18:10.208931    4352 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0501 04:18:10.209465    4352 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0501 04:18:10.262414    4352 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0501 04:18:10.262974    4352 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 04:18:10.313758    4352 provision.go:87] duration metric: took 14.5392904s to configureAuth
	I0501 04:18:10.313758    4352 buildroot.go:189] setting minikube options for container-runtime
	I0501 04:18:10.314419    4352 config.go:182] Loaded profile config "multinode-289800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 04:18:10.314419    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 04:18:12.463097    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:18:12.463386    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:18:12.463530    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 04:18:15.036197    4352 main.go:141] libmachine: [stdout =====>] : 172.28.222.62
	
	I0501 04:18:15.037213    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:18:15.044286    4352 main.go:141] libmachine: Using SSH client type: native
	I0501 04:18:15.045018    4352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.222.62 22 <nil> <nil>}
	I0501 04:18:15.045018    4352 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0501 04:18:15.170285    4352 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0501 04:18:15.170285    4352 buildroot.go:70] root file system type: tmpfs
	I0501 04:18:15.170829    4352 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0501 04:18:15.170960    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 04:18:17.282755    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:18:17.282755    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:18:17.282850    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 04:18:19.942009    4352 main.go:141] libmachine: [stdout =====>] : 172.28.222.62
	
	I0501 04:18:19.942009    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:18:19.948397    4352 main.go:141] libmachine: Using SSH client type: native
	I0501 04:18:19.948883    4352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.222.62 22 <nil> <nil>}
	I0501 04:18:19.949170    4352 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.28.209.199"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0501 04:18:20.116815    4352 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.28.209.199
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0501 04:18:20.116815    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 04:18:22.267186    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:18:22.267186    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:18:22.267458    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 04:18:24.848863    4352 main.go:141] libmachine: [stdout =====>] : 172.28.222.62
	
	I0501 04:18:24.849092    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:18:24.857212    4352 main.go:141] libmachine: Using SSH client type: native
	I0501 04:18:24.858312    4352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.222.62 22 <nil> <nil>}
	I0501 04:18:24.858312    4352 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0501 04:18:27.354933    4352 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0501 04:18:27.355111    4352 machine.go:97] duration metric: took 46.3243011s to provisionDockerMachine
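
The diff/mv command above is a write-then-swap update: the freshly rendered unit is staged as docker.service.new, compared against what is installed, and only on a difference (or, as here, when no unit exists yet) moved into place before systemd is reloaded and docker restarted. A local Go sketch of the staging half, assuming the caller then performs the daemon-reload/enable/restart branch; writeIfChanged is a hypothetical helper, not minikube's code:

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	// writeIfChanged stages the new unit as <path>.new and only moves it
	// into place when it differs from what is installed (or when, as on
	// this node, nothing is installed yet).
	func writeIfChanged(path string, content []byte) (changed bool, err error) {
		old, err := os.ReadFile(path)
		if err == nil && bytes.Equal(old, content) {
			return false, nil // diff -u exited 0: keep the existing unit
		}
		staged := path + ".new"
		if err := os.WriteFile(staged, content, 0o644); err != nil {
			return false, err
		}
		return true, os.Rename(staged, path)
	}

	func main() {
		unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
		changed, err := writeIfChanged("/tmp/docker.service", unit)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("restart needed:", changed)
	}
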
	I0501 04:18:27.355111    4352 start.go:293] postStartSetup for "multinode-289800-m02" (driver="hyperv")
	I0501 04:18:27.355193    4352 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 04:18:27.369117    4352 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 04:18:27.369117    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 04:18:29.459227    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:18:29.459227    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:18:29.459227    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 04:18:32.049572    4352 main.go:141] libmachine: [stdout =====>] : 172.28.222.62
	
	I0501 04:18:32.049572    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:18:32.050853    4352 sshutil.go:53] new ssh client: &{IP:172.28.222.62 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-289800-m02\id_rsa Username:docker}
	I0501 04:18:32.164993    4352 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7958105s)
	I0501 04:18:32.180202    4352 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 04:18:32.189844    4352 command_runner.go:130] > NAME=Buildroot
	I0501 04:18:32.189844    4352 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0501 04:18:32.189844    4352 command_runner.go:130] > ID=buildroot
	I0501 04:18:32.189844    4352 command_runner.go:130] > VERSION_ID=2023.02.9
	I0501 04:18:32.189844    4352 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0501 04:18:32.189844    4352 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 04:18:32.189844    4352 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0501 04:18:32.190501    4352 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0501 04:18:32.191295    4352 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> 142882.pem in /etc/ssl/certs
	I0501 04:18:32.191433    4352 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /etc/ssl/certs/142882.pem
	I0501 04:18:32.205615    4352 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 04:18:32.224717    4352 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /etc/ssl/certs/142882.pem (1708 bytes)
	I0501 04:18:32.279750    4352 start.go:296] duration metric: took 4.9246015s for postStartSetup
	I0501 04:18:32.279750    4352 fix.go:56] duration metric: took 1m29.2021323s for fixHost
	I0501 04:18:32.279750    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 04:18:34.339829    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:18:34.340734    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:18:34.340734    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 04:18:36.859295    4352 main.go:141] libmachine: [stdout =====>] : 172.28.222.62
	
	I0501 04:18:36.859526    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:18:36.867040    4352 main.go:141] libmachine: Using SSH client type: native
	I0501 04:18:36.867995    4352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.222.62 22 <nil> <nil>}
	I0501 04:18:36.867995    4352 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0501 04:18:37.005661    4352 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714537117.002132397
	
	I0501 04:18:37.005661    4352 fix.go:216] guest clock: 1714537117.002132397
	I0501 04:18:37.005661    4352 fix.go:229] Guest: 2024-05-01 04:18:37.002132397 +0000 UTC Remote: 2024-05-01 04:18:32.2797503 +0000 UTC m=+301.181982701 (delta=4.722382097s)
	I0501 04:18:37.005761    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 04:18:39.121420    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:18:39.121420    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:18:39.121420    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 04:18:41.677809    4352 main.go:141] libmachine: [stdout =====>] : 172.28.222.62
	
	I0501 04:18:41.677809    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:18:41.685063    4352 main.go:141] libmachine: Using SSH client type: native
	I0501 04:18:41.685802    4352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.222.62 22 <nil> <nil>}
	I0501 04:18:41.686407    4352 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714537117
	I0501 04:18:41.824590    4352 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed May  1 04:18:37 UTC 2024
	
	I0501 04:18:41.824787    4352 fix.go:236] clock set: Wed May  1 04:18:37 UTC 2024
	 (err=<nil>)
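
fix.go:216-236 reads the guest clock over SSH (date +%s.%N), compares it with the host's timestamp for the same moment, and resets the guest with date -s when they drift; here the guest was about 4.72s ahead. A small Go sketch of the delta computation, using the exact values from the log:

	package main

	import (
		"fmt"
		"time"
	)

	// clockDelta compares the guest's epoch reading against the host's
	// notion of "now" and reports the drift, as fix.go:229 does.
	func clockDelta(guestEpoch float64, hostNow time.Time) time.Duration {
		guest := time.Unix(0, int64(guestEpoch*float64(time.Second)))
		return guest.Sub(hostNow)
	}

	func main() {
		host := time.Date(2024, 5, 1, 4, 18, 32, 279750300, time.UTC)
		d := clockDelta(1714537117.002132397, host)
		fmt.Println(d) // ~4.72s, matching the delta reported at fix.go:229
	}
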
	I0501 04:18:41.824787    4352 start.go:83] releasing machines lock for "multinode-289800-m02", held for 1m38.7480982s
	I0501 04:18:41.825001    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 04:18:43.920486    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:18:43.920486    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:18:43.920486    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 04:18:46.459348    4352 main.go:141] libmachine: [stdout =====>] : 172.28.222.62
	
	I0501 04:18:46.459592    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:18:46.462543    4352 out.go:177] * Found network options:
	I0501 04:18:46.465450    4352 out.go:177]   - NO_PROXY=172.28.209.199
	W0501 04:18:46.467850    4352 proxy.go:119] fail to check proxy env: Error ip not in block
	I0501 04:18:46.470364    4352 out.go:177]   - NO_PROXY=172.28.209.199
	W0501 04:18:46.472962    4352 proxy.go:119] fail to check proxy env: Error ip not in block
	W0501 04:18:46.474858    4352 proxy.go:119] fail to check proxy env: Error ip not in block
	I0501 04:18:46.477481    4352 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 04:18:46.477481    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 04:18:46.492157    4352 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0501 04:18:46.492157    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 04:18:48.655157    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:18:48.655436    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:18:48.655436    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 04:18:48.659774    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:18:48.659774    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:18:48.659774    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 04:18:51.367604    4352 main.go:141] libmachine: [stdout =====>] : 172.28.222.62
	
	I0501 04:18:51.367604    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:18:51.371211    4352 sshutil.go:53] new ssh client: &{IP:172.28.222.62 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-289800-m02\id_rsa Username:docker}
	I0501 04:18:51.411664    4352 main.go:141] libmachine: [stdout =====>] : 172.28.222.62
	
	I0501 04:18:51.411707    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:18:51.411762    4352 sshutil.go:53] new ssh client: &{IP:172.28.222.62 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-289800-m02\id_rsa Username:docker}
	I0501 04:18:51.573801    4352 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0501 04:18:51.573801    4352 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0501 04:18:51.573801    4352 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0962825s)
	I0501 04:18:51.573801    4352 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.0816061s)
	W0501 04:18:51.573929    4352 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 04:18:51.593046    4352 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 04:18:51.625364    4352 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0501 04:18:51.626023    4352 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
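
The find/-exec mv pass above disables competing CNI configurations by renaming any bridge or podman config in /etc/cni/net.d with a .mk_disabled suffix; on this node it caught 87-podman-bridge.conflist. A Go sketch of the same disable-by-rename pass (disableBridgeCNI is an illustrative name):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	// disableBridgeCNI renames bridge/podman CNI configs that are not
	// already disabled, so the kubelet ignores them, mirroring the
	// find -maxdepth 1 ... -exec sh -c "sudo mv {} {}.mk_disabled" step.
	func disableBridgeCNI(dir string) ([]string, error) {
		entries, err := os.ReadDir(dir)
		if err != nil {
			return nil, err
		}
		var moved []string
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				src := filepath.Join(dir, name)
				if err := os.Rename(src, src+".mk_disabled"); err != nil {
					return moved, err
				}
				moved = append(moved, src)
			}
		}
		return moved, nil
	}

	func main() {
		moved, err := disableBridgeCNI("/etc/cni/net.d")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
		fmt.Println(moved) // the log shows /etc/cni/net.d/87-podman-bridge.conflist
	}
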
	I0501 04:18:51.626132    4352 start.go:494] detecting cgroup driver to use...
	I0501 04:18:51.626337    4352 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 04:18:51.675610    4352 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0501 04:18:51.693907    4352 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0501 04:18:51.730463    4352 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0501 04:18:51.750468    4352 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0501 04:18:51.765473    4352 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0501 04:18:51.803530    4352 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 04:18:51.840574    4352 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0501 04:18:51.882587    4352 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 04:18:51.924142    4352 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 04:18:51.964422    4352 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0501 04:18:52.005876    4352 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0501 04:18:52.049028    4352 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0501 04:18:52.091544    4352 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 04:18:52.114703    4352 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0501 04:18:52.129876    4352 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 04:18:52.167809    4352 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 04:18:52.396374    4352 ssh_runner.go:195] Run: sudo systemctl restart containerd
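
The sed pipeline above rewrites /etc/containerd/config.toml in place: the sandbox image is pinned to registry.k8s.io/pause:3.9, restrict_oom_score_adj and SystemdCgroup are forced to false (matching the "cgroupfs" driver chosen at containerd.go:146), and the CNI conf_dir is pointed at /etc/cni/net.d, before containerd is reloaded and restarted. A Go sketch of the core line rewrites; the runtime-name and unprivileged-ports edits are omitted for brevity:

	package main

	import (
		"fmt"
		"regexp"
	)

	// rewriteContainerdConfig applies the main sed substitutions from the
	// log to a config.toml held in memory (illustrative only; minikube
	// performs them remotely with sed -i).
	func rewriteContainerdConfig(in string) string {
		rules := []struct{ re, repl string }{
			{`(?m)^( *)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.9"`},
			{`(?m)^( *)restrict_oom_score_adj = .*$`, `${1}restrict_oom_score_adj = false`},
			{`(?m)^( *)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`},
			{`(?m)^( *)conf_dir = .*$`, `${1}conf_dir = "/etc/cni/net.d"`},
		}
		out := in
		for _, r := range rules {
			out = regexp.MustCompile(r.re).ReplaceAllString(out, r.repl)
		}
		return out
	}

	func main() {
		fmt.Println(rewriteContainerdConfig("  SystemdCgroup = true"))
		// prints "  SystemdCgroup = false"
	}
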
	I0501 04:18:52.435618    4352 start.go:494] detecting cgroup driver to use...
	I0501 04:18:52.449268    4352 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0501 04:18:52.473108    4352 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0501 04:18:52.473108    4352 command_runner.go:130] > [Unit]
	I0501 04:18:52.473226    4352 command_runner.go:130] > Description=Docker Application Container Engine
	I0501 04:18:52.473226    4352 command_runner.go:130] > Documentation=https://docs.docker.com
	I0501 04:18:52.473296    4352 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0501 04:18:52.473296    4352 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0501 04:18:52.473296    4352 command_runner.go:130] > StartLimitBurst=3
	I0501 04:18:52.473296    4352 command_runner.go:130] > StartLimitIntervalSec=60
	I0501 04:18:52.473296    4352 command_runner.go:130] > [Service]
	I0501 04:18:52.473296    4352 command_runner.go:130] > Type=notify
	I0501 04:18:52.473296    4352 command_runner.go:130] > Restart=on-failure
	I0501 04:18:52.473368    4352 command_runner.go:130] > Environment=NO_PROXY=172.28.209.199
	I0501 04:18:52.473368    4352 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0501 04:18:52.473398    4352 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0501 04:18:52.473447    4352 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0501 04:18:52.473473    4352 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0501 04:18:52.473473    4352 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0501 04:18:52.473540    4352 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0501 04:18:52.473540    4352 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0501 04:18:52.473608    4352 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0501 04:18:52.473638    4352 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0501 04:18:52.473638    4352 command_runner.go:130] > ExecStart=
	I0501 04:18:52.473638    4352 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0501 04:18:52.473694    4352 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0501 04:18:52.473720    4352 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0501 04:18:52.473720    4352 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0501 04:18:52.473720    4352 command_runner.go:130] > LimitNOFILE=infinity
	I0501 04:18:52.473720    4352 command_runner.go:130] > LimitNPROC=infinity
	I0501 04:18:52.473720    4352 command_runner.go:130] > LimitCORE=infinity
	I0501 04:18:52.473720    4352 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0501 04:18:52.473720    4352 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0501 04:18:52.473775    4352 command_runner.go:130] > TasksMax=infinity
	I0501 04:18:52.473775    4352 command_runner.go:130] > TimeoutStartSec=0
	I0501 04:18:52.473775    4352 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0501 04:18:52.473802    4352 command_runner.go:130] > Delegate=yes
	I0501 04:18:52.473802    4352 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0501 04:18:52.473802    4352 command_runner.go:130] > KillMode=process
	I0501 04:18:52.473802    4352 command_runner.go:130] > [Install]
	I0501 04:18:52.473802    4352 command_runner.go:130] > WantedBy=multi-user.target
	I0501 04:18:52.486804    4352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 04:18:52.523439    4352 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 04:18:52.573111    4352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 04:18:52.615120    4352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 04:18:52.660845    4352 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0501 04:18:52.723455    4352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 04:18:52.751408    4352 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 04:18:52.793011    4352 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0501 04:18:52.806591    4352 ssh_runner.go:195] Run: which cri-dockerd
	I0501 04:18:52.812592    4352 command_runner.go:130] > /usr/bin/cri-dockerd
	I0501 04:18:52.826322    4352 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0501 04:18:52.848919    4352 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0501 04:18:52.898955    4352 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0501 04:18:53.113927    4352 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0501 04:18:53.313445    4352 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0501 04:18:53.313510    4352 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0501 04:18:53.365106    4352 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 04:18:53.575107    4352 ssh_runner.go:195] Run: sudo systemctl restart docker

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-windows-amd64.exe node list -p multinode-289800" : exit status 1
multinode_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-289800
multinode_test.go:331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node list -p multinode-289800: context deadline exceeded (0s)
multinode_test.go:333: failed to run node list. args "out/minikube-windows-amd64.exe node list -p multinode-289800" : context deadline exceeded
multinode_test.go:338: reported node list is not the same after restart. Before restart: multinode-289800	172.28.209.152
multinode-289800-m02	172.28.219.162
multinode-289800-m03	172.28.223.145

                                                
                                                
After restart: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-289800 -n multinode-289800
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-289800 -n multinode-289800: (12.286881s)
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-289800 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-289800 logs -n 25: (11.583402s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                           |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| cp      | multinode-289800 cp testdata\cp-test.txt                                                                                 | multinode-289800 | minikube6\jenkins | v1.33.0 | 01 May 24 04:03 UTC | 01 May 24 04:03 UTC |
	|         | multinode-289800-m02:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-289800 ssh -n                                                                                                  | multinode-289800 | minikube6\jenkins | v1.33.0 | 01 May 24 04:03 UTC | 01 May 24 04:04 UTC |
	|         | multinode-289800-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-289800 cp multinode-289800-m02:/home/docker/cp-test.txt                                                        | multinode-289800 | minikube6\jenkins | v1.33.0 | 01 May 24 04:04 UTC | 01 May 24 04:04 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile4254052504\001\cp-test_multinode-289800-m02.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-289800 ssh -n                                                                                                  | multinode-289800 | minikube6\jenkins | v1.33.0 | 01 May 24 04:04 UTC | 01 May 24 04:04 UTC |
	|         | multinode-289800-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-289800 cp multinode-289800-m02:/home/docker/cp-test.txt                                                        | multinode-289800 | minikube6\jenkins | v1.33.0 | 01 May 24 04:04 UTC | 01 May 24 04:04 UTC |
	|         | multinode-289800:/home/docker/cp-test_multinode-289800-m02_multinode-289800.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-289800 ssh -n                                                                                                  | multinode-289800 | minikube6\jenkins | v1.33.0 | 01 May 24 04:04 UTC | 01 May 24 04:04 UTC |
	|         | multinode-289800-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-289800 ssh -n multinode-289800 sudo cat                                                                        | multinode-289800 | minikube6\jenkins | v1.33.0 | 01 May 24 04:04 UTC | 01 May 24 04:05 UTC |
	|         | /home/docker/cp-test_multinode-289800-m02_multinode-289800.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-289800 cp multinode-289800-m02:/home/docker/cp-test.txt                                                        | multinode-289800 | minikube6\jenkins | v1.33.0 | 01 May 24 04:05 UTC | 01 May 24 04:05 UTC |
	|         | multinode-289800-m03:/home/docker/cp-test_multinode-289800-m02_multinode-289800-m03.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-289800 ssh -n                                                                                                  | multinode-289800 | minikube6\jenkins | v1.33.0 | 01 May 24 04:05 UTC | 01 May 24 04:05 UTC |
	|         | multinode-289800-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-289800 ssh -n multinode-289800-m03 sudo cat                                                                    | multinode-289800 | minikube6\jenkins | v1.33.0 | 01 May 24 04:05 UTC | 01 May 24 04:05 UTC |
	|         | /home/docker/cp-test_multinode-289800-m02_multinode-289800-m03.txt                                                       |                  |                   |         |                     |                     |
	| cp      | multinode-289800 cp testdata\cp-test.txt                                                                                 | multinode-289800 | minikube6\jenkins | v1.33.0 | 01 May 24 04:05 UTC | 01 May 24 04:05 UTC |
	|         | multinode-289800-m03:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-289800 ssh -n                                                                                                  | multinode-289800 | minikube6\jenkins | v1.33.0 | 01 May 24 04:05 UTC | 01 May 24 04:05 UTC |
	|         | multinode-289800-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-289800 cp multinode-289800-m03:/home/docker/cp-test.txt                                                        | multinode-289800 | minikube6\jenkins | v1.33.0 | 01 May 24 04:05 UTC | 01 May 24 04:06 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile4254052504\001\cp-test_multinode-289800-m03.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-289800 ssh -n                                                                                                  | multinode-289800 | minikube6\jenkins | v1.33.0 | 01 May 24 04:06 UTC | 01 May 24 04:06 UTC |
	|         | multinode-289800-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-289800 cp multinode-289800-m03:/home/docker/cp-test.txt                                                        | multinode-289800 | minikube6\jenkins | v1.33.0 | 01 May 24 04:06 UTC | 01 May 24 04:06 UTC |
	|         | multinode-289800:/home/docker/cp-test_multinode-289800-m03_multinode-289800.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-289800 ssh -n                                                                                                  | multinode-289800 | minikube6\jenkins | v1.33.0 | 01 May 24 04:06 UTC | 01 May 24 04:06 UTC |
	|         | multinode-289800-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-289800 ssh -n multinode-289800 sudo cat                                                                        | multinode-289800 | minikube6\jenkins | v1.33.0 | 01 May 24 04:06 UTC | 01 May 24 04:06 UTC |
	|         | /home/docker/cp-test_multinode-289800-m03_multinode-289800.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-289800 cp multinode-289800-m03:/home/docker/cp-test.txt                                                        | multinode-289800 | minikube6\jenkins | v1.33.0 | 01 May 24 04:06 UTC | 01 May 24 04:07 UTC |
	|         | multinode-289800-m02:/home/docker/cp-test_multinode-289800-m03_multinode-289800-m02.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-289800 ssh -n                                                                                                  | multinode-289800 | minikube6\jenkins | v1.33.0 | 01 May 24 04:07 UTC | 01 May 24 04:07 UTC |
	|         | multinode-289800-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-289800 ssh -n multinode-289800-m02 sudo cat                                                                    | multinode-289800 | minikube6\jenkins | v1.33.0 | 01 May 24 04:07 UTC | 01 May 24 04:07 UTC |
	|         | /home/docker/cp-test_multinode-289800-m03_multinode-289800-m02.txt                                                       |                  |                   |         |                     |                     |
	| node    | multinode-289800 node stop m03                                                                                           | multinode-289800 | minikube6\jenkins | v1.33.0 | 01 May 24 04:07 UTC | 01 May 24 04:07 UTC |
	| node    | multinode-289800 node start                                                                                              | multinode-289800 | minikube6\jenkins | v1.33.0 | 01 May 24 04:08 UTC | 01 May 24 04:11 UTC |
	|         | m03 -v=7 --alsologtostderr                                                                                               |                  |                   |         |                     |                     |
	| node    | list -p multinode-289800                                                                                                 | multinode-289800 | minikube6\jenkins | v1.33.0 | 01 May 24 04:11 UTC |                     |
	| stop    | -p multinode-289800                                                                                                      | multinode-289800 | minikube6\jenkins | v1.33.0 | 01 May 24 04:11 UTC | 01 May 24 04:13 UTC |
	| start   | -p multinode-289800                                                                                                      | multinode-289800 | minikube6\jenkins | v1.33.0 | 01 May 24 04:13 UTC |                     |
	|         | --wait=true -v=8                                                                                                         |                  |                   |         |                     |                     |
	|         | --alsologtostderr                                                                                                        |                  |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/01 04:13:31
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
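	A minimal Go sketch for splitting the klog-style header above into its fields (illustrative only; the regexp and field names are my own, not minikube code):

	package main

	import (
		"fmt"
		"regexp"
	)

	// klogLine matches the [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg layout.
	var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+:\d+)\] (.*)$`)

	func main() {
		sample := "I0501 04:13:31.288320    4352 out.go:291] Setting OutFile to fd 940 ..."
		if m := klogLine.FindStringSubmatch(sample); m != nil {
			fmt.Printf("severity=%s date=%s time=%s tid=%s loc=%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6])
		}
	}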
	I0501 04:13:31.288320    4352 out.go:291] Setting OutFile to fd 940 ...
	I0501 04:13:31.288947    4352 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 04:13:31.289022    4352 out.go:304] Setting ErrFile to fd 872...
	I0501 04:13:31.289022    4352 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 04:13:31.317764    4352 out.go:298] Setting JSON to false
	I0501 04:13:31.321501    4352 start.go:129] hostinfo: {"hostname":"minikube6","uptime":109865,"bootTime":1714426945,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0501 04:13:31.321501    4352 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0501 04:13:31.486610    4352 out.go:177] * [multinode-289800] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0501 04:13:31.500668    4352 notify.go:220] Checking for updates...
	I0501 04:13:31.647903    4352 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0501 04:13:31.864863    4352 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0501 04:13:32.043046    4352 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0501 04:13:32.130520    4352 out.go:177]   - MINIKUBE_LOCATION=18779
	I0501 04:13:32.227582    4352 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0501 04:13:32.391630    4352 config.go:182] Loaded profile config "multinode-289800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 04:13:32.391885    4352 driver.go:392] Setting default libvirt URI to qemu:///system
	I0501 04:13:37.854108    4352 out.go:177] * Using the hyperv driver based on existing profile
	I0501 04:13:37.857331    4352 start.go:297] selected driver: hyperv
	I0501 04:13:37.857446    4352 start.go:901] validating driver "hyperv" against &{Name:multinode-289800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-289800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.209.152 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.219.162 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.28.223.145 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 04:13:37.857707    4352 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0501 04:13:37.924974    4352 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 04:13:37.925065    4352 cni.go:84] Creating CNI manager for ""
	I0501 04:13:37.925065    4352 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0501 04:13:37.925303    4352 start.go:340] cluster config:
	{Name:multinode-289800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-289800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.209.152 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.219.162 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.28.223.145 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 04:13:37.925717    4352 iso.go:125] acquiring lock: {Name:mkc5178610d1c169635b8b232f2713c359020679 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 04:13:37.937898    4352 out.go:177] * Starting "multinode-289800" primary control-plane node in "multinode-289800" cluster
	I0501 04:13:37.942400    4352 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0501 04:13:37.943382    4352 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0501 04:13:37.943482    4352 cache.go:56] Caching tarball of preloaded images
	I0501 04:13:37.943655    4352 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0501 04:13:37.944011    4352 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0501 04:13:37.944211    4352 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\config.json ...
	I0501 04:13:37.947189    4352 start.go:360] acquireMachinesLock for multinode-289800: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 04:13:37.947418    4352 start.go:364] duration metric: took 229.5µs to acquireMachinesLock for "multinode-289800"
	I0501 04:13:37.947418    4352 start.go:96] Skipping create...Using existing machine configuration
	I0501 04:13:37.947418    4352 fix.go:54] fixHost starting: 
	I0501 04:13:37.948120    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 04:13:40.670202    4352 main.go:141] libmachine: [stdout =====>] : Off
	
	I0501 04:13:40.670771    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:13:40.670771    4352 fix.go:112] recreateIfNeeded on multinode-289800: state=Stopped err=<nil>
	W0501 04:13:40.670942    4352 fix.go:138] unexpected machine state, will restart: <nil>
	I0501 04:13:40.678157    4352 out.go:177] * Restarting existing hyperv VM for "multinode-289800" ...
	I0501 04:13:40.681664    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-289800
	I0501 04:13:43.752436    4352 main.go:141] libmachine: [stdout =====>] : 
	I0501 04:13:43.752436    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:13:43.752436    4352 main.go:141] libmachine: Waiting for host to start...
	I0501 04:13:43.752538    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 04:13:45.940331    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:13:45.940331    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:13:45.940433    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 04:13:48.396560    4352 main.go:141] libmachine: [stdout =====>] : 
	I0501 04:13:48.396560    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:13:49.407903    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 04:13:51.581304    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:13:51.581480    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:13:51.581575    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 04:13:54.138280    4352 main.go:141] libmachine: [stdout =====>] : 
	I0501 04:13:54.138280    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:13:55.145649    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 04:13:57.281580    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:13:57.282165    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:13:57.282290    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 04:13:59.773215    4352 main.go:141] libmachine: [stdout =====>] : 
	I0501 04:13:59.773215    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:14:00.787459    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 04:14:02.974363    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:14:02.974363    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:14:02.974363    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 04:14:05.527451    4352 main.go:141] libmachine: [stdout =====>] : 
	I0501 04:14:05.527451    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:14:06.536170    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 04:14:08.686994    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:14:08.687999    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:14:08.688119    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 04:14:11.254131    4352 main.go:141] libmachine: [stdout =====>] : 172.28.209.199
	
	I0501 04:14:11.254131    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:14:11.257032    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 04:14:13.353414    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:14:13.354024    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:14:13.354024    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 04:14:15.869222    4352 main.go:141] libmachine: [stdout =====>] : 172.28.209.199
	
	I0501 04:14:15.869222    4352 main.go:141] libmachine: [stderr =====>] : 
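	The repeated Get-VM calls above are a poll loop: query the VM state, then the first network adapter's first address, sleeping between attempts until an IP appears. A rough standalone Go sketch of that pattern (the PowerShell pipelines are copied from the log; the helper is my own):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// psOut runs a PowerShell pipeline the way the log shows (-NoProfile -NonInteractive).
	func psOut(cmd string) (string, error) {
		out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", cmd).Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		const vm = "multinode-289800"
		for {
			state, err := psOut(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vm))
			if err == nil && state == "Running" {
				ip, _ := psOut(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
				if ip != "" {
					fmt.Println("guest IP:", ip) // e.g. 172.28.209.199 above
					return
				}
			}
			time.Sleep(time.Second) // adapter not ready yet; keep polling
		}
	}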
	I0501 04:14:15.869705    4352 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\config.json ...
	I0501 04:14:15.872177    4352 machine.go:94] provisionDockerMachine start ...
	I0501 04:14:15.872390    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 04:14:17.976735    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:14:17.976838    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:14:17.976838    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 04:14:20.550671    4352 main.go:141] libmachine: [stdout =====>] : 172.28.209.199
	
	I0501 04:14:20.550671    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:14:20.557921    4352 main.go:141] libmachine: Using SSH client type: native
	I0501 04:14:20.558543    4352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.209.199 22 <nil> <nil>}
	I0501 04:14:20.558708    4352 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 04:14:20.688461    4352 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0501 04:14:20.688525    4352 buildroot.go:166] provisioning hostname "multinode-289800"
	I0501 04:14:20.688588    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 04:14:22.841376    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:14:22.841376    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:14:22.841376    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 04:14:25.366118    4352 main.go:141] libmachine: [stdout =====>] : 172.28.209.199
	
	I0501 04:14:25.366118    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:14:25.372321    4352 main.go:141] libmachine: Using SSH client type: native
	I0501 04:14:25.372682    4352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.209.199 22 <nil> <nil>}
	I0501 04:14:25.372819    4352 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-289800 && echo "multinode-289800" | sudo tee /etc/hostname
	I0501 04:14:25.534851    4352 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-289800
	
	I0501 04:14:25.535124    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 04:14:27.621237    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:14:27.621410    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:14:27.621495    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 04:14:30.206576    4352 main.go:141] libmachine: [stdout =====>] : 172.28.209.199
	
	I0501 04:14:30.206576    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:14:30.214870    4352 main.go:141] libmachine: Using SSH client type: native
	I0501 04:14:30.215449    4352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.209.199 22 <nil> <nil>}
	I0501 04:14:30.215449    4352 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-289800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-289800/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-289800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 04:14:30.374292    4352 main.go:141] libmachine: SSH cmd err, output: <nil>: 
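	The snippet just run is an idempotent /etc/hosts fix-up: it does nothing when a line already ends in the hostname, rewrites an existing 127.0.1.1 entry if present, and appends one otherwise. A hedged Go sketch of rendering that command for an arbitrary hostname (the helper is illustrative, not minikube's actual provisioner):

	package main

	import "fmt"

	// hostsFixup reproduces the shell fragment from the log for any hostname.
	func hostsFixup(name string) string {
		return fmt.Sprintf(`
	if ! grep -xq '.*\s%[1]s' /etc/hosts; then
		if grep -xq '127.0.1.1\s.*' /etc/hosts; then
			sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
		else
			echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
		fi
	fi`, name)
	}

	func main() {
		fmt.Println(hostsFixup("multinode-289800"))
	}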
	I0501 04:14:30.374292    4352 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0501 04:14:30.374292    4352 buildroot.go:174] setting up certificates
	I0501 04:14:30.374292    4352 provision.go:84] configureAuth start
	I0501 04:14:30.374292    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 04:14:32.472085    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:14:32.472385    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:14:32.472385    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 04:14:34.988029    4352 main.go:141] libmachine: [stdout =====>] : 172.28.209.199
	
	I0501 04:14:34.988029    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:14:34.988541    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 04:14:37.075640    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:14:37.075640    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:14:37.075810    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 04:14:39.576995    4352 main.go:141] libmachine: [stdout =====>] : 172.28.209.199
	
	I0501 04:14:39.577255    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:14:39.577255    4352 provision.go:143] copyHostCerts
	I0501 04:14:39.577255    4352 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0501 04:14:39.577255    4352 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0501 04:14:39.577255    4352 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0501 04:14:39.577853    4352 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0501 04:14:39.579132    4352 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0501 04:14:39.579491    4352 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0501 04:14:39.579491    4352 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0501 04:14:39.579491    4352 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0501 04:14:39.580823    4352 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0501 04:14:39.580823    4352 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0501 04:14:39.580823    4352 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0501 04:14:39.581410    4352 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0501 04:14:39.582360    4352 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-289800 san=[127.0.0.1 172.28.209.199 localhost minikube multinode-289800]
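	The server cert above carries a SAN list (127.0.0.1, 172.28.209.199, localhost, minikube, multinode-289800) so the Docker endpoint verifies under any of those names. A minimal self-signed crypto/x509 sketch of what such a SAN list means in certificate terms (minikube signs with its CA instead; the SAN values and 26280h lifetime come from the log, the rest is illustrative):

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.multinode-289800"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
			// The SAN list from the log: IPs and DNS names the endpoint may be reached on.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.28.209.199")},
			DNSNames:    []string{"localhost", "minikube", "multinode-289800"},
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}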
	I0501 04:14:39.718225    4352 provision.go:177] copyRemoteCerts
	I0501 04:14:39.731115    4352 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 04:14:39.731115    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 04:14:41.855991    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:14:41.856471    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:14:41.856471    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 04:14:44.416880    4352 main.go:141] libmachine: [stdout =====>] : 172.28.209.199
	
	I0501 04:14:44.416880    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:14:44.418136    4352 sshutil.go:53] new ssh client: &{IP:172.28.209.199 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-289800\id_rsa Username:docker}
	I0501 04:14:44.535525    4352 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8043742s)
	I0501 04:14:44.535525    4352 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0501 04:14:44.536479    4352 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 04:14:44.588410    4352 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0501 04:14:44.588497    4352 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0501 04:14:44.640732    4352 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0501 04:14:44.641009    4352 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0501 04:14:44.692089    4352 provision.go:87] duration metric: took 14.3176884s to configureAuth
	I0501 04:14:44.692089    4352 buildroot.go:189] setting minikube options for container-runtime
	I0501 04:14:44.692366    4352 config.go:182] Loaded profile config "multinode-289800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 04:14:44.692366    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 04:14:46.768804    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:14:46.768804    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:14:46.768907    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 04:14:49.299376    4352 main.go:141] libmachine: [stdout =====>] : 172.28.209.199
	
	I0501 04:14:49.299992    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:14:49.306589    4352 main.go:141] libmachine: Using SSH client type: native
	I0501 04:14:49.306745    4352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.209.199 22 <nil> <nil>}
	I0501 04:14:49.306745    4352 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0501 04:14:49.450631    4352 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0501 04:14:49.450934    4352 buildroot.go:70] root file system type: tmpfs
	I0501 04:14:49.451237    4352 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0501 04:14:49.451237    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 04:14:51.572015    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:14:51.572132    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:14:51.572455    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 04:14:54.196490    4352 main.go:141] libmachine: [stdout =====>] : 172.28.209.199
	
	I0501 04:14:54.196490    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:14:54.202599    4352 main.go:141] libmachine: Using SSH client type: native
	I0501 04:14:54.203382    4352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.209.199 22 <nil> <nil>}
	I0501 04:14:54.203382    4352 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0501 04:14:54.381919    4352 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
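	The %!s(MISSING) in the command above is an artifact of the logger itself: the executed command contains a literal printf %s, and when the log line was re-printed through Go's formatting that verb had no argument. The unit text is rendered from a template before being piped through sudo tee; a reduced text/template sketch in that style (the struct fields and trimmed directives are assumptions, not minikube's real template):

	package main

	import (
		"os"
		"text/template"
	)

	// A trimmed stand-in for the docker.service text above.
	const unitTmpl = "[Unit]\nDescription=Docker Application Container Engine\n\n" +
		"[Service]\nType=notify\nRestart=on-failure\nExecStart=\n" +
		"ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock " +
		"--label provider={{.Provider}} --insecure-registry {{.InsecureRegistry}}\n\n" +
		"[Install]\nWantedBy=multi-user.target\n"

	func main() {
		t := template.Must(template.New("docker.service").Parse(unitTmpl))
		t.Execute(os.Stdout, struct{ Provider, InsecureRegistry string }{"hyperv", "10.96.0.0/12"})
	}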
	I0501 04:14:54.382458    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 04:14:56.475679    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:14:56.475679    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:14:56.475679    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 04:14:59.008395    4352 main.go:141] libmachine: [stdout =====>] : 172.28.209.199
	
	I0501 04:14:59.008395    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:14:59.014390    4352 main.go:141] libmachine: Using SSH client type: native
	I0501 04:14:59.014390    4352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.209.199 22 <nil> <nil>}
	I0501 04:14:59.014390    4352 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0501 04:15:01.616721    4352 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
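	Note the install pattern just used: the rendered unit goes to docker.service.new, and only when diff reports a difference is the file moved into place and the service daemon-reloaded, enabled, and restarted, so an unchanged unit costs nothing. Here diff failed outright ("can't stat") because no unit existed yet on the freshly restarted VM, so the new file was installed and the service enabled (the "Created symlink" line). A small Go sketch composing that swap-if-changed command (generalized by me over the unit path):

	package main

	import "fmt"

	// installUnit renders the diff-or-swap command seen in the log for a given unit path.
	func installUnit(path string) string {
		return fmt.Sprintf("sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
			"sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && "+
			"sudo systemctl -f restart docker; }", path)
	}

	func main() {
		fmt.Println(installUnit("/lib/systemd/system/docker.service"))
	}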
	I0501 04:15:01.616721    4352 machine.go:97] duration metric: took 45.744108s to provisionDockerMachine
	I0501 04:15:01.616721    4352 start.go:293] postStartSetup for "multinode-289800" (driver="hyperv")
	I0501 04:15:01.616721    4352 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 04:15:01.631485    4352 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 04:15:01.631485    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 04:15:03.734156    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:15:03.734250    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:15:03.734250    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 04:15:06.289808    4352 main.go:141] libmachine: [stdout =====>] : 172.28.209.199
	
	I0501 04:15:06.296300    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:15:06.297326    4352 sshutil.go:53] new ssh client: &{IP:172.28.209.199 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-289800\id_rsa Username:docker}
	I0501 04:15:06.408676    4352 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7771539s)
	I0501 04:15:06.426553    4352 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 04:15:06.436535    4352 command_runner.go:130] > NAME=Buildroot
	I0501 04:15:06.436535    4352 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0501 04:15:06.436535    4352 command_runner.go:130] > ID=buildroot
	I0501 04:15:06.436535    4352 command_runner.go:130] > VERSION_ID=2023.02.9
	I0501 04:15:06.436535    4352 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0501 04:15:06.436688    4352 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 04:15:06.436688    4352 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0501 04:15:06.437006    4352 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0501 04:15:06.437786    4352 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> 142882.pem in /etc/ssl/certs
	I0501 04:15:06.437786    4352 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /etc/ssl/certs/142882.pem
	I0501 04:15:06.453838    4352 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 04:15:06.476226    4352 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /etc/ssl/certs/142882.pem (1708 bytes)
	I0501 04:15:06.526513    4352 start.go:296] duration metric: took 4.9097546s for postStartSetup
	I0501 04:15:06.526734    4352 fix.go:56] duration metric: took 1m28.5786431s for fixHost
	I0501 04:15:06.526734    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 04:15:08.628233    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:15:08.628233    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:15:08.628233    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 04:15:11.200675    4352 main.go:141] libmachine: [stdout =====>] : 172.28.209.199
	
	I0501 04:15:11.200675    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:15:11.207510    4352 main.go:141] libmachine: Using SSH client type: native
	I0501 04:15:11.207814    4352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.209.199 22 <nil> <nil>}
	I0501 04:15:11.207814    4352 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0501 04:15:11.350053    4352 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714536911.337265550
	
	I0501 04:15:11.350053    4352 fix.go:216] guest clock: 1714536911.337265550
	I0501 04:15:11.350053    4352 fix.go:229] Guest: 2024-05-01 04:15:11.33726555 +0000 UTC Remote: 2024-05-01 04:15:06.5267349 +0000 UTC m=+95.430511901 (delta=4.81053065s)
	I0501 04:15:11.350168    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 04:15:13.448320    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:15:13.448320    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:15:13.448626    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 04:15:15.947081    4352 main.go:141] libmachine: [stdout =====>] : 172.28.209.199
	
	I0501 04:15:15.947829    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:15:15.955347    4352 main.go:141] libmachine: Using SSH client type: native
	I0501 04:15:15.956092    4352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.209.199 22 <nil> <nil>}
	I0501 04:15:15.956092    4352 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714536911
	I0501 04:15:16.107631    4352 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed May  1 04:15:11 UTC 2024
	
	I0501 04:15:16.107631    4352 fix.go:236] clock set: Wed May  1 04:15:11 UTC 2024
	 (err=<nil>)
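	The clock fix above reads the guest clock over SSH (date +%s.%N), compares it with the host-side reference, and, because the ~4.8 s delta was out of tolerance, resets the guest with date -s @<unix-seconds>. A sketch of the delta arithmetic using the values from the log (the 2 s threshold is my assumption):

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		guest := time.Unix(1714536911, 337265550)                      // date +%s.%N on the VM
		host := time.Date(2024, 5, 1, 4, 15, 6, 526734900, time.UTC)  // host-side reference from the log

		delta := guest.Sub(host)
		fmt.Println("delta:", delta) // ≈ 4.81053065s, matching the log
		if delta > 2*time.Second || delta < -2*time.Second {
			fmt.Printf("would run: sudo date -s @%d\n", guest.Unix())
		}
	}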
	I0501 04:15:16.107631    4352 start.go:83] releasing machines lock for "multinode-289800", held for 1m38.1594665s
	I0501 04:15:16.108173    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 04:15:18.200936    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:15:18.201521    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:15:18.201521    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 04:15:20.731957    4352 main.go:141] libmachine: [stdout =====>] : 172.28.209.199
	
	I0501 04:15:20.731957    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:15:20.736394    4352 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 04:15:20.736928    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 04:15:20.748881    4352 ssh_runner.go:195] Run: cat /version.json
	I0501 04:15:20.748881    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 04:15:22.934696    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:15:22.934696    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:15:22.935403    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 04:15:22.963657    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:15:22.963657    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:15:22.964039    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 04:15:25.608268    4352 main.go:141] libmachine: [stdout =====>] : 172.28.209.199
	
	I0501 04:15:25.608268    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:15:25.609188    4352 sshutil.go:53] new ssh client: &{IP:172.28.209.199 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-289800\id_rsa Username:docker}
	I0501 04:15:25.636508    4352 main.go:141] libmachine: [stdout =====>] : 172.28.209.199
	
	I0501 04:15:25.636508    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:15:25.636508    4352 sshutil.go:53] new ssh client: &{IP:172.28.209.199 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-289800\id_rsa Username:docker}
	I0501 04:15:25.714513    4352 command_runner.go:130] > {"iso_version": "v1.33.0-1714498396-18779", "kicbase_version": "v0.0.43-1714386659-18769", "minikube_version": "v1.33.0", "commit": "0c7995ab2d4914d5c74027eee5f5d102e19316f2"}
	I0501 04:15:25.714726    4352 ssh_runner.go:235] Completed: cat /version.json: (4.9657508s)
	I0501 04:15:25.730428    4352 ssh_runner.go:195] Run: systemctl --version
	I0501 04:15:25.793949    4352 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0501 04:15:25.794001    4352 command_runner.go:130] > systemd 252 (252)
	I0501 04:15:25.794001    4352 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0501 04:15:25.794001    4352 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0575698s)
	I0501 04:15:25.808805    4352 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0501 04:15:25.817742    4352 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0501 04:15:25.818374    4352 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 04:15:25.832513    4352 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 04:15:25.863279    4352 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0501 04:15:25.863947    4352 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0501 04:15:25.863947    4352 start.go:494] detecting cgroup driver to use...
	I0501 04:15:25.863947    4352 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 04:15:25.902209    4352 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
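	The write above pins crictl to containerd's socket. For reference, a tiny Go sketch producing the same /etc/crictl.yaml directly (path and content from the log; the 0644 perms are my assumption):

	package main

	import "os"

	func main() {
		content := []byte("runtime-endpoint: unix:///run/containerd/containerd.sock\n")
		if err := os.WriteFile("/etc/crictl.yaml", content, 0o644); err != nil {
			panic(err) // needs root to write under /etc
		}
	}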
	I0501 04:15:25.915429    4352 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0501 04:15:25.950406    4352 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0501 04:15:25.971423    4352 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0501 04:15:25.985607    4352 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0501 04:15:26.021090    4352 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 04:15:26.056538    4352 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0501 04:15:26.091668    4352 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 04:15:26.126978    4352 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 04:15:26.160769    4352 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0501 04:15:26.196167    4352 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0501 04:15:26.231301    4352 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
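	The sed series above rewrites /etc/containerd/config.toml in place: pin the pause image, relax restrict_oom_score_adj, force SystemdCgroup = false (the "cgroupfs" driver the log names), migrate legacy runtime names to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d. A sketch of applying such an edit list (run locally with os/exec here for illustration; minikube sends each command through its ssh_runner instead):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// A subset of the sed edits shown in the log.
		edits := []string{
			`sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml`,
			`sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml`,
			`sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml`,
		}
		for _, e := range edits {
			if out, err := exec.Command("sh", "-c", e).CombinedOutput(); err != nil {
				fmt.Printf("edit failed: %v: %s\n", err, out)
				return
			}
		}
		fmt.Println("containerd config updated; restart via: sudo systemctl restart containerd")
	}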
	I0501 04:15:26.268795    4352 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 04:15:26.288239    4352 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0501 04:15:26.302228    4352 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 04:15:26.335892    4352 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 04:15:26.546990    4352 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0501 04:15:26.581553    4352 start.go:494] detecting cgroup driver to use...
	I0501 04:15:26.595536    4352 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0501 04:15:26.622168    4352 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0501 04:15:26.622317    4352 command_runner.go:130] > [Unit]
	I0501 04:15:26.622317    4352 command_runner.go:130] > Description=Docker Application Container Engine
	I0501 04:15:26.622317    4352 command_runner.go:130] > Documentation=https://docs.docker.com
	I0501 04:15:26.622317    4352 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0501 04:15:26.622317    4352 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0501 04:15:26.622389    4352 command_runner.go:130] > StartLimitBurst=3
	I0501 04:15:26.622389    4352 command_runner.go:130] > StartLimitIntervalSec=60
	I0501 04:15:26.622389    4352 command_runner.go:130] > [Service]
	I0501 04:15:26.622389    4352 command_runner.go:130] > Type=notify
	I0501 04:15:26.622389    4352 command_runner.go:130] > Restart=on-failure
	I0501 04:15:26.622444    4352 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0501 04:15:26.622444    4352 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0501 04:15:26.622444    4352 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0501 04:15:26.622490    4352 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0501 04:15:26.622490    4352 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0501 04:15:26.622490    4352 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0501 04:15:26.622490    4352 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0501 04:15:26.622553    4352 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0501 04:15:26.622553    4352 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0501 04:15:26.622553    4352 command_runner.go:130] > ExecStart=
	I0501 04:15:26.622651    4352 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0501 04:15:26.622651    4352 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0501 04:15:26.622651    4352 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0501 04:15:26.622721    4352 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0501 04:15:26.622721    4352 command_runner.go:130] > LimitNOFILE=infinity
	I0501 04:15:26.622721    4352 command_runner.go:130] > LimitNPROC=infinity
	I0501 04:15:26.622721    4352 command_runner.go:130] > LimitCORE=infinity
	I0501 04:15:26.622721    4352 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0501 04:15:26.622721    4352 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0501 04:15:26.622781    4352 command_runner.go:130] > TasksMax=infinity
	I0501 04:15:26.622781    4352 command_runner.go:130] > TimeoutStartSec=0
	I0501 04:15:26.622781    4352 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0501 04:15:26.622781    4352 command_runner.go:130] > Delegate=yes
	I0501 04:15:26.622781    4352 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0501 04:15:26.622833    4352 command_runner.go:130] > KillMode=process
	I0501 04:15:26.622833    4352 command_runner.go:130] > [Install]
	I0501 04:15:26.622833    4352 command_runner.go:130] > WantedBy=multi-user.target
	I0501 04:15:26.637102    4352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 04:15:26.672868    4352 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 04:15:26.719884    4352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 04:15:26.761043    4352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 04:15:26.801622    4352 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0501 04:15:26.865354    4352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 04:15:26.892052    4352 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 04:15:26.928130    4352 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0501 04:15:26.943045    4352 ssh_runner.go:195] Run: which cri-dockerd
	I0501 04:15:26.949649    4352 command_runner.go:130] > /usr/bin/cri-dockerd
	I0501 04:15:26.964818    4352 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0501 04:15:26.985039    4352 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0501 04:15:27.034241    4352 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0501 04:15:27.252882    4352 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0501 04:15:27.457917    4352 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0501 04:15:27.458072    4352 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0501 04:15:27.511496    4352 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 04:15:27.734212    4352 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0501 04:15:30.421940    4352 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6877079s)
	I0501 04:15:30.435945    4352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0501 04:15:30.476284    4352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0501 04:15:30.521712    4352 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0501 04:15:30.745880    4352 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0501 04:15:30.955633    4352 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 04:15:31.163514    4352 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0501 04:15:31.208353    4352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0501 04:15:31.247906    4352 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 04:15:31.465061    4352 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0501 04:15:31.581899    4352 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0501 04:15:31.594899    4352 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0501 04:15:31.604023    4352 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0501 04:15:31.604023    4352 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0501 04:15:31.604161    4352 command_runner.go:130] > Device: 0,22	Inode: 850         Links: 1
	I0501 04:15:31.604161    4352 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0501 04:15:31.604161    4352 command_runner.go:130] > Access: 2024-05-01 04:15:31.494988090 +0000
	I0501 04:15:31.604161    4352 command_runner.go:130] > Modify: 2024-05-01 04:15:31.494988090 +0000
	I0501 04:15:31.604161    4352 command_runner.go:130] > Change: 2024-05-01 04:15:31.498988343 +0000
	I0501 04:15:31.604161    4352 command_runner.go:130] >  Birth: -
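The socket wait above ("Will wait 60s for socket path /var/run/cri-dockerd.sock") boils down to polling stat until the path exists and has socket mode, which the stat output confirms. A minimal sketch, with the path and timeout taken from the log and the polling interval an assumption:

// wait_socket.go: poll until a unix socket appears, as in the wait above.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Succeed once the path exists and its mode bits say "socket".
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // interval is an assumption
	}
	return fmt.Errorf("timed out waiting for socket %s", path)
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("socket ready")
}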
	I0501 04:15:31.604225    4352 start.go:562] Will wait 60s for crictl version
	I0501 04:15:31.618391    4352 ssh_runner.go:195] Run: which crictl
	I0501 04:15:31.623995    4352 command_runner.go:130] > /usr/bin/crictl
	I0501 04:15:31.637625    4352 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 04:15:31.693291    4352 command_runner.go:130] > Version:  0.1.0
	I0501 04:15:31.693331    4352 command_runner.go:130] > RuntimeName:  docker
	I0501 04:15:31.693331    4352 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0501 04:15:31.693331    4352 command_runner.go:130] > RuntimeApiVersion:  v1
	I0501 04:15:31.693409    4352 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0501 04:15:31.704186    4352 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0501 04:15:31.736665    4352 command_runner.go:130] > 26.0.2
	I0501 04:15:31.748202    4352 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0501 04:15:31.778482    4352 command_runner.go:130] > 26.0.2
	I0501 04:15:31.782570    4352 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0501 04:15:31.782791    4352 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0501 04:15:31.787351    4352 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0501 04:15:31.787399    4352 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0501 04:15:31.787399    4352 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0501 04:15:31.787399    4352 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:88:d7:f1 Flags:up|broadcast|multicast|running}
	I0501 04:15:31.790168    4352 ip.go:210] interface addr: fe80::916c:67e8:6e10:a19b/64
	I0501 04:15:31.790168    4352 ip.go:210] interface addr: 172.28.208.1/20
	I0501 04:15:31.802274    4352 ssh_runner.go:195] Run: grep 172.28.208.1	host.minikube.internal$ /etc/hosts
	I0501 04:15:31.809415    4352 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.208.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
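The bash one-liner above makes the /etc/hosts update idempotent: filter out any existing host.minikube.internal line, append the current mapping, and copy the result back over /etc/hosts. The same logic as a sketch that edits a local stand-in file (writing somewhere other than /etc/hosts is an assumption for safe local runs):

// hosts_update.go: idempotent host.minikube.internal entry, as above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const path = "hosts" // hypothetical stand-in for /etc/hosts
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Same filter as grep -v $'\thost.minikube.internal$'
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, "172.28.208.1\thost.minikube.internal")
	if err := os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}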
	I0501 04:15:31.833544    4352 kubeadm.go:877] updating cluster {Name:multinode-289800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.30.0 ClusterName:multinode-289800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.209.199 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.219.162 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.28.223.145 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingr
ess-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMir
ror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0501 04:15:31.833837    4352 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0501 04:15:31.845059    4352 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0501 04:15:31.882700    4352 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0501 04:15:31.882700    4352 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0501 04:15:31.882700    4352 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0501 04:15:31.882700    4352 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0501 04:15:31.882700    4352 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0501 04:15:31.882700    4352 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0501 04:15:31.882700    4352 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0501 04:15:31.882700    4352 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0501 04:15:31.882700    4352 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 04:15:31.882700    4352 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0501 04:15:31.882700    4352 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0501 04:15:31.882700    4352 docker.go:615] Images already preloaded, skipping extraction
	I0501 04:15:31.893426    4352 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0501 04:15:31.918492    4352 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0501 04:15:31.918492    4352 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0501 04:15:31.918492    4352 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0501 04:15:31.918492    4352 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0501 04:15:31.918580    4352 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0501 04:15:31.918580    4352 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0501 04:15:31.918580    4352 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0501 04:15:31.918618    4352 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0501 04:15:31.918618    4352 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 04:15:31.918618    4352 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0501 04:15:31.918661    4352 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0501 04:15:31.918744    4352 cache_images.go:84] Images are preloaded, skipping loading
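The preload decision above compares the output of docker images --format {{.Repository}}:{{.Tag}} against the images the target Kubernetes version needs; since every required image is already present, the preload tarball extraction is skipped. A sketch of that check, with the expected list copied from the log output above:

// preload_check.go: decide whether image extraction can be skipped.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	have := map[string]bool{}
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}
	expected := []string{
		"registry.k8s.io/kube-apiserver:v1.30.0",
		"registry.k8s.io/kube-controller-manager:v1.30.0",
		"registry.k8s.io/kube-scheduler:v1.30.0",
		"registry.k8s.io/kube-proxy:v1.30.0",
		"registry.k8s.io/etcd:3.5.12-0",
		"registry.k8s.io/coredns/coredns:v1.11.1",
		"registry.k8s.io/pause:3.9",
	}
	for _, img := range expected {
		if !have[img] {
			fmt.Printf("missing: %s (extraction needed)\n", img)
			return
		}
	}
	fmt.Println("images already preloaded, skipping extraction")
}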
	I0501 04:15:31.918744    4352 kubeadm.go:928] updating node { 172.28.209.199 8443 v1.30.0 docker true true} ...
	I0501 04:15:31.919004    4352 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-289800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.209.199
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-289800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
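The kubelet unit above is rendered from per-node parameters (binary version, hostname override, node IP). A reduced sketch of that rendering with text/template; the template text here is an illustration built from the log output, not minikube's actual template:

// kubelet_unit.go: render the kubelet systemd drop-in from node parameters.
package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	params := struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.30.0", "multinode-289800", "172.28.209.199"}
	t := template.Must(template.New("kubelet").Parse(unit))
	if err := t.Execute(os.Stdout, params); err != nil {
		os.Exit(1)
	}
}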
	I0501 04:15:31.930473    4352 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0501 04:15:31.963619    4352 command_runner.go:130] > cgroupfs
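The cgroup driver answer above comes straight from the container runtime: docker info --format {{.CgroupDriver}} prints cgroupfs, and kubelet and kubeadm are then configured to match. A sketch of that query:

// cgroup_driver.go: ask the docker daemon which cgroup driver it uses.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	driver := strings.TrimSpace(string(out))
	fmt.Printf("docker reports cgroup driver: %s\n", driver)
	if driver != "cgroupfs" && driver != "systemd" {
		fmt.Fprintln(os.Stderr, "unexpected cgroup driver")
		os.Exit(1)
	}
}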
	I0501 04:15:31.963619    4352 cni.go:84] Creating CNI manager for ""
	I0501 04:15:31.963619    4352 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0501 04:15:31.963619    4352 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0501 04:15:31.963619    4352 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.28.209.199 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-289800 NodeName:multinode-289800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.28.209.199"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.28.209.199 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0501 04:15:31.963619    4352 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.28.209.199
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-289800"
	  kubeletExtraArgs:
	    node-ip: 172.28.209.199
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.28.209.199"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
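The generated kubeadm.yaml above is a multi-document YAML file: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, separated by ---. A sketch that splits such a file and reports each document's kind (reading a local copy is an assumption):

// kubeadm_docs.go: enumerate the documents in a multi-doc kubeadm.yaml.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("kubeadm.yaml") // hypothetical local copy
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for i, doc := range strings.Split(string(data), "\n---\n") {
		kind := "unknown"
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				kind = strings.TrimPrefix(line, "kind: ")
				break
			}
		}
		fmt.Printf("document %d: kind=%s\n", i+1, kind)
	}
}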
	I0501 04:15:31.976533    4352 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 04:15:31.996468    4352 command_runner.go:130] > kubeadm
	I0501 04:15:31.996468    4352 command_runner.go:130] > kubectl
	I0501 04:15:31.996468    4352 command_runner.go:130] > kubelet
	I0501 04:15:31.996468    4352 binaries.go:44] Found k8s binaries, skipping transfer
	I0501 04:15:32.009112    4352 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0501 04:15:32.026737    4352 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0501 04:15:32.064689    4352 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 04:15:32.098828    4352 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
	I0501 04:15:32.145922    4352 ssh_runner.go:195] Run: grep 172.28.209.199	control-plane.minikube.internal$ /etc/hosts
	I0501 04:15:32.153373    4352 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.209.199	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0501 04:15:32.189011    4352 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 04:15:32.395009    4352 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 04:15:32.425286    4352 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800 for IP: 172.28.209.199
	I0501 04:15:32.425360    4352 certs.go:194] generating shared ca certs ...
	I0501 04:15:32.425433    4352 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 04:15:32.425976    4352 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0501 04:15:32.426507    4352 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0501 04:15:32.426791    4352 certs.go:256] generating profile certs ...
	I0501 04:15:32.427525    4352 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\client.key
	I0501 04:15:32.427573    4352 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\apiserver.key.98885272
	I0501 04:15:32.427767    4352 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\apiserver.crt.98885272 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.209.199]
	I0501 04:15:32.890331    4352 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\apiserver.crt.98885272 ...
	I0501 04:15:32.890331    4352 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\apiserver.crt.98885272: {Name:mk21d7382a5c76e493cdcfee0142e55c7ff2d410 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 04:15:32.892500    4352 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\apiserver.key.98885272 ...
	I0501 04:15:32.892500    4352 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\apiserver.key.98885272: {Name:mk918e27e5b7cad139e8fb039a59b6bb3e7d585f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 04:15:32.893061    4352 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\apiserver.crt.98885272 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\apiserver.crt
	I0501 04:15:32.906738    4352 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\apiserver.key.98885272 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\apiserver.key
	I0501 04:15:32.908375    4352 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\proxy-client.key
	I0501 04:15:32.908375    4352 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0501 04:15:32.909015    4352 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0501 04:15:32.909069    4352 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0501 04:15:32.909069    4352 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0501 04:15:32.909069    4352 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0501 04:15:32.909874    4352 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0501 04:15:32.910225    4352 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0501 04:15:32.910225    4352 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0501 04:15:32.910824    4352 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem (1338 bytes)
	W0501 04:15:32.911448    4352 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288_empty.pem, impossibly tiny 0 bytes
	I0501 04:15:32.911448    4352 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0501 04:15:32.911448    4352 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0501 04:15:32.912055    4352 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0501 04:15:32.912055    4352 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0501 04:15:32.912659    4352 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem (1708 bytes)
	I0501 04:15:32.913395    4352 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0501 04:15:32.913613    4352 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem -> /usr/share/ca-certificates/14288.pem
	I0501 04:15:32.913745    4352 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /usr/share/ca-certificates/142882.pem
	I0501 04:15:32.915274    4352 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 04:15:32.966578    4352 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0501 04:15:33.016986    4352 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 04:15:33.070060    4352 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0501 04:15:33.120107    4352 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0501 04:15:33.169536    4352 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0501 04:15:33.218972    4352 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 04:15:33.272477    4352 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0501 04:15:33.322743    4352 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 04:15:33.370278    4352 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\14288.pem --> /usr/share/ca-certificates/14288.pem (1338 bytes)
	I0501 04:15:33.430157    4352 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /usr/share/ca-certificates/142882.pem (1708 bytes)
	I0501 04:15:33.485494    4352 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0501 04:15:33.549991    4352 ssh_runner.go:195] Run: openssl version
	I0501 04:15:33.558329    4352 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0501 04:15:33.571737    4352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14288.pem && ln -fs /usr/share/ca-certificates/14288.pem /etc/ssl/certs/14288.pem"
	I0501 04:15:33.608330    4352 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14288.pem
	I0501 04:15:33.615480    4352 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May  1 02:27 /usr/share/ca-certificates/14288.pem
	I0501 04:15:33.615480    4352 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:27 /usr/share/ca-certificates/14288.pem
	I0501 04:15:33.631646    4352 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14288.pem
	I0501 04:15:33.644492    4352 command_runner.go:130] > 51391683
	I0501 04:15:33.658986    4352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14288.pem /etc/ssl/certs/51391683.0"
	I0501 04:15:33.695998    4352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142882.pem && ln -fs /usr/share/ca-certificates/142882.pem /etc/ssl/certs/142882.pem"
	I0501 04:15:33.733500    4352 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142882.pem
	I0501 04:15:33.741187    4352 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May  1 02:27 /usr/share/ca-certificates/142882.pem
	I0501 04:15:33.741187    4352 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:27 /usr/share/ca-certificates/142882.pem
	I0501 04:15:33.754725    4352 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142882.pem
	I0501 04:15:33.765376    4352 command_runner.go:130] > 3ec20f2e
	I0501 04:15:33.778281    4352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142882.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 04:15:33.818201    4352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 04:15:33.854991    4352 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 04:15:33.865184    4352 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May  1 02:12 /usr/share/ca-certificates/minikubeCA.pem
	I0501 04:15:33.865184    4352 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:12 /usr/share/ca-certificates/minikubeCA.pem
	I0501 04:15:33.879144    4352 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 04:15:33.888708    4352 command_runner.go:130] > b5213941
	I0501 04:15:33.901582    4352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 04:15:33.939426    4352 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 04:15:33.949707    4352 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 04:15:33.949707    4352 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0501 04:15:33.949707    4352 command_runner.go:130] > Device: 8,1	Inode: 6290258     Links: 1
	I0501 04:15:33.949707    4352 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0501 04:15:33.949707    4352 command_runner.go:130] > Access: 2024-05-01 03:52:03.205304599 +0000
	I0501 04:15:33.949904    4352 command_runner.go:130] > Modify: 2024-05-01 03:52:03.205304599 +0000
	I0501 04:15:33.949904    4352 command_runner.go:130] > Change: 2024-05-01 03:52:03.205304599 +0000
	I0501 04:15:33.949904    4352 command_runner.go:130] >  Birth: 2024-05-01 03:52:03.205304599 +0000
	I0501 04:15:33.962727    4352 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0501 04:15:33.974038    4352 command_runner.go:130] > Certificate will not expire
	I0501 04:15:33.988289    4352 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0501 04:15:33.998318    4352 command_runner.go:130] > Certificate will not expire
	I0501 04:15:34.012568    4352 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0501 04:15:34.023671    4352 command_runner.go:130] > Certificate will not expire
	I0501 04:15:34.035938    4352 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0501 04:15:34.046394    4352 command_runner.go:130] > Certificate will not expire
	I0501 04:15:34.059796    4352 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0501 04:15:34.069370    4352 command_runner.go:130] > Certificate will not expire
	I0501 04:15:34.083300    4352 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0501 04:15:34.094154    4352 command_runner.go:130] > Certificate will not expire
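Each openssl x509 -checkend 86400 run above asks whether a certificate expires within the next 24 hours; a zero exit prints "Certificate will not expire" and lets the existing cert be reused. The equivalent check with Go's crypto/x509 (the local file path is an assumption):

// cert_checkend.go: report whether a PEM certificate expires within 24h.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("apiserver-kubelet-client.crt") // hypothetical local path
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Same semantics as -checkend 86400: fail if NotAfter falls inside
	// the next 24 hours.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("Certificate will expire")
		os.Exit(1)
	}
	fmt.Println("Certificate will not expire")
}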
	I0501 04:15:34.094636    4352 kubeadm.go:391] StartCluster: {Name:multinode-289800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
0.0 ClusterName:multinode-289800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.209.199 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.219.162 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.28.223.145 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress
-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 04:15:34.108882    4352 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0501 04:15:34.149085    4352 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0501 04:15:34.171011    4352 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0501 04:15:34.171087    4352 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0501 04:15:34.171087    4352 command_runner.go:130] > /var/lib/minikube/etcd:
	I0501 04:15:34.171087    4352 command_runner.go:130] > member
	W0501 04:15:34.171498    4352 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0501 04:15:34.171498    4352 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0501 04:15:34.171620    4352 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0501 04:15:34.185953    4352 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0501 04:15:34.209623    4352 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0501 04:15:34.210556    4352 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-289800" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0501 04:15:34.211057    4352 kubeconfig.go:62] C:\Users\jenkins.minikube6\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-289800" cluster setting kubeconfig missing "multinode-289800" context setting]
	I0501 04:15:34.211674    4352 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 04:15:34.226243    4352 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0501 04:15:34.226770    4352 kapi.go:59] client config for multinode-289800: &rest.Config{Host:"https://172.28.209.199:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-289800/client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-289800/client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADat
a:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1b95ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0501 04:15:34.228218    4352 cert_rotation.go:137] Starting client certificate rotation controller
	I0501 04:15:34.240955    4352 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0501 04:15:34.260952    4352 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0501 04:15:34.260952    4352 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0501 04:15:34.260952    4352 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0501 04:15:34.260952    4352 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0501 04:15:34.260952    4352 command_runner.go:130] >  kind: InitConfiguration
	I0501 04:15:34.260952    4352 command_runner.go:130] >  localAPIEndpoint:
	I0501 04:15:34.260952    4352 command_runner.go:130] > -  advertiseAddress: 172.28.209.152
	I0501 04:15:34.260952    4352 command_runner.go:130] > +  advertiseAddress: 172.28.209.199
	I0501 04:15:34.260952    4352 command_runner.go:130] >    bindPort: 8443
	I0501 04:15:34.260952    4352 command_runner.go:130] >  bootstrapTokens:
	I0501 04:15:34.260952    4352 command_runner.go:130] >    - groups:
	I0501 04:15:34.260952    4352 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0501 04:15:34.260952    4352 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0501 04:15:34.260952    4352 command_runner.go:130] >    name: "multinode-289800"
	I0501 04:15:34.260952    4352 command_runner.go:130] >    kubeletExtraArgs:
	I0501 04:15:34.260952    4352 command_runner.go:130] > -    node-ip: 172.28.209.152
	I0501 04:15:34.260952    4352 command_runner.go:130] > +    node-ip: 172.28.209.199
	I0501 04:15:34.260952    4352 command_runner.go:130] >    taints: []
	I0501 04:15:34.260952    4352 command_runner.go:130] >  ---
	I0501 04:15:34.260952    4352 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0501 04:15:34.260952    4352 command_runner.go:130] >  kind: ClusterConfiguration
	I0501 04:15:34.260952    4352 command_runner.go:130] >  apiServer:
	I0501 04:15:34.260952    4352 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.28.209.152"]
	I0501 04:15:34.260952    4352 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.28.209.199"]
	I0501 04:15:34.260952    4352 command_runner.go:130] >    extraArgs:
	I0501 04:15:34.260952    4352 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0501 04:15:34.260952    4352 command_runner.go:130] >  controllerManager:
	I0501 04:15:34.260952    4352 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.28.209.152
	+  advertiseAddress: 172.28.209.199
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-289800"
	   kubeletExtraArgs:
	-    node-ip: 172.28.209.152
	+    node-ip: 172.28.209.199
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.28.209.152"]
	+  certSANs: ["127.0.0.1", "localhost", "172.28.209.199"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
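The drift detection above leans on diff's exit status: sudo diff -u kubeadm.yaml kubeadm.yaml.new exits 0 when the files match and 1 when they differ, and the diff body doubles as the log message. A sketch of that pattern (local file names are assumptions):

// config_drift.go: detect kubeadm config drift via diff's exit code.
package main

import (
	"errors"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("diff", "-u", "kubeadm.yaml", "kubeadm.yaml.new")
	out, err := cmd.Output()
	if err == nil {
		fmt.Println("no config drift") // exit 0: files identical
		return
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
		// exit 1: files differ; the captured diff explains what changed
		fmt.Println("detected kubeadm config drift:")
		fmt.Print(string(out))
		return
	}
	fmt.Fprintln(os.Stderr, err) // exit 2 or exec failure: real error
	os.Exit(1)
}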
	I0501 04:15:34.260952    4352 kubeadm.go:1154] stopping kube-system containers ...
	I0501 04:15:34.270992    4352 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0501 04:15:34.300955    4352 command_runner.go:130] > 15c4496e3a9f
	I0501 04:15:34.300955    4352 command_runner.go:130] > ee2238f98e35
	I0501 04:15:34.301961    4352 command_runner.go:130] > 3e8d5ff9a9e4
	I0501 04:15:34.301961    4352 command_runner.go:130] > baf9e690eb53
	I0501 04:15:34.301961    4352 command_runner.go:130] > 9971ef577f2f
	I0501 04:15:34.301961    4352 command_runner.go:130] > 9d509d032dc6
	I0501 04:15:34.301961    4352 command_runner.go:130] > 6d5f881ef398
	I0501 04:15:34.301961    4352 command_runner.go:130] > 502684407b0c
	I0501 04:15:34.301961    4352 command_runner.go:130] > 79bb6a06ed52
	I0501 04:15:34.301961    4352 command_runner.go:130] > 4df6ba73bcf6
	I0501 04:15:34.301961    4352 command_runner.go:130] > 3244d1ee5ab4
	I0501 04:15:34.301961    4352 command_runner.go:130] > 4b62556f40be
	I0501 04:15:34.301961    4352 command_runner.go:130] > bbbe9bf27685
	I0501 04:15:34.301961    4352 command_runner.go:130] > 06f1f84bfde1
	I0501 04:15:34.301961    4352 command_runner.go:130] > f72a1c5b5cdd
	I0501 04:15:34.301961    4352 command_runner.go:130] > 479b3ec741be
	I0501 04:15:34.301961    4352 command_runner.go:130] > 976a9ff433cc
	I0501 04:15:34.301961    4352 command_runner.go:130] > a338ea43bd9b
	I0501 04:15:34.306243    4352 docker.go:483] Stopping containers: [15c4496e3a9f ee2238f98e35 3e8d5ff9a9e4 baf9e690eb53 9971ef577f2f 9d509d032dc6 6d5f881ef398 502684407b0c 79bb6a06ed52 4df6ba73bcf6 3244d1ee5ab4 4b62556f40be bbbe9bf27685 06f1f84bfde1 f72a1c5b5cdd 479b3ec741be 976a9ff433cc a338ea43bd9b]
	I0501 04:15:34.318171    4352 ssh_runner.go:195] Run: docker stop 15c4496e3a9f ee2238f98e35 3e8d5ff9a9e4 baf9e690eb53 9971ef577f2f 9d509d032dc6 6d5f881ef398 502684407b0c 79bb6a06ed52 4df6ba73bcf6 3244d1ee5ab4 4b62556f40be bbbe9bf27685 06f1f84bfde1 f72a1c5b5cdd 479b3ec741be 976a9ff433cc a338ea43bd9b
	I0501 04:15:34.352777    4352 command_runner.go:130] > 15c4496e3a9f
	I0501 04:15:34.352854    4352 command_runner.go:130] > ee2238f98e35
	I0501 04:15:34.352911    4352 command_runner.go:130] > 3e8d5ff9a9e4
	I0501 04:15:34.352911    4352 command_runner.go:130] > baf9e690eb53
	I0501 04:15:34.352911    4352 command_runner.go:130] > 9971ef577f2f
	I0501 04:15:34.352911    4352 command_runner.go:130] > 9d509d032dc6
	I0501 04:15:34.352911    4352 command_runner.go:130] > 6d5f881ef398
	I0501 04:15:34.352911    4352 command_runner.go:130] > 502684407b0c
	I0501 04:15:34.352911    4352 command_runner.go:130] > 79bb6a06ed52
	I0501 04:15:34.353018    4352 command_runner.go:130] > 4df6ba73bcf6
	I0501 04:15:34.353018    4352 command_runner.go:130] > 3244d1ee5ab4
	I0501 04:15:34.353168    4352 command_runner.go:130] > 4b62556f40be
	I0501 04:15:34.353168    4352 command_runner.go:130] > bbbe9bf27685
	I0501 04:15:34.353168    4352 command_runner.go:130] > 06f1f84bfde1
	I0501 04:15:34.353168    4352 command_runner.go:130] > f72a1c5b5cdd
	I0501 04:15:34.353168    4352 command_runner.go:130] > 479b3ec741be
	I0501 04:15:34.353168    4352 command_runner.go:130] > 976a9ff433cc
	I0501 04:15:34.353168    4352 command_runner.go:130] > a338ea43bd9b
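Stopping the control-plane containers above is a two-step docker call: list IDs matching the k8s_.*_(kube-system)_ name pattern, then pass them all to one docker stop. A sketch with the filter copied from the log:

// stop_kube_containers.go: stop all kube-system containers, as above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	list := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_.*_(kube-system)_", "--format", "{{.ID}}")
	out, err := list.Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		fmt.Println("no kube-system containers to stop")
		return
	}
	fmt.Printf("Stopping containers: %v\n", ids)
	if err := exec.Command("docker", append([]string{"stop"}, ids...)...).Run(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}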
	I0501 04:15:34.366922    4352 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0501 04:15:34.411972    4352 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 04:15:34.432098    4352 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0501 04:15:34.432098    4352 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0501 04:15:34.432098    4352 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0501 04:15:34.432098    4352 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 04:15:34.432098    4352 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0501 04:15:34.432098    4352 kubeadm.go:156] found existing configuration files:
	
	I0501 04:15:34.447151    4352 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0501 04:15:34.466643    4352 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 04:15:34.467481    4352 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0501 04:15:34.481495    4352 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0501 04:15:34.514013    4352 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0501 04:15:34.530843    4352 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 04:15:34.530843    4352 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0501 04:15:34.543860    4352 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0501 04:15:34.578503    4352 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0501 04:15:34.597585    4352 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 04:15:34.598091    4352 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0501 04:15:34.613522    4352 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 04:15:34.647336    4352 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0501 04:15:34.670140    4352 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 04:15:34.670817    4352 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0501 04:15:34.687503    4352 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0501 04:15:34.723592    4352 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 04:15:34.746967    4352 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 04:15:35.072697    4352 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0501 04:15:35.072765    4352 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0501 04:15:35.072765    4352 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0501 04:15:35.072765    4352 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0501 04:15:35.072765    4352 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0501 04:15:35.072817    4352 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0501 04:15:35.072817    4352 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0501 04:15:35.072817    4352 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0501 04:15:35.072817    4352 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0501 04:15:35.072817    4352 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0501 04:15:35.072817    4352 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0501 04:15:35.072885    4352 command_runner.go:130] > [certs] Using the existing "sa" key
	I0501 04:15:35.072885    4352 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 04:15:36.392186    4352 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0501 04:15:36.392186    4352 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0501 04:15:36.392305    4352 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0501 04:15:36.392305    4352 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0501 04:15:36.392305    4352 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0501 04:15:36.392305    4352 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0501 04:15:36.392305    4352 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.3193546s)
	I0501 04:15:36.392413    4352 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0501 04:15:36.709077    4352 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0501 04:15:36.709077    4352 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0501 04:15:36.709077    4352 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0501 04:15:36.709077    4352 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 04:15:36.808642    4352 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0501 04:15:36.808874    4352 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0501 04:15:36.818113    4352 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0501 04:15:36.819722    4352 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0501 04:15:36.831140    4352 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0501 04:15:36.942295    4352 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
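Note: the commands above run kubeadm init one phase at a time (certs, kubeconfig, kubelet-start, control-plane, etcd) against the prebuilt /var/tmp/minikube/kubeadm.yaml, with PATH pinned so kubeadm finds the bundled v1.30.0 binaries. A hedged sketch of that driver loop, assuming a local /bin/bash rather than the SSH session minikube actually uses:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same phase sequence as the log; paths and the pinned version come
	// from the log lines above and are not hard requirements.
	const env = `sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" `
	for _, phase := range []string{
		"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local",
	} {
		cmd := env + "kubeadm init phase " + phase + " --config /var/tmp/minikube/kubeadm.yaml"
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Printf("%s\n%s", cmd, out)
		if err != nil {
			fmt.Println("phase failed:", err)
			return
		}
	}
}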
	I0501 04:15:36.942295    4352 api_server.go:52] waiting for apiserver process to appear ...
	I0501 04:15:36.958675    4352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 04:15:37.460033    4352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 04:15:37.961693    4352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 04:15:38.470310    4352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 04:15:38.958889    4352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 04:15:38.983430    4352 command_runner.go:130] > 1873
	I0501 04:15:38.983546    4352 api_server.go:72] duration metric: took 2.041236s to wait for apiserver process to appear ...
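Note: between phases, api_server.go polls "sudo pgrep -xnf kube-apiserver.*minikube.*" on roughly a 500ms interval until a PID appears (1873 above, after about 2s). A standard-library sketch of that wait:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep until the pattern matches or the deadline
// passes, approximating the ~500ms loop in the log. pgrep -xnf matches the
// full command line exactly and prints the newest matching PID.
func waitForProcess(pattern string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if out, err := exec.Command("sudo", "pgrep", "-xnf", pattern).Output(); err == nil {
			return string(out), nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return "", fmt.Errorf("no process matching %q after %s", pattern, timeout)
}

func main() {
	pid, err := waitForProcess("kube-apiserver.*minikube.*", time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Print("apiserver pid: ", pid)
}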
	I0501 04:15:38.983615    4352 api_server.go:88] waiting for apiserver healthz status ...
	I0501 04:15:38.983669    4352 api_server.go:253] Checking apiserver healthz at https://172.28.209.199:8443/healthz ...
	I0501 04:15:42.390528    4352 api_server.go:279] https://172.28.209.199:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0501 04:15:42.390528    4352 api_server.go:103] status: https://172.28.209.199:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0501 04:15:42.390722    4352 api_server.go:253] Checking apiserver healthz at https://172.28.209.199:8443/healthz ...
	I0501 04:15:42.537044    4352 api_server.go:279] https://172.28.209.199:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 04:15:42.537399    4352 api_server.go:103] status: https://172.28.209.199:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 04:15:42.537399    4352 api_server.go:253] Checking apiserver healthz at https://172.28.209.199:8443/healthz ...
	I0501 04:15:42.546792    4352 api_server.go:279] https://172.28.209.199:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 04:15:42.546792    4352 api_server.go:103] status: https://172.28.209.199:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 04:15:42.993584    4352 api_server.go:253] Checking apiserver healthz at https://172.28.209.199:8443/healthz ...
	I0501 04:15:43.000750    4352 api_server.go:279] https://172.28.209.199:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 04:15:43.001812    4352 api_server.go:103] status: https://172.28.209.199:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 04:15:43.485992    4352 api_server.go:253] Checking apiserver healthz at https://172.28.209.199:8443/healthz ...
	I0501 04:15:43.510084    4352 api_server.go:279] https://172.28.209.199:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 04:15:43.510605    4352 api_server.go:103] status: https://172.28.209.199:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 04:15:43.995080    4352 api_server.go:253] Checking apiserver healthz at https://172.28.209.199:8443/healthz ...
	I0501 04:15:44.017664    4352 api_server.go:279] https://172.28.209.199:8443/healthz returned 200:
	ok
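Note: the healthz progression above is typical of an apiserver restart: the first probe gets 403 because anonymous access to /healthz is refused until the RBAC bootstrap roles exist, later probes get 500 while individual poststarthooks (the [-] lines) are still completing, and the wait ends once /healthz returns 200. A sketch of the polling client, assuming an anonymous TLS-skipping GET like the probe in the log:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// The in-VM CA is not in our trust store, so skip verification; the
	// real client likewise tolerates 403 and 500 and keeps polling.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://172.28.209.199:8443/healthz" // endpoint taken from the log
	for deadline := time.Now().Add(2 * time.Minute); time.Now().Before(deadline); time.Sleep(500 * time.Millisecond) {
		resp, err := client.Get(url)
		if err != nil {
			continue // apiserver not accepting connections yet
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Printf("healthz: %s\n", body)
			return
		}
		fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
	}
	fmt.Println("apiserver never became healthy")
}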
	I0501 04:15:44.018164    4352 round_trippers.go:463] GET https://172.28.209.199:8443/version
	I0501 04:15:44.018164    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:44.018164    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:44.018164    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:44.047730    4352 round_trippers.go:574] Response Status: 200 OK in 29 milliseconds
	I0501 04:15:44.048196    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:44.048256    4352 round_trippers.go:580]     Audit-Id: 65811502-b9b2-4c06-a707-b36953dc64a0
	I0501 04:15:44.048256    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:44.048256    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:44.048319    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:44.048319    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:44.048319    4352 round_trippers.go:580]     Content-Length: 263
	I0501 04:15:44.048381    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:44 GMT
	I0501 04:15:44.048443    4352 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0501 04:15:44.048720    4352 api_server.go:141] control plane version: v1.30.0
	I0501 04:15:44.048770    4352 api_server.go:131] duration metric: took 5.0651163s to wait for apiserver health ...
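Note: once healthy, the client GETs /version and reads the version document shown above to record the control-plane version (v1.30.0 here). A dependency-free sketch of decoding that payload; the struct mirrors the fields used here and the full document matches k8s.io/apimachinery/pkg/version.Info:

package main

import (
	"encoding/json"
	"fmt"
)

type versionInfo struct {
	Major      string `json:"major"`
	Minor      string `json:"minor"`
	GitVersion string `json:"gitVersion"`
	Platform   string `json:"platform"`
}

func main() {
	// Trimmed copy of the response body above.
	raw := []byte(`{"major":"1","minor":"30","gitVersion":"v1.30.0","platform":"linux/amd64"}`)
	var v versionInfo
	if err := json.Unmarshal(raw, &v); err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion)
}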
	I0501 04:15:44.048829    4352 cni.go:84] Creating CNI manager for ""
	I0501 04:15:44.048901    4352 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0501 04:15:44.052194    4352 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0501 04:15:44.070532    4352 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0501 04:15:44.080313    4352 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0501 04:15:44.080313    4352 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0501 04:15:44.081300    4352 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0501 04:15:44.081369    4352 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0501 04:15:44.081413    4352 command_runner.go:130] > Access: 2024-05-01 04:14:09.889750900 +0000
	I0501 04:15:44.081478    4352 command_runner.go:130] > Modify: 2024-04-30 23:29:30.000000000 +0000
	I0501 04:15:44.081478    4352 command_runner.go:130] > Change: 2024-05-01 04:13:59.112000000 +0000
	I0501 04:15:44.081539    4352 command_runner.go:130] >  Birth: -
	I0501 04:15:44.081643    4352 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0501 04:15:44.081643    4352 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0501 04:15:44.161304    4352 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0501 04:15:45.347093    4352 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0501 04:15:45.347093    4352 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0501 04:15:45.347093    4352 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0501 04:15:45.347213    4352 command_runner.go:130] > daemonset.apps/kindnet configured
	I0501 04:15:45.347213    4352 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.1859005s)
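Note: with three nodes found, cni.go selects kindnet, sanity-checks the stock CNI plugins via "stat /opt/cni/bin/portmap", copies the manifest into the VM, and applies it with the pinned kubectl; the "unchanged"/"configured" lines are ordinary kubectl apply output against existing objects. A local sketch of the check-then-apply step, using the in-VM paths from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Verify the standard CNI plugins are installed before applying a
	// manifest that depends on them (portmap backs hostPort support).
	if _, err := os.Stat("/opt/cni/bin/portmap"); err != nil {
		fmt.Println("CNI plugins missing:", err)
		return
	}
	// Apply the CNI manifest with an explicit kubeconfig, as in the log.
	out, err := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.30.0/kubectl", "apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}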
	I0501 04:15:45.347213    4352 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 04:15:45.347213    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods
	I0501 04:15:45.347213    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:45.347213    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:45.347213    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:45.355046    4352 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 04:15:45.355046    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:45.355141    4352 round_trippers.go:580]     Audit-Id: 6940c17b-b650-411a-a60f-d8e97978e311
	I0501 04:15:45.355141    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:45.355141    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:45.355141    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:45.355141    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:45.355141    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:45 GMT
	I0501 04:15:45.356769    4352 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1832"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 95624 chars]

	I0501 04:15:45.365322    4352 system_pods.go:59] 13 kube-system pods found
	I0501 04:15:45.365384    4352 system_pods.go:61] "coredns-7db6d8ff4d-8w9hq" [e3a349e9-97d8-4bba-8eac-deff1948600a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0501 04:15:45.365384    4352 system_pods.go:61] "coredns-7db6d8ff4d-x9zrw" [0b91b14d-bed3-4889-b193-db53daccd395] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0501 04:15:45.365384    4352 system_pods.go:61] "etcd-multinode-289800" [aaf534b6-9f4c-445d-afb9-bd225e1a77fd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0501 04:15:45.365384    4352 system_pods.go:61] "kindnet-4m5vg" [4d06e665-b4c1-40b9-bbb8-c35bfe35385e] Running
	I0501 04:15:45.365384    4352 system_pods.go:61] "kindnet-gzz7p" [576f33f3-f244-48f0-ae69-30c8f38ed871] Running
	I0501 04:15:45.365384    4352 system_pods.go:61] "kindnet-vcxkr" [72ef61d4-4437-40da-86e7-4d7eb386b6de] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0501 04:15:45.365384    4352 system_pods.go:61] "kube-apiserver-multinode-289800" [0ee77673-e4b3-4fba-a855-ef6876337257] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0501 04:15:45.365384    4352 system_pods.go:61] "kube-controller-manager-multinode-289800" [fd3e5c6f-55cb-47c8-b0bc-c9b0dbe3b318] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0501 04:15:45.365384    4352 system_pods.go:61] "kube-proxy-bp9zx" [aba82e50-b8f8-40b4-b08a-6d045314d6b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0501 04:15:45.365384    4352 system_pods.go:61] "kube-proxy-g8mbm" [ef0e1817-6682-4b8f-affa-c10021247006] Running
	I0501 04:15:45.365384    4352 system_pods.go:61] "kube-proxy-rlzp8" [b37d8d5d-a7cb-4848-a8a2-11d9761e08d6] Running
	I0501 04:15:45.365384    4352 system_pods.go:61] "kube-scheduler-multinode-289800" [c7518f03-993b-432f-b742-8805dd2167a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0501 04:15:45.365384    4352 system_pods.go:61] "storage-provisioner" [b8d2a827-d9a6-419a-a076-c7695a16a2b5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0501 04:15:45.365384    4352 system_pods.go:74] duration metric: took 18.1702ms to wait for pod list to return data ...
	I0501 04:15:45.365384    4352 node_conditions.go:102] verifying NodePressure condition ...
	I0501 04:15:45.365384    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes
	I0501 04:15:45.365384    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:45.365384    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:45.365384    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:45.371080    4352 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 04:15:45.371346    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:45.371437    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:45.371512    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:45 GMT
	I0501 04:15:45.371512    4352 round_trippers.go:580]     Audit-Id: a8607618-1cb7-49a0-9625-a62dfc1110fe
	I0501 04:15:45.371512    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:45.371512    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:45.371512    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:45.371512    4352 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1832"},"items":[{"metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1752","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15631 chars]
	I0501 04:15:45.373004    4352 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 04:15:45.373004    4352 node_conditions.go:123] node cpu capacity is 2
	I0501 04:15:45.373004    4352 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 04:15:45.373004    4352 node_conditions.go:123] node cpu capacity is 2
	I0501 04:15:45.373004    4352 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 04:15:45.373004    4352 node_conditions.go:123] node cpu capacity is 2
	I0501 04:15:45.373004    4352 node_conditions.go:105] duration metric: took 7.6204ms to run NodePressure ...
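Note: system_pods.go lists the kube-system pods once, then node_conditions.go reads each node's capacity, printing the ephemeral-storage and CPU figures once per node (three nodes, hence three pairs of lines). A client-go sketch of the same two reads; the kubeconfig path is illustrative:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// One pod list, as in system_pods.go.
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))

	// One node list, then the two capacity figures the log prints per node.
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("node storage ephemeral capacity is %s\n", n.Status.Capacity.StorageEphemeral())
		fmt.Printf("node cpu capacity is %s\n", n.Status.Capacity.Cpu())
	}
}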
	I0501 04:15:45.373004    4352 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 04:15:45.706557    4352 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0501 04:15:45.852447    4352 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0501 04:15:45.855768    4352 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0501 04:15:45.855768    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0501 04:15:45.855768    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:45.855768    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:45.855768    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:45.866929    4352 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0501 04:15:45.867693    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:45.867753    4352 round_trippers.go:580]     Audit-Id: 9592acb9-5669-42e4-84dc-1773eaf73c9f
	I0501 04:15:45.867753    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:45.867753    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:45.867753    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:45.867753    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:45.867753    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:45 GMT
	I0501 04:15:45.869064    4352 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1838"},"items":[{"metadata":{"name":"etcd-multinode-289800","namespace":"kube-system","uid":"aaf534b6-9f4c-445d-afb9-bd225e1a77fd","resourceVersion":"1787","creationTimestamp":"2024-05-01T04:15:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.209.199:2379","kubernetes.io/config.hash":"b12e9024402f49cfac7440d6a2eaf42d","kubernetes.io/config.mirror":"b12e9024402f49cfac7440d6a2eaf42d","kubernetes.io/config.seen":"2024-05-01T04:15:36.949387188Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T04:15:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f [truncated 30563 chars]
	I0501 04:15:45.871088    4352 kubeadm.go:733] kubelet initialised
	I0501 04:15:45.871088    4352 kubeadm.go:734] duration metric: took 15.3202ms waiting for restarted kubelet to initialise ...
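Note: the "tier%!D(MISSING)control-plane" in the request URL above is a Go fmt artifact, not a bad selector: the URL-encoded selector "tier%3Dcontrol-plane" was passed through a printf-style logger, which reads "%3D" as a format verb with a missing operand. The actual query lists pods filtered by "tier=control-plane", and a non-empty result is what "kubelet initialised" reports, since static control-plane pods carry that label. A client-go sketch of that query (kubeconfig path illustrative):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// A non-empty result means the restarted kubelet has re-registered
	// its static control-plane mirror pods.
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "tier=control-plane"})
	if err != nil {
		panic(err)
	}
	fmt.Printf("kubelet initialised: %d control-plane pods\n", len(pods.Items))
}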
	I0501 04:15:45.871222    4352 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 04:15:45.871442    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods
	I0501 04:15:45.871487    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:45.871511    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:45.871547    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:45.885917    4352 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0501 04:15:45.885917    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:45.885917    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:45 GMT
	I0501 04:15:45.885917    4352 round_trippers.go:580]     Audit-Id: 61cf7153-b41b-452a-9ba2-2f7e0629d0ad
	I0501 04:15:45.885917    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:45.885917    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:45.885917    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:45.885917    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:45.890094    4352 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1838"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 95031 chars]
	I0501 04:15:45.895872    4352 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-8w9hq" in "kube-system" namespace to be "Ready" ...
	I0501 04:15:45.895872    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:15:45.895872    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:45.895872    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:45.895872    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:45.901983    4352 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 04:15:45.902972    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:45.903044    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:45.903044    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:45 GMT
	I0501 04:15:45.903044    4352 round_trippers.go:580]     Audit-Id: bf95ab8e-2bae-4c71-a0e4-1f376042c3c0
	I0501 04:15:45.903106    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:45.903106    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:45.903106    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:45.903438    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:15:45.904479    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:45.904593    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:45.904593    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:45.904593    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:45.917439    4352 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0501 04:15:45.917439    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:45.917439    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:45.917439    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:45.917439    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:45 GMT
	I0501 04:15:45.917439    4352 round_trippers.go:580]     Audit-Id: cf6df549-6b21-4a3d-bf53-420b34db8dab
	I0501 04:15:45.917439    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:45.917439    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:45.917439    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1752","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0501 04:15:45.919235    4352 pod_ready.go:97] node "multinode-289800" hosting pod "coredns-7db6d8ff4d-8w9hq" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-289800" has status "Ready":"False"
	I0501 04:15:45.919293    4352 pod_ready.go:81] duration metric: took 23.4207ms for pod "coredns-7db6d8ff4d-8w9hq" in "kube-system" namespace to be "Ready" ...
	E0501 04:15:45.919354    4352 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-289800" hosting pod "coredns-7db6d8ff4d-8w9hq" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-289800" has status "Ready":"False"
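Note: each pod_ready.go wait above does two reads, fetching the pod and then its hosting node, and short-circuits with the "(skipping!)" error whenever the node's Ready condition is False, since no pod on a NotReady node can become Ready. A client-go sketch of that pair of checks, with an illustrative kubeconfig path and pod name:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node carries Ready=True; when it does
// not, the wait above logs the "(skipping!)" error instead of blocking.
func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

// podReady fetches the pod, refuses early if its node is NotReady, and
// otherwise reads the pod's own Ready condition.
func podReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	ok, err := nodeReady(cs, pod.Spec.NodeName)
	if err != nil {
		return false, err
	}
	if !ok {
		return false, fmt.Errorf("node %q hosting pod %q is not Ready", pod.Spec.NodeName, name)
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(podReady(cs, "kube-system", "coredns-7db6d8ff4d-8w9hq"))
}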
	I0501 04:15:45.919354    4352 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-x9zrw" in "kube-system" namespace to be "Ready" ...
	I0501 04:15:45.919517    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-x9zrw
	I0501 04:15:45.919517    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:45.919564    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:45.919564    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:45.924414    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:15:45.924876    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:45.924876    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:45.924955    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:45.925020    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:45 GMT
	I0501 04:15:45.925020    4352 round_trippers.go:580]     Audit-Id: 107f938a-1d74-461a-a7aa-f097a0122ac4
	I0501 04:15:45.925020    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:45.925020    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:45.925383    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-x9zrw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0b91b14d-bed3-4889-b193-db53daccd395","resourceVersion":"1804","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:15:45.926520    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:45.926520    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:45.926520    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:45.926520    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:45.929121    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:15:45.929472    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:45.929610    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:45.929610    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:45.929610    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:45 GMT
	I0501 04:15:45.929610    4352 round_trippers.go:580]     Audit-Id: cea7c015-3170-4a45-bd7a-803a8a130a3a
	I0501 04:15:45.929610    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:45.929610    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:45.929610    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1752","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0501 04:15:45.930356    4352 pod_ready.go:97] node "multinode-289800" hosting pod "coredns-7db6d8ff4d-x9zrw" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-289800" has status "Ready":"False"
	I0501 04:15:45.930356    4352 pod_ready.go:81] duration metric: took 11.0021ms for pod "coredns-7db6d8ff4d-x9zrw" in "kube-system" namespace to be "Ready" ...
	E0501 04:15:45.930356    4352 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-289800" hosting pod "coredns-7db6d8ff4d-x9zrw" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-289800" has status "Ready":"False"
	I0501 04:15:45.930356    4352 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-289800" in "kube-system" namespace to be "Ready" ...
	I0501 04:15:45.930356    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-289800
	I0501 04:15:45.930356    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:45.930356    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:45.930356    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:45.933612    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:15:45.933794    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:45.933872    4352 round_trippers.go:580]     Audit-Id: 413f6909-b2ac-4fc4-b424-a3ac3e45a552
	I0501 04:15:45.933872    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:45.933872    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:45.933872    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:45.933872    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:45.933872    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:45 GMT
	I0501 04:15:45.933872    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-289800","namespace":"kube-system","uid":"aaf534b6-9f4c-445d-afb9-bd225e1a77fd","resourceVersion":"1787","creationTimestamp":"2024-05-01T04:15:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.209.199:2379","kubernetes.io/config.hash":"b12e9024402f49cfac7440d6a2eaf42d","kubernetes.io/config.mirror":"b12e9024402f49cfac7440d6a2eaf42d","kubernetes.io/config.seen":"2024-05-01T04:15:36.949387188Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T04:15:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6395 chars]
	I0501 04:15:45.934516    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:45.934516    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:45.934516    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:45.934516    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:45.939340    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:15:45.939626    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:45.939626    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:45 GMT
	I0501 04:15:45.939626    4352 round_trippers.go:580]     Audit-Id: 7dac56e8-effd-4264-9d01-23d90070529b
	I0501 04:15:45.939626    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:45.939626    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:45.939626    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:45.939626    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:45.939626    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1752","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0501 04:15:45.940214    4352 pod_ready.go:97] node "multinode-289800" hosting pod "etcd-multinode-289800" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-289800" has status "Ready":"False"
	I0501 04:15:45.940214    4352 pod_ready.go:81] duration metric: took 9.858ms for pod "etcd-multinode-289800" in "kube-system" namespace to be "Ready" ...
	E0501 04:15:45.940214    4352 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-289800" hosting pod "etcd-multinode-289800" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-289800" has status "Ready":"False"
	I0501 04:15:45.940214    4352 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-289800" in "kube-system" namespace to be "Ready" ...
	I0501 04:15:45.940214    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-289800
	I0501 04:15:45.940214    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:45.940214    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:45.940214    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:45.945813    4352 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 04:15:45.945968    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:45.945968    4352 round_trippers.go:580]     Audit-Id: 39b8e89d-8bc2-4446-a5ce-9d373ce72c55
	I0501 04:15:45.946046    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:45.946118    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:45.946164    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:45.946164    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:45.946164    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:45 GMT
	I0501 04:15:45.946164    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-289800","namespace":"kube-system","uid":"0ee77673-e4b3-4fba-a855-ef6876337257","resourceVersion":"1791","creationTimestamp":"2024-05-01T04:15:42Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.28.209.199:8443","kubernetes.io/config.hash":"8b70cd8d31103a1cfca45e9856766786","kubernetes.io/config.mirror":"8b70cd8d31103a1cfca45e9856766786","kubernetes.io/config.seen":"2024-05-01T04:15:36.865099961Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T04:15:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7949 chars]
	I0501 04:15:45.946866    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:45.946866    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:45.946866    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:45.946866    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:45.958997    4352 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0501 04:15:45.958997    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:45.958997    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:45.958997    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:45.958997    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:45.958997    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:45 GMT
	I0501 04:15:45.958997    4352 round_trippers.go:580]     Audit-Id: dbc478e9-cf5a-4ec2-907b-30470a458bf0
	I0501 04:15:45.958997    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:45.959603    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1752","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0501 04:15:45.959775    4352 pod_ready.go:97] node "multinode-289800" hosting pod "kube-apiserver-multinode-289800" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-289800" has status "Ready":"False"
	I0501 04:15:45.959775    4352 pod_ready.go:81] duration metric: took 19.5604ms for pod "kube-apiserver-multinode-289800" in "kube-system" namespace to be "Ready" ...
	E0501 04:15:45.959775    4352 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-289800" hosting pod "kube-apiserver-multinode-289800" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-289800" has status "Ready":"False"
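
Note: the pod_ready:97/pod_ready:66 pair above is minikube's readiness gate in action: when the hosting node reports "Ready":"False", the pod cannot become Ready, so the per-pod wait is skipped immediately instead of burning its 4m budget. A minimal client-go sketch of that node check (helper names are illustrative, not minikube's actual functions):

    package sketch

    import (
    	"context"

    	v1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // nodeIsReady reports whether the node's NodeReady condition is True.
    func nodeIsReady(n *v1.Node) bool {
    	for _, c := range n.Status.Conditions {
    		if c.Type == v1.NodeReady {
    			return c.Status == v1.ConditionTrue
    		}
    	}
    	return false
    }

    // podNodeReady fetches the pod's hosting node and applies the gate:
    // a pod on a NotReady node is skipped, as in the log lines above.
    func podNodeReady(ctx context.Context, cs kubernetes.Interface, pod *v1.Pod) (bool, error) {
    	node, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	return nodeIsReady(node), nil
    }
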
	I0501 04:15:45.959775    4352 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-289800" in "kube-system" namespace to be "Ready" ...
	I0501 04:15:46.064844    4352 request.go:629] Waited for 104.8895ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-289800
	I0501 04:15:46.065149    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-289800
	I0501 04:15:46.065212    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:46.065212    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:46.065212    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:46.069365    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:15:46.069365    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:46.069365    4352 round_trippers.go:580]     Audit-Id: ae5979d5-574f-45ee-a467-bdef19e12134
	I0501 04:15:46.069365    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:46.069365    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:46.069365    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:46.069508    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:46.069508    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:46 GMT
	I0501 04:15:46.069849    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-289800","namespace":"kube-system","uid":"fd3e5c6f-55cb-47c8-b0bc-c9b0dbe3b318","resourceVersion":"1784","creationTimestamp":"2024-05-01T03:52:15Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a17001fd2508d58fea9b1ae465b65254","kubernetes.io/config.mirror":"a17001fd2508d58fea9b1ae465b65254","kubernetes.io/config.seen":"2024-05-01T03:52:15.688763845Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7737 chars]
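
Note: the request.go:629 "Waited ... due to client-side throttling, not priority and fairness" lines here and below are client-go's own token-bucket rate limiter kicking in, not the API server's priority-and-fairness machinery. The limiter is configured through QPS and Burst on rest.Config; a minimal sketch (the values shown are client-go's defaults, not necessarily what minikube sets):

    package sketch

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func newThrottledClient(kubeconfig string) (*kubernetes.Clientset, error) {
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		return nil, err
    	}
    	// When a burst of requests exceeds these limits, each request
    	// blocks in the limiter and client-go logs the wait, exactly as
    	// in the "Waited for ..." lines in this log.
    	cfg.QPS = 5    // steady-state requests per second (client-go default)
    	cfg.Burst = 10 // short burst allowance above QPS (client-go default)
    	return kubernetes.NewForConfig(cfg)
    }
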
	I0501 04:15:46.267422    4352 request.go:629] Waited for 196.4167ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:46.267660    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:46.267660    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:46.267660    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:46.267660    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:46.272484    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:15:46.273074    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:46.273074    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:46.273074    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:46.273074    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:46 GMT
	I0501 04:15:46.273074    4352 round_trippers.go:580]     Audit-Id: b8b332d9-d669-4b52-bbd6-3af87c591e23
	I0501 04:15:46.273074    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:46.273074    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:46.273515    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1752","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0501 04:15:46.274223    4352 pod_ready.go:97] node "multinode-289800" hosting pod "kube-controller-manager-multinode-289800" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-289800" has status "Ready":"False"
	I0501 04:15:46.274223    4352 pod_ready.go:81] duration metric: took 314.4458ms for pod "kube-controller-manager-multinode-289800" in "kube-system" namespace to be "Ready" ...
	E0501 04:15:46.274223    4352 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-289800" hosting pod "kube-controller-manager-multinode-289800" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-289800" has status "Ready":"False"
	I0501 04:15:46.274299    4352 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bp9zx" in "kube-system" namespace to be "Ready" ...
	I0501 04:15:46.471449    4352 request.go:629] Waited for 196.8989ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bp9zx
	I0501 04:15:46.471449    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bp9zx
	I0501 04:15:46.471449    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:46.471449    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:46.471449    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:46.479072    4352 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 04:15:46.479272    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:46.479272    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:46.479272    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:46 GMT
	I0501 04:15:46.479272    4352 round_trippers.go:580]     Audit-Id: ed5a791f-e3a7-4eb0-a8ee-9e7a80d296ce
	I0501 04:15:46.479272    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:46.479272    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:46.479340    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:46.479340    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bp9zx","generateName":"kube-proxy-","namespace":"kube-system","uid":"aba82e50-b8f8-40b4-b08a-6d045314d6b6","resourceVersion":"1834","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"342b26dc-6828-4478-b155-fee8821fc15e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"342b26dc-6828-4478-b155-fee8821fc15e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6039 chars]
	I0501 04:15:46.660216    4352 request.go:629] Waited for 180.1ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:46.660484    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:46.660484    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:46.660484    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:46.660484    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:46.663207    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:15:46.663207    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:46.663207    4352 round_trippers.go:580]     Audit-Id: a74b4348-a089-479d-bf0b-155a827ff806
	I0501 04:15:46.663207    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:46.663207    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:46.664235    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:46.664235    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:46.664268    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:46 GMT
	I0501 04:15:46.664594    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1752","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0501 04:15:46.664791    4352 pod_ready.go:97] node "multinode-289800" hosting pod "kube-proxy-bp9zx" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-289800" has status "Ready":"False"
	I0501 04:15:46.664791    4352 pod_ready.go:81] duration metric: took 390.4882ms for pod "kube-proxy-bp9zx" in "kube-system" namespace to be "Ready" ...
	E0501 04:15:46.664791    4352 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-289800" hosting pod "kube-proxy-bp9zx" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-289800" has status "Ready":"False"
	I0501 04:15:46.664791    4352 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-g8mbm" in "kube-system" namespace to be "Ready" ...
	I0501 04:15:46.863953    4352 request.go:629] Waited for 198.4563ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g8mbm
	I0501 04:15:46.864151    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g8mbm
	I0501 04:15:46.864151    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:46.864151    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:46.864151    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:46.868962    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:15:46.868962    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:46.868962    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:46 GMT
	I0501 04:15:46.869032    4352 round_trippers.go:580]     Audit-Id: 52e24c43-762c-483f-a27a-e54728c63ec2
	I0501 04:15:46.869032    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:46.869032    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:46.869032    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:46.869032    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:46.869262    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g8mbm","generateName":"kube-proxy-","namespace":"kube-system","uid":"ef0e1817-6682-4b8f-affa-c10021247006","resourceVersion":"1723","creationTimestamp":"2024-05-01T04:00:13Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"342b26dc-6828-4478-b155-fee8821fc15e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T04:00:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"342b26dc-6828-4478-b155-fee8821fc15e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6067 chars]
	I0501 04:15:47.066662    4352 request.go:629] Waited for 196.6777ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.209.199:8443/api/v1/nodes/multinode-289800-m03
	I0501 04:15:47.066662    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800-m03
	I0501 04:15:47.066662    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:47.066662    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:47.066662    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:47.070662    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:15:47.070662    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:47.070662    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:47 GMT
	I0501 04:15:47.070662    4352 round_trippers.go:580]     Audit-Id: 0b0340f7-6804-4106-bcb4-dcd8920a9124
	I0501 04:15:47.070662    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:47.070662    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:47.070662    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:47.070662    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:47.070662    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800-m03","uid":"851df850-b222-4fa2-aca7-3694c4d89ab5","resourceVersion":"1732","creationTimestamp":"2024-05-01T04:11:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_01T04_11_04_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-01T04:11:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4400 chars]
	I0501 04:15:47.071685    4352 pod_ready.go:97] node "multinode-289800-m03" hosting pod "kube-proxy-g8mbm" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-289800-m03" has status "Ready":"Unknown"
	I0501 04:15:47.071685    4352 pod_ready.go:81] duration metric: took 406.8916ms for pod "kube-proxy-g8mbm" in "kube-system" namespace to be "Ready" ...
	E0501 04:15:47.071685    4352 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-289800-m03" hosting pod "kube-proxy-g8mbm" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-289800-m03" has status "Ready":"Unknown"
	I0501 04:15:47.071685    4352 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-rlzp8" in "kube-system" namespace to be "Ready" ...
	I0501 04:15:47.257375    4352 request.go:629] Waited for 185.5968ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rlzp8
	I0501 04:15:47.257530    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rlzp8
	I0501 04:15:47.257530    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:47.257530    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:47.257530    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:47.261290    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:15:47.261290    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:47.261290    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:47.262276    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:47.262276    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:47 GMT
	I0501 04:15:47.262342    4352 round_trippers.go:580]     Audit-Id: b1d2e368-5383-4d88-ab26-16d4131cc9b2
	I0501 04:15:47.262342    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:47.262342    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:47.262342    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rlzp8","generateName":"kube-proxy-","namespace":"kube-system","uid":"b37d8d5d-a7cb-4848-a8a2-11d9761e08d6","resourceVersion":"596","creationTimestamp":"2024-05-01T03:55:27Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"342b26dc-6828-4478-b155-fee8821fc15e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:55:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"342b26dc-6828-4478-b155-fee8821fc15e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5841 chars]
	I0501 04:15:47.459189    4352 request.go:629] Waited for 195.5087ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.209.199:8443/api/v1/nodes/multinode-289800-m02
	I0501 04:15:47.459189    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800-m02
	I0501 04:15:47.459189    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:47.459189    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:47.459189    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:47.462997    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:15:47.462997    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:47.462997    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:47 GMT
	I0501 04:15:47.462997    4352 round_trippers.go:580]     Audit-Id: 10a094ce-fbc2-4fad-b9c0-f0a070ebd31b
	I0501 04:15:47.462997    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:47.463641    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:47.463641    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:47.463641    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:47.463753    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800-m02","uid":"9e630c9e-9cc6-42af-89de-135fca044670","resourceVersion":"1663","creationTimestamp":"2024-05-01T03:55:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_01T03_55_27_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:55:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3827 chars]
	I0501 04:15:47.464467    4352 pod_ready.go:92] pod "kube-proxy-rlzp8" in "kube-system" namespace has status "Ready":"True"
	I0501 04:15:47.464467    4352 pod_ready.go:81] duration metric: took 392.7793ms for pod "kube-proxy-rlzp8" in "kube-system" namespace to be "Ready" ...
	I0501 04:15:47.464467    4352 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-289800" in "kube-system" namespace to be "Ready" ...
	I0501 04:15:47.661121    4352 request.go:629] Waited for 196.2048ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-289800
	I0501 04:15:47.661121    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-289800
	I0501 04:15:47.661400    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:47.661400    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:47.661400    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:47.666880    4352 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 04:15:47.666880    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:47.666880    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:47.666880    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:47.666880    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:47 GMT
	I0501 04:15:47.666880    4352 round_trippers.go:580]     Audit-Id: 7788b5d8-b650-426b-8caa-c24dc9823280
	I0501 04:15:47.666880    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:47.666880    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:47.666880    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-289800","namespace":"kube-system","uid":"c7518f03-993b-432f-b742-8805dd2167a7","resourceVersion":"1772","creationTimestamp":"2024-05-01T03:52:15Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"44d7830a7c97b8c7e460c0508d02be4e","kubernetes.io/config.mirror":"44d7830a7c97b8c7e460c0508d02be4e","kubernetes.io/config.seen":"2024-05-01T03:52:15.688771544Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5449 chars]
	I0501 04:15:47.862396    4352 request.go:629] Waited for 194.4449ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:47.862506    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:47.862506    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:47.862643    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:47.862643    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:47.868378    4352 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 04:15:47.868378    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:47.869283    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:47.869283    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:47.869283    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:47.869283    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:47.869283    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:47 GMT
	I0501 04:15:47.869283    4352 round_trippers.go:580]     Audit-Id: 30ec4f63-1c56-493b-a8d4-6e266d70a896
	I0501 04:15:47.870428    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1752","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0501 04:15:47.870428    4352 pod_ready.go:97] node "multinode-289800" hosting pod "kube-scheduler-multinode-289800" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-289800" has status "Ready":"False"
	I0501 04:15:47.870953    4352 pod_ready.go:81] duration metric: took 406.4822ms for pod "kube-scheduler-multinode-289800" in "kube-system" namespace to be "Ready" ...
	E0501 04:15:47.871166    4352 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-289800" hosting pod "kube-scheduler-multinode-289800" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-289800" has status "Ready":"False"
	I0501 04:15:47.871166    4352 pod_ready.go:38] duration metric: took 1.999929s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
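
Note: the pod_ready:38 summary above closes one full pass of minikube's extra wait: for each system-critical pod it polls the pod (and, as seen above, its hosting node) until Ready or a 4m budget expires. A sketch of such a per-pod poll using apimachinery's wait helpers; the 500ms interval and the helper name are illustrative assumptions, not minikube's exact values:

    package sketch

    import (
    	"context"
    	"time"

    	v1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls until the pod's PodReady condition is True,
    // giving up after the 4m budget ("waiting up to 4m0s for pod ...").
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 4*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // tolerate transient API errors and keep polling
    			}
    			for _, c := range pod.Status.Conditions {
    				if c.Type == v1.PodReady {
    					return c.Status == v1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }
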
	I0501 04:15:47.871166    4352 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0501 04:15:47.893189    4352 command_runner.go:130] > -16
	I0501 04:15:47.893300    4352 ops.go:34] apiserver oom_adj: -16
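
Note: the oom_adj probe above confirms the restarted apiserver is shielded from the kernel's OOM killer. The kubelet assigns critical static pods a strongly negative oom_score_adj, which the legacy /proc/<pid>/oom_adj file reports as about -16 (-17 would disable OOM kills entirely). A small sketch of the same read in Go:

    package sketch

    import (
    	"fmt"
    	"os"
    	"strconv"
    	"strings"
    )

    // readOOMAdj returns the legacy OOM adjustment (-17..15) for a PID,
    // equivalent to `cat /proc/$(pgrep kube-apiserver)/oom_adj` above.
    func readOOMAdj(pid int) (int, error) {
    	b, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
    	if err != nil {
    		return 0, err
    	}
    	return strconv.Atoi(strings.TrimSpace(string(b)))
    }
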
	I0501 04:15:47.893369    4352 kubeadm.go:591] duration metric: took 13.7215764s to restartPrimaryControlPlane
	I0501 04:15:47.893369    4352 kubeadm.go:393] duration metric: took 13.7986296s to StartCluster
	I0501 04:15:47.893452    4352 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 04:15:47.893677    4352 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0501 04:15:47.896371    4352 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
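
Note: the settings.go/lock.go lines show the kubeconfig update guarded by a named lock with a 500ms retry delay and a 1m timeout (the printed fields resemble a juju/mutex-style Spec). A generic lock-file sketch with the same delay/timeout shape; this is an illustration, not minikube's actual locking package:

    package sketch

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // withFileLock runs fn while holding an O_EXCL lock file, retrying
    // every delay until timeout, then gives up with the last error.
    func withFileLock(path string, delay, timeout time.Duration, fn func() error) error {
    	lock := path + ".lock"
    	deadline := time.Now().Add(timeout)
    	for {
    		f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
    		if err == nil {
    			f.Close()
    			defer os.Remove(lock)
    			return fn()
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("acquiring %s: %w", lock, err)
    		}
    		time.Sleep(delay)
    	}
    }
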
	I0501 04:15:47.897811    4352 start.go:234] Will wait 6m0s for node &{Name: IP:172.28.209.199 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0501 04:15:47.901638    4352 out.go:177] * Verifying Kubernetes components...
	I0501 04:15:47.897811    4352 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0501 04:15:47.898391    4352 config.go:182] Loaded profile config "multinode-289800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 04:15:47.907154    4352 out.go:177] * Enabled addons: 
	I0501 04:15:47.911297    4352 addons.go:505] duration metric: took 13.4858ms for enable addons: enabled=[]
	I0501 04:15:47.918412    4352 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 04:15:48.243326    4352 ssh_runner.go:195] Run: sudo systemctl start kubelet
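
Note: the two ssh_runner commands above reload systemd units and start kubelet inside the guest VM over SSH. A minimal golang.org/x/crypto/ssh sketch of the same pair; minikube's ssh_runner layers retries and logging over this basic idea:

    package sketch

    import (
    	"fmt"

    	"golang.org/x/crypto/ssh"
    )

    // restartKubelet runs the daemon-reload/start pair over an
    // established SSH connection, one session per command.
    func restartKubelet(client *ssh.Client) error {
    	for _, cmd := range []string{
    		"sudo systemctl daemon-reload",
    		"sudo systemctl start kubelet",
    	} {
    		sess, err := client.NewSession()
    		if err != nil {
    			return err
    		}
    		out, err := sess.CombinedOutput(cmd)
    		sess.Close()
    		if err != nil {
    			return fmt.Errorf("%s: %v: %s", cmd, err, out)
    		}
    	}
    	return nil
    }
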
	I0501 04:15:48.272616    4352 node_ready.go:35] waiting up to 6m0s for node "multinode-289800" to be "Ready" ...
	I0501 04:15:48.272973    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:48.272973    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:48.272973    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:48.272973    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:48.282282    4352 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0501 04:15:48.282480    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:48.282642    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:48 GMT
	I0501 04:15:48.282642    4352 round_trippers.go:580]     Audit-Id: 4a57aa77-8ec5-419d-84f3-816369f06b0e
	I0501 04:15:48.282642    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:48.282642    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:48.282642    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:48.282642    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:48.282642    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1752","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0501 04:15:48.787265    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:48.787358    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:48.787358    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:48.787358    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:48.791855    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:15:48.791855    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:48.791855    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:48.791855    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:48.791855    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:48.791855    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:48 GMT
	I0501 04:15:48.791855    4352 round_trippers.go:580]     Audit-Id: 47f87a71-63f7-4b6f-9f6b-13471684910e
	I0501 04:15:48.791855    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:48.792573    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1752","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0501 04:15:49.274535    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:49.274629    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:49.274629    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:49.274629    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:49.280040    4352 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 04:15:49.280151    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:49.280151    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:49.280151    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:49.280151    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:49 GMT
	I0501 04:15:49.280151    4352 round_trippers.go:580]     Audit-Id: 1768b44c-f3a8-41df-8a61-60d9217fe7c4
	I0501 04:15:49.280151    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:49.280151    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:49.280423    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1752","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0501 04:15:49.788682    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:49.788866    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:49.788866    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:49.788866    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:49.793227    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:15:49.794076    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:49.794076    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:49 GMT
	I0501 04:15:49.794076    4352 round_trippers.go:580]     Audit-Id: d96f125d-25ab-4f24-8139-8af631ccb4a9
	I0501 04:15:49.794076    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:49.794076    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:49.794076    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:49.794076    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:49.794558    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1752","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0501 04:15:50.287503    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:50.287503    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:50.287503    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:50.287503    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:50.288034    4352 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0501 04:15:50.288034    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:50.288034    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:50.288034    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:50.288034    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:50.288034    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:50 GMT
	I0501 04:15:50.288034    4352 round_trippers.go:580]     Audit-Id: 846b0fee-03aa-47b1-ad08-4de236adaa1e
	I0501 04:15:50.288034    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:50.288034    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1752","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0501 04:15:50.288034    4352 node_ready.go:53] node "multinode-289800" has status "Ready":"False"
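
Note: from here on, the log is the node_ready loop re-GETting the node roughly every 500ms under a 6m budget, with each pass ending in "has status \"Ready\":\"False\"". The same loop, sketched with the poll helper pattern used earlier (interval and helper name are again illustrative):

    package sketch

    import (
    	"context"
    	"time"

    	v1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls the node until its NodeReady condition is True
    // or the 6m budget runs out ("waiting up to 6m0s for node ...").
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			n, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // keep polling through transient errors
    			}
    			// Same condition scan as the node-Ready sketch earlier.
    			for _, c := range n.Status.Conditions {
    				if c.Type == v1.NodeReady {
    					return c.Status == v1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }
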
	I0501 04:15:50.778566    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:50.778566    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:50.778566    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:50.778566    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:50.783842    4352 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 04:15:50.783842    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:50.783842    4352 round_trippers.go:580]     Audit-Id: 2f2a8f4d-8f94-4178-89e2-6d10a7e0adb7
	I0501 04:15:50.783842    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:50.783842    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:50.783842    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:50.783842    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:50.783842    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:50 GMT
	I0501 04:15:50.783842    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1752","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0501 04:15:51.281622    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:51.281703    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:51.281703    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:51.281703    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:51.286450    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:15:51.286978    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:51.286978    4352 round_trippers.go:580]     Audit-Id: 8c1d2f49-ea8b-496b-939c-772f4e5a9a02
	I0501 04:15:51.286978    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:51.286978    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:51.286978    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:51.286978    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:51.287053    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:51 GMT
	I0501 04:15:51.287194    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1752","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0501 04:15:51.782621    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:51.782621    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:51.782621    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:51.782621    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:51.786249    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:15:51.786249    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:51.786340    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:51.786340    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:51 GMT
	I0501 04:15:51.786340    4352 round_trippers.go:580]     Audit-Id: 8bbc6e7d-7882-4eea-9cef-3023dcab188b
	I0501 04:15:51.786340    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:51.786340    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:51.786340    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:51.786732    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1752","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0501 04:15:52.283159    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:52.283159    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:52.283159    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:52.283159    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:52.286738    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:15:52.286738    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:52.286738    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:52.286738    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:52.286738    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:52.286738    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:52 GMT
	I0501 04:15:52.286738    4352 round_trippers.go:580]     Audit-Id: 25fe7eae-b1d2-4a23-8d9b-f3d66437714f
	I0501 04:15:52.286738    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:52.287601    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1752","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0501 04:15:52.288076    4352 node_ready.go:53] node "multinode-289800" has status "Ready":"False"
	I0501 04:15:52.785448    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:52.785763    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:52.785763    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:52.785763    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:52.790167    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:15:52.790404    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:52.790404    4352 round_trippers.go:580]     Audit-Id: b19c18ca-d186-45a6-a886-00c5b08576f9
	I0501 04:15:52.790404    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:52.790404    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:52.790404    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:52.790404    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:52.790404    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:52 GMT
	I0501 04:15:52.790737    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1752","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0501 04:15:53.286896    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:53.287010    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:53.287010    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:53.287010    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:53.291397    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:15:53.291622    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:53.291622    4352 round_trippers.go:580]     Audit-Id: c904bec9-9029-4fb8-a21a-181d712dfc3d
	I0501 04:15:53.291622    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:53.291622    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:53.291622    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:53.291622    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:53.291622    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:53 GMT
	I0501 04:15:53.291750    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1752","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0501 04:15:53.786591    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:53.786591    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:53.786591    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:53.786591    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:53.793750    4352 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 04:15:53.793837    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:53.793863    4352 round_trippers.go:580]     Audit-Id: 5b2d1749-2b55-4728-9791-cbdb50184746
	I0501 04:15:53.793863    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:53.793863    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:53.793863    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:53.793863    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:53.793863    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:53 GMT
	I0501 04:15:53.793863    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1752","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0501 04:15:54.286294    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:54.286294    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:54.286294    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:54.286294    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:54.292917    4352 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 04:15:54.293755    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:54.293755    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:54 GMT
	I0501 04:15:54.293755    4352 round_trippers.go:580]     Audit-Id: 2a2643e0-123b-49eb-bf48-a849730c99af
	I0501 04:15:54.293810    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:54.293810    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:54.293810    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:54.293810    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:54.295596    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1752","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0501 04:15:54.297738    4352 node_ready.go:53] node "multinode-289800" has status "Ready":"False"
	I0501 04:15:54.784587    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:54.784587    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:54.784587    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:54.784587    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:54.788391    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:15:54.789473    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:54.789528    4352 round_trippers.go:580]     Audit-Id: 31896e28-ad42-44d7-bab5-1eea08d5c50f
	I0501 04:15:54.789528    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:54.789528    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:54.789528    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:54.789528    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:54.789528    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:54 GMT
	I0501 04:15:54.789772    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1752","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0501 04:15:55.286356    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:55.286356    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:55.286356    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:55.286356    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:55.290854    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:15:55.290854    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:55.290854    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:55.290854    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:55.291225    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:55.291225    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:55.291225    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:55 GMT
	I0501 04:15:55.291225    4352 round_trippers.go:580]     Audit-Id: a8a942bb-061f-4162-b25e-cd3d146f4a1f
	I0501 04:15:55.291397    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1752","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0501 04:15:55.778111    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:55.778111    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:55.778111    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:55.778111    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:55.784090    4352 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 04:15:55.785102    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:55.785102    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:55.785102    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:55.785102    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:55.785102    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:55 GMT
	I0501 04:15:55.785102    4352 round_trippers.go:580]     Audit-Id: dcc40508-7d25-4230-b7ad-5c8ff24cec7e
	I0501 04:15:55.785166    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:55.785590    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:15:56.280270    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:56.280270    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:56.280270    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:56.280270    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:56.284475    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:15:56.284475    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:56.284475    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:56 GMT
	I0501 04:15:56.284475    4352 round_trippers.go:580]     Audit-Id: ef544fa9-9550-4e83-9281-5c270f5af74e
	I0501 04:15:56.284566    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:56.284566    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:56.284566    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:56.284566    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:56.284636    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:15:56.781859    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:56.781859    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:56.781859    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:56.781859    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:56.786560    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:15:56.786560    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:56.786560    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:56 GMT
	I0501 04:15:56.786560    4352 round_trippers.go:580]     Audit-Id: 80c38c85-3fad-4ae4-a83f-93fd20becab9
	I0501 04:15:56.786560    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:56.786560    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:56.786693    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:56.786693    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:56.787378    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:15:56.788095    4352 node_ready.go:53] node "multinode-289800" has status "Ready":"False"
	I0501 04:15:57.284544    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:57.284626    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:57.284626    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:57.284626    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:57.288535    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:15:57.289557    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:57.289557    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:57.289635    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:57.289635    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:57.289635    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:57 GMT
	I0501 04:15:57.289635    4352 round_trippers.go:580]     Audit-Id: 152017fc-7a4d-4e6b-b96a-ae3816222518
	I0501 04:15:57.289635    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:57.290083    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:15:57.785247    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:57.785247    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:57.785247    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:57.785247    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:57.788826    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:15:57.789259    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:57.789259    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:57.789259    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:57.789259    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:57 GMT
	I0501 04:15:57.789259    4352 round_trippers.go:580]     Audit-Id: 8ef1c498-eaf2-407d-b468-651d8f957ced
	I0501 04:15:57.789259    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:57.789259    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:57.789448    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:15:58.284301    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:58.284432    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:58.284432    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:58.284432    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:58.287917    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:15:58.287917    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:58.288947    4352 round_trippers.go:580]     Audit-Id: 9ee0f450-23ad-416c-b0cb-12b23ea707af
	I0501 04:15:58.288947    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:58.289014    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:58.289014    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:58.289014    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:58.289014    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:58 GMT
	I0501 04:15:58.289365    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:15:58.782461    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:58.782461    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:58.782461    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:58.782461    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:58.785850    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:15:58.786943    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:58.786978    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:58.786978    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:58 GMT
	I0501 04:15:58.786978    4352 round_trippers.go:580]     Audit-Id: 2361e6cd-3bbf-4c3a-bcac-644db6710a62
	I0501 04:15:58.786978    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:58.786978    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:58.786978    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:58.787370    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:15:59.280238    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:59.280298    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:59.280298    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:59.280298    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:59.283833    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:15:59.283833    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:59.283833    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:59 GMT
	I0501 04:15:59.283833    4352 round_trippers.go:580]     Audit-Id: 409797c8-6ed1-4a53-b286-69eaa7982225
	I0501 04:15:59.283833    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:59.283833    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:59.283833    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:59.283833    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:59.284879    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:15:59.285630    4352 node_ready.go:53] node "multinode-289800" has status "Ready":"False"
	I0501 04:15:59.781394    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:15:59.781394    4352 round_trippers.go:469] Request Headers:
	I0501 04:15:59.781394    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:15:59.781394    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:15:59.787017    4352 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 04:15:59.787017    4352 round_trippers.go:577] Response Headers:
	I0501 04:15:59.787208    4352 round_trippers.go:580]     Audit-Id: 58129097-ccd1-4628-829d-c2723e9a96ef
	I0501 04:15:59.787208    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:15:59.787208    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:15:59.787208    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:15:59.787208    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:15:59.787208    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:15:59 GMT
	I0501 04:15:59.787512    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:00.282421    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:00.282421    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:00.282421    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:00.282421    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:00.286686    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:00.286686    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:00.286686    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:00.286686    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:00 GMT
	I0501 04:16:00.286686    4352 round_trippers.go:580]     Audit-Id: bc24d01d-4737-4e59-99f1-811d1d5a37b0
	I0501 04:16:00.286686    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:00.286686    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:00.286686    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:00.286686    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:00.781873    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:00.781873    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:00.782100    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:00.782100    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:00.789095    4352 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 04:16:00.789095    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:00.789095    4352 round_trippers.go:580]     Audit-Id: 580acb65-7c60-4bb9-b08a-2e2c0a282e83
	I0501 04:16:00.789095    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:00.789095    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:00.789095    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:00.789095    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:00.789095    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:00 GMT
	I0501 04:16:00.789095    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:01.281789    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:01.281893    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:01.281893    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:01.281893    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:01.286265    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:01.286265    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:01.286265    4352 round_trippers.go:580]     Audit-Id: 127a026d-aed3-4c32-8fe1-82ffe2f6142f
	I0501 04:16:01.286265    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:01.286265    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:01.286650    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:01.286650    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:01.286650    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:01 GMT
	I0501 04:16:01.286754    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:01.287299    4352 node_ready.go:53] node "multinode-289800" has status "Ready":"False"
	I0501 04:16:01.777714    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:01.777798    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:01.777798    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:01.777798    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:01.781555    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:01.781854    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:01.781854    4352 round_trippers.go:580]     Audit-Id: f987c4c1-7de3-441c-858d-f0e0cd58f371
	I0501 04:16:01.781854    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:01.781854    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:01.781854    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:01.781854    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:01.781854    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:01 GMT
	I0501 04:16:01.782654    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:02.276440    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:02.276440    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:02.276440    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:02.276440    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:02.281296    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:02.281296    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:02.281296    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:02.281296    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:02.281777    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:02.281777    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:02 GMT
	I0501 04:16:02.281777    4352 round_trippers.go:580]     Audit-Id: d8112143-73ac-4f37-bdda-98e47db0572c
	I0501 04:16:02.281777    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:02.282345    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:02.774933    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:02.774933    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:02.774933    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:02.774933    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:02.778515    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:02.779403    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:02.779403    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:02.779403    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:02.779403    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:02.779403    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:02.779403    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:02 GMT
	I0501 04:16:02.779403    4352 round_trippers.go:580]     Audit-Id: b706864e-0b0f-45b3-b488-504163fe46bc
	I0501 04:16:02.779501    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:03.274107    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:03.274107    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:03.274341    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:03.274341    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:03.277851    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:03.277851    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:03.277851    4352 round_trippers.go:580]     Audit-Id: 18bf9435-3181-4b45-b60c-2432de6e8bbe
	I0501 04:16:03.277851    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:03.277851    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:03.277851    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:03.277851    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:03.277851    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:03 GMT
	I0501 04:16:03.278454    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:03.786776    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:03.786776    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:03.786776    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:03.786776    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:03.793984    4352 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 04:16:03.793984    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:03.793984    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:03 GMT
	I0501 04:16:03.794266    4352 round_trippers.go:580]     Audit-Id: f1eddf3a-ad39-41d1-b323-72439e875600
	I0501 04:16:03.794266    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:03.794266    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:03.794266    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:03.794266    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:03.794629    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:03.795267    4352 node_ready.go:53] node "multinode-289800" has status "Ready":"False"
	I0501 04:16:04.276874    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:04.276961    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:04.276961    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:04.276961    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:04.280505    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:04.280505    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:04.281430    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:04.281430    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:04 GMT
	I0501 04:16:04.281430    4352 round_trippers.go:580]     Audit-Id: 412a54b9-cf90-4b22-bc99-dbeeace3b317
	I0501 04:16:04.281430    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:04.281430    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:04.281430    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:04.281660    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:04.774293    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:04.774293    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:04.774293    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:04.774293    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:04.778996    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:04.778996    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:04.778996    4352 round_trippers.go:580]     Audit-Id: 1cd2da53-f2e5-4851-9a6a-f918b798f49d
	I0501 04:16:04.778996    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:04.778996    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:04.779394    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:04.779394    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:04.779394    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:04 GMT
	I0501 04:16:04.779477    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:05.273712    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:05.273712    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:05.273712    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:05.273712    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:05.278375    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:05.279224    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:05.279224    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:05.279224    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:05 GMT
	I0501 04:16:05.279224    4352 round_trippers.go:580]     Audit-Id: 3b7ff919-c9c0-4f68-acf5-c0d8f117a3a7
	I0501 04:16:05.279224    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:05.279224    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:05.279224    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:05.279479    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:05.787596    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:05.787596    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:05.787596    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:05.787596    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:05.791175    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:05.791876    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:05.791876    4352 round_trippers.go:580]     Audit-Id: 55509513-7a21-4389-99fb-4db955af6859
	I0501 04:16:05.791876    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:05.791876    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:05.791876    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:05.791876    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:05.791876    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:05 GMT
	I0501 04:16:05.792286    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:06.276021    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:06.276196    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:06.276196    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:06.276196    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:06.280606    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:06.281228    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:06.281228    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:06.281228    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:06.281228    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:06.281295    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:06 GMT
	I0501 04:16:06.281295    4352 round_trippers.go:580]     Audit-Id: 675ee840-2ea2-44fe-8a03-b42045e1f0e7
	I0501 04:16:06.281295    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:06.281586    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:06.282124    4352 node_ready.go:53] node "multinode-289800" has status "Ready":"False"
	I0501 04:16:06.779197    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:06.779197    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:06.779197    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:06.779197    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:06.784019    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:06.784019    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:06.784185    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:06 GMT
	I0501 04:16:06.784185    4352 round_trippers.go:580]     Audit-Id: 81c075a2-638c-4696-b9ba-156b8f6b071f
	I0501 04:16:06.784185    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:06.784185    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:06.784185    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:06.784185    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:06.784625    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:07.282365    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:07.282365    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:07.282365    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:07.282365    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:07.286287    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:07.286287    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:07.286287    4352 round_trippers.go:580]     Audit-Id: 527346f1-1644-4f27-a695-c38f3c37a301
	I0501 04:16:07.286287    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:07.286287    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:07.286287    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:07.286287    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:07.286287    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:07 GMT
	I0501 04:16:07.286287    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:07.781898    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:07.781898    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:07.782019    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:07.782019    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:07.786849    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:07.786849    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:07.786988    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:07.786988    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:07.786988    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:07 GMT
	I0501 04:16:07.786988    4352 round_trippers.go:580]     Audit-Id: 053c4546-2924-4540-ace3-7c91d714b209
	I0501 04:16:07.786988    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:07.786988    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:07.787251    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:08.279999    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:08.280226    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:08.280226    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:08.280226    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:08.284744    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:08.284970    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:08.284970    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:08.284970    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:08.284970    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:08.284970    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:08.284970    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:08 GMT
	I0501 04:16:08.284970    4352 round_trippers.go:580]     Audit-Id: 9c101801-8335-4058-ae95-40cff99cbd5d
	I0501 04:16:08.285228    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:08.285826    4352 node_ready.go:53] node "multinode-289800" has status "Ready":"False"
	I0501 04:16:08.780721    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:08.780721    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:08.780721    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:08.780721    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:08.784959    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:08.785185    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:08.785185    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:08.785185    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:08.785185    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:08 GMT
	I0501 04:16:08.785185    4352 round_trippers.go:580]     Audit-Id: 41b136d0-ba63-48f6-9150-3594e00186eb
	I0501 04:16:08.785185    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:08.785185    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:08.785391    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:09.282934    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:09.282999    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:09.283056    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:09.283056    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:09.291637    4352 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0501 04:16:09.291637    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:09.291637    4352 round_trippers.go:580]     Audit-Id: 735d921c-c3c5-48c3-b57f-cec1a64b7da1
	I0501 04:16:09.291637    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:09.291637    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:09.291637    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:09.291637    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:09.291637    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:09 GMT
	I0501 04:16:09.292402    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:09.784415    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:09.784482    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:09.784482    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:09.784543    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:09.790985    4352 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 04:16:09.791443    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:09.791443    4352 round_trippers.go:580]     Audit-Id: a1986b54-83fe-4122-bcf5-ed313aee165f
	I0501 04:16:09.791443    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:09.791443    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:09.791443    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:09.791443    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:09.791443    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:09 GMT
	I0501 04:16:09.791443    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:10.282583    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:10.282657    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:10.282657    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:10.282657    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:10.286598    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:10.286876    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:10.286876    4352 round_trippers.go:580]     Audit-Id: ecb65f41-7089-4d7b-bd9b-adfdde338412
	I0501 04:16:10.286876    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:10.286876    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:10.286876    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:10.286876    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:10.286876    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:10 GMT
	I0501 04:16:10.287189    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:10.287741    4352 node_ready.go:53] node "multinode-289800" has status "Ready":"False"
	I0501 04:16:10.779992    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:10.779992    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:10.779992    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:10.779992    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:10.784609    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:10.784609    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:10.784609    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:10.784876    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:10.784876    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:10.784876    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:10 GMT
	I0501 04:16:10.784876    4352 round_trippers.go:580]     Audit-Id: 5e3b983d-4933-4159-a8ba-8c2f248e4d84
	I0501 04:16:10.784876    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:10.785314    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:11.282786    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:11.282786    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:11.282786    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:11.282786    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:11.286414    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:11.287118    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:11.287118    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:11.287118    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:11.287118    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:11 GMT
	I0501 04:16:11.287118    4352 round_trippers.go:580]     Audit-Id: 0ab35d6a-8bdd-4a51-b2cc-da1f70758e59
	I0501 04:16:11.287118    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:11.287118    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:11.287331    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:11.781562    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:11.781562    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:11.781562    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:11.781562    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:11.785310    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:11.785310    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:11.785310    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:11.785310    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:11.785310    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:11.785310    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:11 GMT
	I0501 04:16:11.785310    4352 round_trippers.go:580]     Audit-Id: b680cdcf-00e1-4931-bc8f-fd19ece8fce2
	I0501 04:16:11.785310    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:11.786869    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:12.279100    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:12.279100    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:12.279100    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:12.279100    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:12.284605    4352 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 04:16:12.284605    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:12.284605    4352 round_trippers.go:580]     Audit-Id: 0bc560a5-5e5b-4137-899f-f2a011034f8f
	I0501 04:16:12.284605    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:12.284605    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:12.284605    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:12.285568    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:12.285633    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:12 GMT
	I0501 04:16:12.285808    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:12.777628    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:12.777628    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:12.777628    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:12.777628    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:12.781334    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:12.781334    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:12.781334    4352 round_trippers.go:580]     Audit-Id: 10ce18f9-4d86-4b6c-a244-e491bf165a3b
	I0501 04:16:12.781334    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:12.781896    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:12.781896    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:12.781896    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:12.781896    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:12 GMT
	I0501 04:16:12.782180    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:12.783217    4352 node_ready.go:53] node "multinode-289800" has status "Ready":"False"
	I0501 04:16:13.277312    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:13.277312    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:13.277312    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:13.277312    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:13.281031    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:13.281723    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:13.281723    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:13.281723    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:13.281723    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:13 GMT
	I0501 04:16:13.281723    4352 round_trippers.go:580]     Audit-Id: 6ed090f9-f02b-46b1-8ced-7eb682fa8f03
	I0501 04:16:13.281723    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:13.281723    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:13.281991    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:13.778150    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:13.778150    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:13.778150    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:13.778150    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:13.781820    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:13.781820    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:13.781820    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:13 GMT
	I0501 04:16:13.781820    4352 round_trippers.go:580]     Audit-Id: 226d9499-ed9a-49f4-95d1-c4264b7da82b
	I0501 04:16:13.781820    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:13.781820    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:13.782629    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:13.782629    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:13.782669    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:14.275454    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:14.275454    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:14.275454    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:14.275454    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:14.280017    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:14.280017    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:14.280017    4352 round_trippers.go:580]     Audit-Id: f10613b2-0dff-4f93-8436-c0ffdd5ab9f2
	I0501 04:16:14.280017    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:14.280017    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:14.280017    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:14.280017    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:14.280017    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:14 GMT
	I0501 04:16:14.281082    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:14.779083    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:14.779083    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:14.779083    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:14.779083    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:14.782926    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:14.783782    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:14.783782    4352 round_trippers.go:580]     Audit-Id: a5bfde4f-088d-409f-89dc-4199d535b4ee
	I0501 04:16:14.783782    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:14.783782    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:14.783782    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:14.783782    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:14.783782    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:14 GMT
	I0501 04:16:14.784653    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:14.784798    4352 node_ready.go:53] node "multinode-289800" has status "Ready":"False"
	I0501 04:16:15.280754    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:15.280994    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:15.281067    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:15.281067    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:15.284922    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:15.285331    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:15.285331    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:15 GMT
	I0501 04:16:15.285331    4352 round_trippers.go:580]     Audit-Id: 514cdae5-f5a2-4a39-80a6-e9c01a302d0c
	I0501 04:16:15.285331    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:15.285331    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:15.285331    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:15.285331    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:15.285563    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:15.779647    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:15.779647    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:15.779647    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:15.779647    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:15.782044    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:15.782883    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:15.782883    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:15.782883    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:15 GMT
	I0501 04:16:15.782883    4352 round_trippers.go:580]     Audit-Id: 293525a7-880e-449d-b825-0626fc8e39ac
	I0501 04:16:15.782883    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:15.782883    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:15.782883    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:15.783202    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:16.283269    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:16.283348    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:16.283348    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:16.283348    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:16.287675    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:16.288157    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:16.288157    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:16.288157    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:16.288157    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:16 GMT
	I0501 04:16:16.288157    4352 round_trippers.go:580]     Audit-Id: 81a7a62c-92a3-490d-823a-f088fd1db0ae
	I0501 04:16:16.288157    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:16.288157    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:16.288332    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1881","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5582 chars]
	I0501 04:16:16.787615    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:16.787844    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:16.787844    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:16.787844    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:16.792509    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:16.792509    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:16.792581    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:16.792581    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:16.792581    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:16 GMT
	I0501 04:16:16.792581    4352 round_trippers.go:580]     Audit-Id: a8df7ba9-9350-4087-9759-fb183f00b90d
	I0501 04:16:16.792581    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:16.792581    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:16.793361    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1931","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5359 chars]
	I0501 04:16:16.793939    4352 node_ready.go:49] node "multinode-289800" has status "Ready":"True"
	I0501 04:16:16.794032    4352 node_ready.go:38] duration metric: took 28.5209417s for node "multinode-289800" to be "Ready" ...
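What the loop above is doing: polling GET /api/v1/nodes/multinode-289800 roughly every 500 ms until the Node's Ready condition flips to True (visible here as resourceVersion 1881 -> 1931 and "Ready":"True" at 04:16:16, 28.5s after the wait began). Below is a minimal client-go sketch of that polling pattern, assuming k8s.io/client-go and the node name from this run; the helper name waitNodeReady and the 6-minute budget are illustrative assumptions, and this is not minikube's actual node_ready.go implementation.

    // Illustrative reconstruction of the readiness poll seen in the log above.
    // Assumes k8s.io/client-go; NOT minikube's node_ready.go, just the same
    // GET-and-check loop at the same ~500 ms cadence.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
    	ticker := time.NewTicker(500 * time.Millisecond) // matches the ~500 ms poll interval in the log
    	defer ticker.Stop()
    	for {
    		select {
    		case <-ctx.Done():
    			return ctx.Err()
    		case <-ticker.C:
    			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return err
    			}
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					return nil // node has status "Ready":"True", as logged at 04:16:16
    				}
    			}
    		}
    	}
    }

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute) // assumed budget
    	defer cancel()
    	if err := waitNodeReady(ctx, cs, "multinode-289800"); err != nil {
    		panic(err)
    	}
    	fmt.Println("node is Ready")
    }

Each iteration of the real loop corresponds to one GET/response-headers/response-body triplet above; the "has status \"Ready\":\"False\"" lines are the intermediate failed checks.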
	I0501 04:16:16.794032    4352 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 04:16:16.794182    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods
	I0501 04:16:16.794182    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:16.794259    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:16.794259    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:16.799522    4352 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 04:16:16.799522    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:16.799522    4352 round_trippers.go:580]     Audit-Id: 050004ab-c4e9-41e4-883e-0bd4c079851f
	I0501 04:16:16.799522    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:16.799522    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:16.799699    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:16.799699    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:16.799699    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:16 GMT
	I0501 04:16:16.801485    4352 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1931"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 94470 chars]
	I0501 04:16:16.806197    4352 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-8w9hq" in "kube-system" namespace to be "Ready" ...
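[editor's note] From here the log repeats one step roughly every 500ms: GET the coredns pod, GET its node, check readiness, sleep, retry, up to the 6m0s budget. A sketch of that per-pod wait, assuming client-go's polling helper; the function name, package, interval, and timeout mirror the log but are illustrative, not minikube's implementation.

	// waitPodReady polls until the pod's Ready condition is True or the
	// timeout elapses - a sketch of the loop pod_ready.go:78 reports above.
	package readiness

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	func waitPodReady(ctx context.Context, c kubernetes.Interface, ns, name string) error {
		// 500ms cadence and 6m budget match the timestamps in the log.
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, err // stop on API errors; a real waiter might retry
				}
				for _, cond := range pod.Status.Conditions {
					if cond.Type == corev1.PodReady {
						return cond.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil // Ready condition not reported yet
			})
	}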
	I0501 04:16:16.806390    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:16.806390    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:16.806390    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:16.806390    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:16.809394    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:16.809394    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:16.809394    4352 round_trippers.go:580]     Audit-Id: d260c6af-6d95-4e48-9d52-91351dfb04be
	I0501 04:16:16.809394    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:16.809394    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:16.809394    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:16.809394    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:16.809394    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:16 GMT
	I0501 04:16:16.809836    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:16.810074    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:16.810074    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:16.810074    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:16.810074    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:16.812673    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:16.812673    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:16.812673    4352 round_trippers.go:580]     Audit-Id: c4b374c2-cf45-43e5-9ef0-306e176eb3a7
	I0501 04:16:16.812673    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:16.812673    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:16.812673    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:16.812673    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:16.812673    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:16 GMT
	I0501 04:16:16.813955    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1931","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5359 chars]
	I0501 04:16:17.321570    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:17.321570    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:17.321570    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:17.321570    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:17.326033    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:17.326033    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:17.326033    4352 round_trippers.go:580]     Audit-Id: e96b80cd-3246-46b2-a271-bc2d14e84fd0
	I0501 04:16:17.326033    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:17.326033    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:17.326033    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:17.326033    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:17.326209    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:17 GMT
	I0501 04:16:17.326483    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:17.327085    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:17.327085    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:17.327085    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:17.327085    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:17.329700    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:17.329700    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:17.329700    4352 round_trippers.go:580]     Audit-Id: 81cab6cc-4d87-423d-a191-a4ca9c77fc54
	I0501 04:16:17.329700    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:17.329700    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:17.329700    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:17.329700    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:17.329700    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:17 GMT
	I0501 04:16:17.330792    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1931","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5359 chars]
	I0501 04:16:17.821612    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:17.821612    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:17.821612    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:17.821612    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:17.826237    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:17.826237    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:17.826237    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:17.826237    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:17 GMT
	I0501 04:16:17.826237    4352 round_trippers.go:580]     Audit-Id: 7f10bf3d-33e1-4f53-8d5e-33711ac1d613
	I0501 04:16:17.826237    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:17.826237    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:17.826237    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:17.827351    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:17.828614    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:17.828738    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:17.828738    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:17.828738    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:17.830982    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:17.830982    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:17.830982    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:17 GMT
	I0501 04:16:17.830982    4352 round_trippers.go:580]     Audit-Id: aadd4245-2a21-4202-8516-976397b3fb2d
	I0501 04:16:17.830982    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:17.830982    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:17.830982    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:17.830982    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:17.832082    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1931","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5359 chars]
	I0501 04:16:18.319020    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:18.319260    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:18.319260    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:18.319260    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:18.323644    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:18.324106    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:18.324106    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:18.324106    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:18.324106    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:18.324106    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:18.324106    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:18 GMT
	I0501 04:16:18.324106    4352 round_trippers.go:580]     Audit-Id: 75971dd8-801d-459d-88d7-dd2aeb442a01
	I0501 04:16:18.324624    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:18.325682    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:18.325682    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:18.325682    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:18.325682    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:18.327924    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:18.327924    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:18.327924    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:18.327924    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:18 GMT
	I0501 04:16:18.327924    4352 round_trippers.go:580]     Audit-Id: 2b56d2a4-2c48-447f-9704-3c924da491b7
	I0501 04:16:18.327924    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:18.327924    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:18.328396    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:18.328672    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1931","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5359 chars]
	I0501 04:16:18.818530    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:18.818638    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:18.818638    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:18.818638    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:18.823046    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:18.823253    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:18.823253    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:18 GMT
	I0501 04:16:18.823253    4352 round_trippers.go:580]     Audit-Id: 3fd87977-3509-4dda-acbf-7ae284dd4856
	I0501 04:16:18.823253    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:18.823253    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:18.823253    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:18.823253    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:18.823426    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:18.824692    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:18.824780    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:18.824780    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:18.824904    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:18.827496    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:18.827496    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:18.827496    4352 round_trippers.go:580]     Audit-Id: dd40c323-2562-48e8-8197-4664adbd4df8
	I0501 04:16:18.827842    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:18.827842    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:18.827842    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:18.827842    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:18.827842    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:18 GMT
	I0501 04:16:18.828056    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1931","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5359 chars]
	I0501 04:16:18.828563    4352 pod_ready.go:102] pod "coredns-7db6d8ff4d-8w9hq" in "kube-system" namespace has status "Ready":"False"
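[editor's note] pod_ready.go:102 above reports only the boolean `"Ready":"False"`; the condition's Reason and Message fields in the same Pod object say why. A small helper to surface them, illustrative and hypothetical rather than taken from minikube:

	// readyDetail extracts the status, reason, and message of a pod's Ready
	// condition, or empty strings if the kubelet has not set it yet.
	package readiness

	import corev1 "k8s.io/api/core/v1"

	func readyDetail(pod *corev1.Pod) (status, reason, message string) {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return string(c.Status), c.Reason, c.Message
			}
		}
		return "", "", ""
	}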
	I0501 04:16:19.317101    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:19.317101    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:19.317101    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:19.317101    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:19.321854    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:19.321854    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:19.321854    4352 round_trippers.go:580]     Audit-Id: 7e27b50d-a553-4d82-b13e-1d7740c9eed7
	I0501 04:16:19.321854    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:19.321854    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:19.321854    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:19.321954    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:19.321954    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:19 GMT
	I0501 04:16:19.322411    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:19.323130    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:19.323130    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:19.323130    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:19.323130    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:19.325696    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:19.326234    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:19.326234    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:19.326234    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:19.326234    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:19.326234    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:19 GMT
	I0501 04:16:19.326234    4352 round_trippers.go:580]     Audit-Id: c87f3c5c-d4d9-466b-80a2-7d8b78d44be6
	I0501 04:16:19.326310    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:19.326517    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1931","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5359 chars]
	I0501 04:16:19.817915    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:19.817915    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:19.818027    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:19.818027    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:19.822406    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:19.822867    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:19.822867    4352 round_trippers.go:580]     Audit-Id: 62b3dbf4-22ca-4aa7-b136-2dcdf632f3ac
	I0501 04:16:19.822867    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:19.822867    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:19.822867    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:19.822867    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:19.822867    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:19 GMT
	I0501 04:16:19.823142    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:19.823909    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:19.823970    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:19.823970    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:19.823970    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:19.827726    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:19.827726    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:19.827808    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:19.827808    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:19 GMT
	I0501 04:16:19.827808    4352 round_trippers.go:580]     Audit-Id: 1b11db93-1fae-44c9-913c-3571294123e7
	I0501 04:16:19.827808    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:19.827808    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:19.827808    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:19.828062    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1931","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5359 chars]
	I0501 04:16:20.316534    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:20.316599    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:20.316599    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:20.316599    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:20.321644    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:20.321644    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:20.321644    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:20.321644    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:20 GMT
	I0501 04:16:20.321644    4352 round_trippers.go:580]     Audit-Id: d0b99343-4d7d-48c1-8987-c23e631244d1
	I0501 04:16:20.321644    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:20.321644    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:20.321644    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:20.321644    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:20.322670    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:20.322774    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:20.322774    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:20.322858    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:20.324540    4352 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0501 04:16:20.324540    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:20.324540    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:20 GMT
	I0501 04:16:20.324540    4352 round_trippers.go:580]     Audit-Id: 6d3fb35a-de54-4e1f-9490-c6e0afd9174c
	I0501 04:16:20.324540    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:20.324540    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:20.324540    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:20.324540    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:20.325987    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1931","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5359 chars]
	I0501 04:16:20.820332    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:20.820423    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:20.820493    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:20.820493    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:20.825309    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:20.825562    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:20.825562    4352 round_trippers.go:580]     Audit-Id: 7236753d-bc54-43e1-83d6-9db984fee0b8
	I0501 04:16:20.825562    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:20.825562    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:20.825562    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:20.825562    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:20.825562    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:20 GMT
	I0501 04:16:20.826234    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:20.827333    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:20.827333    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:20.827333    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:20.827392    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:20.832053    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:20.832221    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:20.832221    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:20 GMT
	I0501 04:16:20.832282    4352 round_trippers.go:580]     Audit-Id: 360afb6c-cdbb-41cb-8d3f-68a66a5a75c5
	I0501 04:16:20.832282    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:20.832308    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:20.832409    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:20.832409    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:20.833010    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:20.833503    4352 pod_ready.go:102] pod "coredns-7db6d8ff4d-8w9hq" in "kube-system" namespace has status "Ready":"False"
	I0501 04:16:21.309470    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:21.309470    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:21.309594    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:21.309594    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:21.314012    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:21.314012    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:21.314012    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:21.314012    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:21.314012    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:21 GMT
	I0501 04:16:21.314012    4352 round_trippers.go:580]     Audit-Id: bc524f9c-a27a-4d3a-bde8-9beaa844ba38
	I0501 04:16:21.314012    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:21.314012    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:21.315004    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:21.315761    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:21.315761    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:21.315761    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:21.315761    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:21.318991    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:21.318991    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:21.318991    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:21.318991    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:21.318991    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:21 GMT
	I0501 04:16:21.318991    4352 round_trippers.go:580]     Audit-Id: 91a4bbf4-5e2a-49f6-838b-46a40d8c7bfc
	I0501 04:16:21.318991    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:21.318991    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:21.319349    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:21.818554    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:21.818645    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:21.818645    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:21.818747    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:21.822201    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:21.822201    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:21.822201    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:21 GMT
	I0501 04:16:21.822201    4352 round_trippers.go:580]     Audit-Id: 2b50f61b-33da-43a2-b888-a27e780c5ba7
	I0501 04:16:21.822201    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:21.822201    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:21.822201    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:21.822201    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:21.823601    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:21.825197    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:21.825311    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:21.825311    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:21.825311    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:21.829579    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:21.829729    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:21.829729    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:21.829729    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:21.829729    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:21.829729    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:21 GMT
	I0501 04:16:21.829729    4352 round_trippers.go:580]     Audit-Id: 9684aa08-a870-415d-9ae6-119943b415f9
	I0501 04:16:21.829729    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:21.830086    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:22.316747    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:22.316830    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:22.316830    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:22.316830    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:22.321248    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:22.321248    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:22.322032    4352 round_trippers.go:580]     Audit-Id: bd531d37-72d7-44a5-bd85-532f059df449
	I0501 04:16:22.322032    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:22.322032    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:22.322032    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:22.322032    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:22.322032    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:22 GMT
	I0501 04:16:22.322269    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:22.323089    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:22.323089    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:22.323089    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:22.323089    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:22.326457    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:22.326457    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:22.326457    4352 round_trippers.go:580]     Audit-Id: 989e17c6-1130-4bf2-a662-763c489be260
	I0501 04:16:22.326457    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:22.326457    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:22.326457    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:22.326942    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:22.326942    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:22 GMT
	I0501 04:16:22.327235    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:22.817837    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:22.817837    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:22.817837    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:22.817837    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:22.822494    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:22.822494    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:22.823021    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:22.823021    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:22.823021    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:22 GMT
	I0501 04:16:22.823021    4352 round_trippers.go:580]     Audit-Id: f05c0b08-a050-4049-a246-4cfe172b7f57
	I0501 04:16:22.823021    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:22.823021    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:22.823261    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:22.823923    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:22.824009    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:22.824044    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:22.824044    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:22.830075    4352 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 04:16:22.830075    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:22.830075    4352 round_trippers.go:580]     Audit-Id: e2984fb0-6f1a-4dea-8864-65a0d8e7387f
	I0501 04:16:22.830075    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:22.830075    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:22.830075    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:22.830075    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:22.830075    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:22 GMT
	I0501 04:16:22.830904    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:23.316589    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:23.316589    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:23.316589    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:23.316589    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:23.321200    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:23.321200    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:23.321200    4352 round_trippers.go:580]     Audit-Id: e31ed3e4-6009-484f-8cb0-428db34a53b7
	I0501 04:16:23.321200    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:23.321200    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:23.321200    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:23.321200    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:23.321200    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:23 GMT
	I0501 04:16:23.321200    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:23.321200    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:23.321200    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:23.322235    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:23.322235    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:23.324292    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:23.324917    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:23.324917    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:23.324917    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:23.324917    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:23 GMT
	I0501 04:16:23.324917    4352 round_trippers.go:580]     Audit-Id: 283d4092-6d88-4fa0-be50-453173e356b9
	I0501 04:16:23.324917    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:23.324917    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:23.324917    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:23.325689    4352 pod_ready.go:102] pod "coredns-7db6d8ff4d-8w9hq" in "kube-system" namespace has status "Ready":"False"
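[editor's note] The cycle above is minikube's readiness gate at work: each iteration issues a GET for the coredns pod and its node, then pod_ready.go reports the pod's "Ready" condition, here still "False". As a minimal sketch (not minikube's actual pod_ready.go, and the package and helper names are hypothetical), the same verdict can be derived with client-go by fetching the pod and inspecting its PodReady condition:

package readiness

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// isPodReady is a hypothetical helper: it fetches the pod once and
// returns true only when its PodReady condition is ConditionTrue,
// mirroring the "Ready":"False" check logged above.
func isPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}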
	I0501 04:16:23.817186    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:23.817186    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:23.817186    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:23.817186    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:23.820839    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:23.820839    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:23.820839    4352 round_trippers.go:580]     Audit-Id: 0901036d-adcd-42a5-be63-926fac058393
	I0501 04:16:23.820839    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:23.820839    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:23.820839    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:23.820839    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:23.821820    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:23 GMT
	I0501 04:16:23.822052    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:23.822855    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:23.822946    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:23.822946    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:23.822946    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:23.825849    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:23.825849    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:23.825849    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:23.825849    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:23.825849    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:23.825849    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:23.826023    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:23 GMT
	I0501 04:16:23.826023    4352 round_trippers.go:580]     Audit-Id: a9379409-b041-43c4-bc4c-b7b45eb4c291
	I0501 04:16:23.826384    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:24.319157    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:24.319157    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:24.319157    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:24.319157    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:24.323877    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:24.323877    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:24.323877    4352 round_trippers.go:580]     Audit-Id: 81285df0-dac6-42b0-af38-69145c972490
	I0501 04:16:24.323877    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:24.323877    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:24.323877    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:24.323877    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:24.323877    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:24 GMT
	I0501 04:16:24.324434    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:24.325607    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:24.325698    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:24.325698    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:24.325698    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:24.331084    4352 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 04:16:24.331084    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:24.331084    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:24.331084    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:24.331084    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:24.331084    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:24 GMT
	I0501 04:16:24.331084    4352 round_trippers.go:580]     Audit-Id: a70680e3-a313-4bf9-879f-624497c3c30e
	I0501 04:16:24.331084    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:24.331084    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:24.818740    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:24.818740    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:24.818740    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:24.818740    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:24.823254    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:24.823254    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:24.823254    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:24.823254    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:24 GMT
	I0501 04:16:24.823254    4352 round_trippers.go:580]     Audit-Id: b778dd97-8699-43d7-89c5-e24aa8a65a07
	I0501 04:16:24.823254    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:24.823741    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:24.823741    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:24.823923    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:24.824668    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:24.824668    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:24.824668    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:24.824668    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:24.827979    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:24.827979    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:24.827979    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:24.827979    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:24 GMT
	I0501 04:16:24.827979    4352 round_trippers.go:580]     Audit-Id: 834cfe03-11e1-498f-a1e1-cf2da60cd7b6
	I0501 04:16:24.828162    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:24.828162    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:24.828162    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:24.828595    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:25.313778    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:25.313778    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:25.313778    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:25.313778    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:25.317405    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:25.318424    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:25.318424    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:25.318424    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:25.318424    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:25 GMT
	I0501 04:16:25.318424    4352 round_trippers.go:580]     Audit-Id: d7b1b757-2128-457d-952c-0a043f9d172f
	I0501 04:16:25.318424    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:25.318424    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:25.318655    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:25.319674    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:25.319754    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:25.319754    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:25.319754    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:25.323023    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:25.323023    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:25.323023    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:25.323023    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:25.323023    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:25.323023    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:25 GMT
	I0501 04:16:25.323023    4352 round_trippers.go:580]     Audit-Id: a727c28e-0a2a-4f75-a39b-38db5a7147ef
	I0501 04:16:25.323023    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:25.323564    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:25.811432    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:25.811513    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:25.811513    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:25.811513    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:25.815503    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:25.815503    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:25.815503    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:25.815503    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:25.815503    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:25 GMT
	I0501 04:16:25.815503    4352 round_trippers.go:580]     Audit-Id: ebc60e7e-206d-4b9c-b3e5-308ae679b33d
	I0501 04:16:25.815503    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:25.815503    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:25.815772    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:25.816351    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:25.816509    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:25.816509    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:25.816509    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:25.818761    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:25.819255    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:25.819255    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:25.819255    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:25 GMT
	I0501 04:16:25.819255    4352 round_trippers.go:580]     Audit-Id: 083ee0c2-1a4e-49dd-a15b-72018a6364ce
	I0501 04:16:25.819255    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:25.819255    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:25.819255    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:25.819255    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:25.820137    4352 pod_ready.go:102] pod "coredns-7db6d8ff4d-8w9hq" in "kube-system" namespace has status "Ready":"False"
	I0501 04:16:26.314159    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:26.314159    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:26.314159    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:26.314159    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:26.318901    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:26.319002    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:26.319002    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:26 GMT
	I0501 04:16:26.319002    4352 round_trippers.go:580]     Audit-Id: 6155ae63-0951-4263-8b61-926605eb8751
	I0501 04:16:26.319002    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:26.319002    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:26.319002    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:26.319002    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:26.319338    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:26.320050    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:26.320130    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:26.320130    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:26.320130    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:26.322386    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:26.322386    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:26.322386    4352 round_trippers.go:580]     Audit-Id: fa069478-4099-4084-9493-3c0cb128ba57
	I0501 04:16:26.322386    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:26.322386    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:26.322386    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:26.322386    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:26.322386    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:26 GMT
	I0501 04:16:26.323623    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:26.814772    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:26.814772    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:26.814901    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:26.814901    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:26.818397    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:26.819266    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:26.819266    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:26 GMT
	I0501 04:16:26.819266    4352 round_trippers.go:580]     Audit-Id: 97e310c6-47bd-407f-aa4b-1e1292313dd3
	I0501 04:16:26.819266    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:26.819266    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:26.819266    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:26.819266    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:26.819607    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:26.820918    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:26.820918    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:26.820918    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:26.820918    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:26.825448    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:26.825544    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:26.825544    4352 round_trippers.go:580]     Audit-Id: 37f0a0f3-37f6-43bc-953c-2560e8523f51
	I0501 04:16:26.825544    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:26.825544    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:26.825544    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:26.825544    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:26.825544    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:26 GMT
	I0501 04:16:26.825912    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:27.313128    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:27.313227    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:27.313227    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:27.313227    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:27.317548    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:27.317649    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:27.317649    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:27 GMT
	I0501 04:16:27.317649    4352 round_trippers.go:580]     Audit-Id: f0a20432-f5b4-4b84-8a80-14530e7d80e7
	I0501 04:16:27.317649    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:27.317649    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:27.317649    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:27.317649    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:27.317932    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:27.318817    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:27.318817    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:27.318817    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:27.318817    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:27.328230    4352 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0501 04:16:27.328230    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:27.328230    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:27.328230    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:27.328230    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:27.328230    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:27 GMT
	I0501 04:16:27.328230    4352 round_trippers.go:580]     Audit-Id: b4308ccc-0003-4194-a5b8-7412f1cac1f0
	I0501 04:16:27.328230    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:27.329251    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:27.813953    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:27.814066    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:27.814066    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:27.814066    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:27.818512    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:27.819041    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:27.819041    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:27.819041    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:27 GMT
	I0501 04:16:27.819041    4352 round_trippers.go:580]     Audit-Id: e50c69f8-ce3c-4e3d-8acd-4950a65f682b
	I0501 04:16:27.819041    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:27.819041    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:27.819041    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:27.819281    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:27.819878    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:27.819878    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:27.819878    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:27.819878    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:27.823068    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:27.823336    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:27.823423    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:27 GMT
	I0501 04:16:27.823423    4352 round_trippers.go:580]     Audit-Id: d818f2ab-838b-49c5-842c-d9fe922d6d76
	I0501 04:16:27.823423    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:27.823423    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:27.823423    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:27.823423    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:27.823751    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:27.824393    4352 pod_ready.go:102] pod "coredns-7db6d8ff4d-8w9hq" in "kube-system" namespace has status "Ready":"False"
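[editor's note] The timestamps in these cycles advance in roughly 500ms steps (…:27.31, :27.81, :28.31 …), which suggests a fixed-interval poll with an overall deadline. A minimal sketch of such a loop, assuming a 500ms interval and reusing the hypothetical isPodReady helper sketched earlier (this is illustrative, not minikube's actual wait loop):

package readiness

import (
	"context"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodReady is a hypothetical wrapper: it re-checks isPodReady
// every 500ms until the pod reports Ready or the timeout expires,
// matching the cadence of the GETs logged above.
func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			return isPodReady(ctx, cs, ns, name)
		})
}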
	I0501 04:16:28.310861    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:28.310861    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:28.310861    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:28.310947    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:28.314248    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:28.314248    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:28.314248    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:28 GMT
	I0501 04:16:28.314248    4352 round_trippers.go:580]     Audit-Id: 9986cd0b-09e8-410c-bcb6-057ad45cee9d
	I0501 04:16:28.314248    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:28.314248    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:28.314248    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:28.314248    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:28.315078    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:28.315849    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:28.315907    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:28.315907    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:28.315907    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:28.318816    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:28.318886    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:28.318886    4352 round_trippers.go:580]     Audit-Id: ccdda6e2-5208-4eee-8f8d-16441b853e0e
	I0501 04:16:28.318886    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:28.318886    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:28.318886    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:28.318886    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:28.318886    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:28 GMT
	I0501 04:16:28.319398    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:28.812245    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:28.812370    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:28.812370    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:28.812370    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:28.817262    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:28.817536    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:28.817536    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:28.817536    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:28.817536    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:28.817536    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:28 GMT
	I0501 04:16:28.817536    4352 round_trippers.go:580]     Audit-Id: 9d906dfe-1f9e-44fc-b517-cfd18e18f34f
	I0501 04:16:28.817536    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:28.818478    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:28.819006    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:28.819006    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:28.819006    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:28.819006    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:28.821610    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:28.821610    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:28.821610    4352 round_trippers.go:580]     Audit-Id: 81c2a6a1-5aa0-42c3-b12f-a9b1ba482460
	I0501 04:16:28.822670    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:28.822732    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:28.822774    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:28.822774    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:28.822774    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:28 GMT
	I0501 04:16:28.823100    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:29.313081    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:29.313081    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:29.313081    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:29.313081    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:29.318095    4352 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 04:16:29.318095    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:29.318095    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:29.318095    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:29 GMT
	I0501 04:16:29.318095    4352 round_trippers.go:580]     Audit-Id: 3d4fce96-bab5-43f8-9a3c-4a9bd918cd83
	I0501 04:16:29.318095    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:29.318095    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:29.318095    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:29.318095    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:29.319232    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:29.319232    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:29.319293    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:29.319293    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:29.333692    4352 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0501 04:16:29.333692    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:29.333692    4352 round_trippers.go:580]     Audit-Id: c00fe54a-08de-4a55-bb1d-beba5ea0bb34
	I0501 04:16:29.333692    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:29.333692    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:29.333692    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:29.333692    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:29.333692    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:29 GMT
	I0501 04:16:29.334132    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:29.811929    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:29.812038    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:29.812038    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:29.812038    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:29.816220    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:29.816220    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:29.816220    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:29.816220    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:29 GMT
	I0501 04:16:29.816220    4352 round_trippers.go:580]     Audit-Id: dd1f9929-d6d6-4aee-b394-03b8c7136961
	I0501 04:16:29.816220    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:29.816220    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:29.816220    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:29.817087    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:29.817789    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:29.818352    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:29.818352    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:29.818352    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:29.822112    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:29.822112    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:29.822112    4352 round_trippers.go:580]     Audit-Id: 4f7a9b1b-79a0-41e9-bf6e-a0096656d4d7
	I0501 04:16:29.822112    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:29.822112    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:29.822112    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:29.822112    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:29.822112    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:29 GMT
	I0501 04:16:29.822352    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:30.306888    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:30.306888    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:30.307025    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:30.307025    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:30.310450    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:30.310884    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:30.310884    4352 round_trippers.go:580]     Audit-Id: 573fcf2e-208c-4a0d-8d79-7dd435a2e58b
	I0501 04:16:30.310884    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:30.310884    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:30.310884    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:30.310884    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:30.310884    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:30 GMT
	I0501 04:16:30.310884    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:30.311752    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:30.311752    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:30.311752    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:30.311752    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:30.314341    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:30.314341    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:30.315151    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:30.315151    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:30.315151    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:30.315151    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:30 GMT
	I0501 04:16:30.315151    4352 round_trippers.go:580]     Audit-Id: 27ce241c-6ddc-4ad0-9c6c-36bced236f74
	I0501 04:16:30.315151    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:30.315430    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:30.315795    4352 pod_ready.go:102] pod "coredns-7db6d8ff4d-8w9hq" in "kube-system" namespace has status "Ready":"False"
	I0501 04:16:30.815525    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:30.815749    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:30.815749    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:30.815749    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:30.819957    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:30.819957    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:30.819957    4352 round_trippers.go:580]     Audit-Id: be06eea0-bfdd-45e0-b3f6-df8bd6e26364
	I0501 04:16:30.819957    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:30.820547    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:30.820547    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:30.820547    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:30.820547    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:30 GMT
	I0501 04:16:30.820763    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:30.821511    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:30.821576    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:30.821576    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:30.821576    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:30.823734    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:30.823734    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:30.823734    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:30.823734    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:30.823734    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:30 GMT
	I0501 04:16:30.823734    4352 round_trippers.go:580]     Audit-Id: 1c2077ae-ea8e-4e17-bb2a-3e60ef1cae35
	I0501 04:16:30.823734    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:30.823734    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:30.824624    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:31.312798    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:31.312798    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:31.312915    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:31.312915    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:31.317375    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:31.317723    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:31.317783    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:31 GMT
	I0501 04:16:31.317783    4352 round_trippers.go:580]     Audit-Id: 5a638a9d-67cd-4a80-819b-166467a4b708
	I0501 04:16:31.317783    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:31.317783    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:31.317783    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:31.317783    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:31.317783    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:31.319065    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:31.319153    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:31.319153    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:31.319153    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:31.322779    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:31.322779    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:31.322989    4352 round_trippers.go:580]     Audit-Id: c79c82d7-e839-42ac-8b7a-afc2771d7144
	I0501 04:16:31.323148    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:31.323215    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:31.323215    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:31.323215    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:31.323215    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:31 GMT
	I0501 04:16:31.323215    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:31.810660    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:31.810660    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:31.810740    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:31.810740    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:31.816076    4352 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 04:16:31.816076    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:31.816323    4352 round_trippers.go:580]     Audit-Id: b7d23653-018b-41af-a27c-18d0a21ea855
	I0501 04:16:31.816323    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:31.816323    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:31.816323    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:31.816323    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:31.816323    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:31 GMT
	I0501 04:16:31.816454    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:31.817213    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:31.817213    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:31.817213    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:31.817213    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:31.819563    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:31.819563    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:31.819563    4352 round_trippers.go:580]     Audit-Id: 969de7f0-62de-427a-81cb-00aa4bc2a125
	I0501 04:16:31.819563    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:31.819563    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:31.819563    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:31.820391    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:31.820391    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:31 GMT
	I0501 04:16:31.820707    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:32.309679    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:32.309900    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:32.309900    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:32.309900    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:32.313252    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:32.314134    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:32.314134    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:32 GMT
	I0501 04:16:32.314134    4352 round_trippers.go:580]     Audit-Id: 4cb96fc0-d7f4-4cf4-922a-be282b23755e
	I0501 04:16:32.314134    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:32.314134    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:32.314134    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:32.314134    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:32.314393    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:32.315125    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:32.315125    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:32.315125    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:32.315125    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:32.317978    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:32.318510    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:32.318510    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:32.318510    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:32.318510    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:32.318510    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:32.318510    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:32 GMT
	I0501 04:16:32.318510    4352 round_trippers.go:580]     Audit-Id: 98c6b91e-4d13-4caf-b4ff-9e28ac69c82f
	I0501 04:16:32.318510    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:32.319285    4352 pod_ready.go:102] pod "coredns-7db6d8ff4d-8w9hq" in "kube-system" namespace has status "Ready":"False"
	I0501 04:16:32.808649    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:32.808649    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:32.808649    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:32.808649    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:32.811244    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:32.811244    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:32.811244    4352 round_trippers.go:580]     Audit-Id: 23e0bd42-42c3-4abd-acfa-0ade72ff458a
	I0501 04:16:32.811244    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:32.812258    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:32.812258    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:32.812258    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:32.812258    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:32 GMT
	I0501 04:16:32.812454    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:32.813325    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:32.813325    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:32.813325    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:32.813325    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:32.816066    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:32.816066    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:32.816066    4352 round_trippers.go:580]     Audit-Id: 4220837a-f802-471f-9909-fc23a4dcb1d8
	I0501 04:16:32.816066    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:32.816066    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:32.816066    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:32.816066    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:32.816970    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:32 GMT
	I0501 04:16:32.817237    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:33.307202    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:33.307202    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:33.307202    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:33.307425    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:33.311293    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:33.311532    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:33.311532    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:33.311532    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:33 GMT
	I0501 04:16:33.311532    4352 round_trippers.go:580]     Audit-Id: b1bacc20-cd02-4561-b2ff-bec3c53496a0
	I0501 04:16:33.311611    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:33.311611    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:33.311611    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:33.311803    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:33.312605    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:33.312605    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:33.312605    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:33.312682    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:33.314336    4352 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0501 04:16:33.315143    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:33.315143    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:33.315210    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:33.315210    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:33.315210    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:33.315210    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:33 GMT
	I0501 04:16:33.315210    4352 round_trippers.go:580]     Audit-Id: 06295fb7-9662-46f1-b0de-dd0404d5f802
	I0501 04:16:33.315678    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:33.820257    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:33.820257    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:33.820257    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:33.820257    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:33.823821    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:33.823821    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:33.823821    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:33.823821    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:33 GMT
	I0501 04:16:33.823821    4352 round_trippers.go:580]     Audit-Id: f06ef77f-f188-4e42-a1ed-0433c4bdc5d4
	I0501 04:16:33.823821    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:33.824912    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:33.824912    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:33.825732    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:33.825893    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:33.825893    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:33.825893    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:33.825893    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:33.829697    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:33.829697    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:33.829697    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:33.829697    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:33.829697    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:33.830195    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:33 GMT
	I0501 04:16:33.830195    4352 round_trippers.go:580]     Audit-Id: 7c3e4a49-a523-4783-9551-453d8888aa4e
	I0501 04:16:33.830195    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:33.830258    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:34.320389    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:34.320619    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:34.320619    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:34.320619    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:34.323893    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:34.324583    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:34.324583    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:34.324583    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:34.324583    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:34.324583    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:34.324583    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:34 GMT
	I0501 04:16:34.324583    4352 round_trippers.go:580]     Audit-Id: d9ec8939-2ee9-45e1-83f9-b16aa96e9726
	I0501 04:16:34.324855    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:34.325519    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:34.325519    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:34.325519    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:34.325519    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:34.329186    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:34.329186    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:34.329186    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:34.329186    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:34.329186    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:34.329186    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:34.329186    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:34 GMT
	I0501 04:16:34.329186    4352 round_trippers.go:580]     Audit-Id: fb013243-8426-4315-a9cb-0dde6493d16c
	I0501 04:16:34.329186    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:34.329814    4352 pod_ready.go:102] pod "coredns-7db6d8ff4d-8w9hq" in "kube-system" namespace has status "Ready":"False"
	I0501 04:16:34.820408    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:34.820408    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:34.820408    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:34.820408    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:34.824819    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:34.824819    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:34.825317    4352 round_trippers.go:580]     Audit-Id: 58dd9c85-13b3-48e1-a6ab-02566d767ab0
	I0501 04:16:34.825317    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:34.825317    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:34.825317    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:34.825317    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:34.825317    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:34 GMT
	I0501 04:16:34.825507    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:34.826342    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:34.826342    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:34.826342    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:34.826342    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:34.829594    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:34.829594    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:34.829594    4352 round_trippers.go:580]     Audit-Id: 4c288c98-57f1-488b-a370-4af881430ca8
	I0501 04:16:34.829594    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:34.829594    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:34.829594    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:34.829594    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:34.829594    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:34 GMT
	I0501 04:16:34.830723    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:35.307080    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:35.307080    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:35.307370    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:35.307370    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:35.312613    4352 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 04:16:35.312613    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:35.312893    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:35.312893    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:35.312893    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:35 GMT
	I0501 04:16:35.312893    4352 round_trippers.go:580]     Audit-Id: 18383b8d-b903-47ea-aa0a-6481b861c5fe
	I0501 04:16:35.312893    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:35.312893    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:35.313143    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:35.313983    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:35.313983    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:35.313983    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:35.313983    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:35.316962    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:35.317260    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:35.317260    4352 round_trippers.go:580]     Audit-Id: b87773c6-17de-4dc4-9f96-1d8e9ffdff64
	I0501 04:16:35.317260    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:35.317260    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:35.317260    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:35.317260    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:35.317260    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:35 GMT
	I0501 04:16:35.318272    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:35.820933    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:35.820933    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:35.820933    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:35.820933    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:35.824802    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:35.824802    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:35.825812    4352 round_trippers.go:580]     Audit-Id: 1b38b88e-0295-4a87-b7eb-e3dc709abb80
	I0501 04:16:35.825812    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:35.825812    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:35.825812    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:35.825812    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:35.825812    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:35 GMT
	I0501 04:16:35.826149    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:35.826461    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:35.826461    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:35.826461    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:35.826461    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:35.831246    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:35.831246    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:35.831246    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:35.831246    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:35 GMT
	I0501 04:16:35.831246    4352 round_trippers.go:580]     Audit-Id: 447a3977-f969-494f-8de2-0f19cc116af2
	I0501 04:16:35.831246    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:35.831246    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:35.831246    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:35.831966    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:36.307267    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:36.307442    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:36.307442    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:36.307442    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:36.312159    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:36.312159    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:36.312159    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:36.312159    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:36.312159    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:36 GMT
	I0501 04:16:36.312159    4352 round_trippers.go:580]     Audit-Id: 2c7f2e9f-891e-44c6-84c8-15d6ab08f4d5
	I0501 04:16:36.312159    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:36.312159    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:36.312159    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:36.313513    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:36.313513    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:36.313578    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:36.313578    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:36.316314    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:36.316314    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:36.316314    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:36.316314    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:36.316314    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:36.316314    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:36.316314    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:36 GMT
	I0501 04:16:36.316314    4352 round_trippers.go:580]     Audit-Id: b66202a8-fae1-47a4-a9dd-67a5872b5a63
	I0501 04:16:36.317462    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:36.810561    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:36.810683    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:36.810683    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:36.810683    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:36.817590    4352 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 04:16:36.817590    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:36.817665    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:36.817665    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:36.817665    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:36.817665    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:36 GMT
	I0501 04:16:36.817665    4352 round_trippers.go:580]     Audit-Id: c01ada7e-5e8c-41fa-839e-9883969bf6c4
	I0501 04:16:36.817665    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:36.818212    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:36.819170    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:36.819170    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:36.819170    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:36.819170    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:36.824401    4352 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 04:16:36.824401    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:36.824401    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:36.824401    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:36 GMT
	I0501 04:16:36.824401    4352 round_trippers.go:580]     Audit-Id: 2d981cab-3c10-4df2-9a3d-44873e837195
	I0501 04:16:36.824401    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:36.824401    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:36.824401    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:36.825832    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:36.825867    4352 pod_ready.go:102] pod "coredns-7db6d8ff4d-8w9hq" in "kube-system" namespace has status "Ready":"False"
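
Each iteration also re-fetches the Node object; its labels (node-role.kubernetes.io/control-plane, minikube.k8s.io/primary) mark multinode-289800 as the primary control-plane node. For reference, a standard-library sketch of pulling those fields out of such a response body; the JSON literal is an abbreviated, hand-trimmed copy of the logged object, not the full payload.

package main

import (
	"encoding/json"
	"fmt"
)

// node models only the fields of interest from the Node JSON above.
type node struct {
	Metadata struct {
		Name   string            `json:"name"`
		Labels map[string]string `json:"labels"`
	} `json:"metadata"`
}

func main() {
	// Abbreviated copy of the logged response body (illustrative only).
	raw := `{"kind":"Node","metadata":{"name":"multinode-289800","labels":{"node-role.kubernetes.io/control-plane":"","minikube.k8s.io/primary":"true"}}}`
	var n node
	if err := json.Unmarshal([]byte(raw), &n); err != nil {
		panic(err)
	}
	// The control-plane role label carries an empty value; presence is what matters.
	_, controlPlane := n.Metadata.Labels["node-role.kubernetes.io/control-plane"]
	fmt.Printf("node=%s control-plane=%v primary=%s\n",
		n.Metadata.Name, controlPlane, n.Metadata.Labels["minikube.k8s.io/primary"])
}
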
	I0501 04:16:37.312365    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:37.312635    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:37.312635    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:37.312635    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:37.316962    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:37.317253    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:37.317253    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:37 GMT
	I0501 04:16:37.317253    4352 round_trippers.go:580]     Audit-Id: 27e287d8-df29-4bcb-874d-59dd127f1e1c
	I0501 04:16:37.317253    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:37.317253    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:37.317253    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:37.317253    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:37.317484    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:37.318195    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:37.318195    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:37.318195    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:37.318195    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:37.323974    4352 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 04:16:37.323974    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:37.323974    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:37.323974    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:37.323974    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:37.324128    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:37 GMT
	I0501 04:16:37.324128    4352 round_trippers.go:580]     Audit-Id: 0de19efb-b341-4ea6-b483-dcda9d658a0f
	I0501 04:16:37.324128    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:37.324885    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:37.815618    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:37.815618    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:37.815739    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:37.815739    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:37.821154    4352 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 04:16:37.821154    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:37.821154    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:37.821154    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:37.821154    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:37.821154    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:37.821154    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:37 GMT
	I0501 04:16:37.821670    4352 round_trippers.go:580]     Audit-Id: 6fd6ae4b-bc8e-4bcc-82bb-850115e1fbd8
	I0501 04:16:37.821979    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:37.822740    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:37.822818    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:37.822818    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:37.822818    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:37.825565    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:37.825565    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:37.825565    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:37.825565    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:37.825565    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:37 GMT
	I0501 04:16:37.825565    4352 round_trippers.go:580]     Audit-Id: 78890d41-1331-4d5c-bcd2-561fd0335438
	I0501 04:16:37.825565    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:37.825565    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:37.826550    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:38.315480    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:38.315814    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:38.315814    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:38.315814    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:38.319859    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:38.319859    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:38.319859    4352 round_trippers.go:580]     Audit-Id: 013133a8-bce9-4944-b43f-60d4a32d9cd6
	I0501 04:16:38.319859    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:38.319859    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:38.319859    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:38.319859    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:38.319859    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:38 GMT
	I0501 04:16:38.321292    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:38.321999    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:38.322060    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:38.322060    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:38.322060    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:38.324758    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:38.324758    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:38.324758    4352 round_trippers.go:580]     Audit-Id: b13f298f-dd89-48d8-8f25-35814342a5b7
	I0501 04:16:38.324758    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:38.324758    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:38.324758    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:38.324758    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:38.324758    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:38 GMT
	I0501 04:16:38.325476    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:38.815382    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:38.815382    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:38.815382    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:38.815467    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:38.819174    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:38.819174    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:38.819174    4352 round_trippers.go:580]     Audit-Id: 739af986-8e7a-412c-ae36-8d0c22198a26
	I0501 04:16:38.819174    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:38.819174    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:38.819174    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:38.819174    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:38.819174    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:38 GMT
	I0501 04:16:38.820494    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:38.821336    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:38.821450    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:38.821450    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:38.821450    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:38.826874    4352 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 04:16:38.826874    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:38.826874    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:38.826874    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:38 GMT
	I0501 04:16:38.826874    4352 round_trippers.go:580]     Audit-Id: 10634919-4852-4be2-aa4e-ef82afd68924
	I0501 04:16:38.826874    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:38.826874    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:38.826874    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:38.826874    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:38.827615    4352 pod_ready.go:102] pod "coredns-7db6d8ff4d-8w9hq" in "kube-system" namespace has status "Ready":"False"
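
Notice that the resourceVersion is identical in every iteration ("1798" for the pod, "1932" for the node): the objects are not changing, so each half-second poll re-downloads the same state. A watch on the single pod would surface the Ready transition as an event instead; the sketch below is an illustrative client-go alternative under that assumption, not what minikube actually does.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative kubeconfig, as in the polling sketch earlier in this log.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Watch just this pod rather than re-GETting it twice a second.
	w, err := cs.CoreV1().Pods("kube-system").Watch(context.Background(), metav1.ListOptions{
		FieldSelector: "metadata.name=coredns-7db6d8ff4d-8w9hq",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		pod, ok := ev.Object.(*corev1.Pod)
		if !ok {
			continue
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				fmt.Println("pod is Ready")
				return
			}
		}
	}
}
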
	I0501 04:16:39.315177    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:39.315177    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:39.315177    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:39.315177    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:39.319879    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:39.319879    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:39.319879    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:39.319879    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:39 GMT
	I0501 04:16:39.319879    4352 round_trippers.go:580]     Audit-Id: 35c9e661-a6f7-4817-9714-70ab2e75b894
	I0501 04:16:39.320767    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:39.320767    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:39.320767    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:39.321008    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:39.321885    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:39.321885    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:39.321885    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:39.321885    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:39.325248    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:39.325248    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:39.325248    4352 round_trippers.go:580]     Audit-Id: 08640a8a-a836-401e-b9a4-48ab4ddb050e
	I0501 04:16:39.325248    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:39.325721    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:39.325721    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:39.325721    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:39.325721    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:39 GMT
	I0501 04:16:39.325791    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:39.815850    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:39.816005    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:39.816005    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:39.816005    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:39.821462    4352 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 04:16:39.821462    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:39.821462    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:39 GMT
	I0501 04:16:39.821462    4352 round_trippers.go:580]     Audit-Id: 0bd931a6-6870-424e-9a62-342e10f92b01
	I0501 04:16:39.821462    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:39.822458    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:39.822458    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:39.822481    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:39.823098    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:39.824385    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:39.824385    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:39.824385    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:39.824458    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:39.827261    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:39.827261    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:39.827261    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:39 GMT
	I0501 04:16:39.827261    4352 round_trippers.go:580]     Audit-Id: ed84fb05-7161-4a46-8b48-db2af233d62d
	I0501 04:16:39.827261    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:39.827667    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:39.827667    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:39.827667    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:39.828048    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:40.314215    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:40.314324    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:40.314324    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:40.314324    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:40.318700    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:40.318784    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:40.318784    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:40.318784    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:40.318784    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:40 GMT
	I0501 04:16:40.318784    4352 round_trippers.go:580]     Audit-Id: 098ff21d-9149-4af3-a15f-e01c4b362553
	I0501 04:16:40.318784    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:40.318784    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:40.318971    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:40.319686    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:40.319686    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:40.319686    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:40.319686    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:40.322867    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:40.323838    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:40.323838    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:40.323909    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:40.323909    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:40.323909    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:40.323909    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:40 GMT
	I0501 04:16:40.323909    4352 round_trippers.go:580]     Audit-Id: 9754cb84-13ee-4b31-8041-801f08cd591d
	I0501 04:16:40.324176    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:40.813790    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:40.813790    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:40.813790    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:40.813790    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:40.819977    4352 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 04:16:40.819977    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:40.819977    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:40.819977    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:40.819977    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:40 GMT
	I0501 04:16:40.819977    4352 round_trippers.go:580]     Audit-Id: 63767237-36ee-4e9a-a476-31383824a40c
	I0501 04:16:40.819977    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:40.820579    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:40.821357    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:40.822348    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:40.822348    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:40.822348    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:40.822348    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:40.826243    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:40.826243    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:40.826243    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:40 GMT
	I0501 04:16:40.826243    4352 round_trippers.go:580]     Audit-Id: 5cb431a9-4faa-4c61-9777-c21172a8876d
	I0501 04:16:40.826243    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:40.826243    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:40.826243    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:40.826243    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:40.826243    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:41.308554    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:41.308554    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:41.308554    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:41.308554    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:41.313203    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:41.313357    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:41.313357    4352 round_trippers.go:580]     Audit-Id: 85d0f24d-e02a-4579-80f1-d0622cb1437c
	I0501 04:16:41.313357    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:41.313357    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:41.313357    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:41.313357    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:41.313357    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:41 GMT
	I0501 04:16:41.313533    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:41.314156    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:41.314337    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:41.314337    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:41.314337    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:41.317487    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:41.317689    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:41.317689    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:41 GMT
	I0501 04:16:41.317689    4352 round_trippers.go:580]     Audit-Id: 59cc71cd-ea44-4838-a1fd-9950669ff826
	I0501 04:16:41.317689    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:41.317689    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:41.317689    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:41.317689    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:41.317823    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:41.318592    4352 pod_ready.go:102] pod "coredns-7db6d8ff4d-8w9hq" in "kube-system" namespace has status "Ready":"False"
	I0501 04:16:41.808173    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:41.808322    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:41.808464    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:41.808464    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:41.815045    4352 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0501 04:16:41.815112    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:41.815112    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:41.815112    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:41.815112    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:41 GMT
	I0501 04:16:41.815112    4352 round_trippers.go:580]     Audit-Id: 52ea6c9c-008b-4934-9b48-a5c1f3687391
	I0501 04:16:41.815112    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:41.815112    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:41.815414    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:41.816084    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:41.816084    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:41.816084    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:41.816084    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:41.819697    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:41.819697    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:41.819697    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:41 GMT
	I0501 04:16:41.819697    4352 round_trippers.go:580]     Audit-Id: c5d4e948-4cb1-4617-9286-d4f30655d689
	I0501 04:16:41.819697    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:41.819881    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:41.819881    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:41.819881    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:41.820104    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:42.321866    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:42.321866    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:42.321866    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:42.321866    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:42.326244    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:42.326244    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:42.326244    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:42.326244    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:42.326244    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:42.326244    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:42.326244    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:42 GMT
	I0501 04:16:42.326244    4352 round_trippers.go:580]     Audit-Id: 0e8063e0-0aa4-4965-970a-b0b4c167ede3
	I0501 04:16:42.326244    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:42.327345    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:42.327345    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:42.327345    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:42.327345    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:42.330059    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:42.330059    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:42.330059    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:42.330059    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:42.330059    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:42.330059    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:42.330059    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:42 GMT
	I0501 04:16:42.330059    4352 round_trippers.go:580]     Audit-Id: 7d52bd0c-3841-4d55-9a03-f8431dcca877
	I0501 04:16:42.330059    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:42.820689    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:42.820689    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:42.820689    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:42.820689    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:42.825639    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:42.825639    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:42.825639    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:42 GMT
	I0501 04:16:42.825639    4352 round_trippers.go:580]     Audit-Id: 3002f998-f3c1-433a-8032-bbd621a3f77e
	I0501 04:16:42.825639    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:42.825639    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:42.825639    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:42.825639    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:42.825639    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:42.826639    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:42.827163    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:42.827163    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:42.827403    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:42.831087    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:42.831087    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:42.831087    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:42.831935    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:42 GMT
	I0501 04:16:42.831935    4352 round_trippers.go:580]     Audit-Id: 513691f1-cdec-4433-9a9d-b7f8f3be5898
	I0501 04:16:42.831935    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:42.831935    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:42.831935    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:42.832311    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:43.319181    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:43.319181    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:43.319296    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:43.319296    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:43.323198    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:43.323351    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:43.323351    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:43.323351    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:43 GMT
	I0501 04:16:43.323351    4352 round_trippers.go:580]     Audit-Id: 2a706883-01e4-4692-8f8b-32c9ba64a60b
	I0501 04:16:43.323351    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:43.323351    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:43.323351    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:43.323592    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:43.324179    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:43.324290    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:43.324290    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:43.324290    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:43.327599    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:43.327599    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:43.327599    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:43 GMT
	I0501 04:16:43.327599    4352 round_trippers.go:580]     Audit-Id: af624360-28f1-453f-aa3c-401d700a0a93
	I0501 04:16:43.328015    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:43.328015    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:43.328015    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:43.328015    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:43.328109    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:43.328109    4352 pod_ready.go:102] pod "coredns-7db6d8ff4d-8w9hq" in "kube-system" namespace has status "Ready":"False"
	I0501 04:16:43.817904    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:43.817904    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:43.817904    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:43.818132    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:43.822650    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:43.822741    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:43.822804    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:43.822804    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:43.822804    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:43 GMT
	I0501 04:16:43.822804    4352 round_trippers.go:580]     Audit-Id: ddb84727-2e00-48f6-8ffa-79ba8e4791d5
	I0501 04:16:43.822804    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:43.822804    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:43.823028    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:43.823996    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:43.823996    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:43.823996    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:43.823996    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:43.829181    4352 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 04:16:43.829181    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:43.829181    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:43.829181    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:43.829181    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:43.829181    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:43 GMT
	I0501 04:16:43.829181    4352 round_trippers.go:580]     Audit-Id: 417d7059-96ba-4f56-a209-9f6a16f69b4e
	I0501 04:16:43.829181    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:43.829925    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:44.318312    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:44.318504    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:44.318504    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:44.318504    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:44.322091    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:44.322668    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:44.322668    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:44.322668    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:44.322668    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:44 GMT
	I0501 04:16:44.322668    4352 round_trippers.go:580]     Audit-Id: 12fc0bb8-c5b5-4443-9133-5b1663c3f1b7
	I0501 04:16:44.322668    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:44.322668    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:44.323677    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:44.324689    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:44.324769    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:44.324769    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:44.324769    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:44.326986    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:44.326986    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:44.326986    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:44 GMT
	I0501 04:16:44.326986    4352 round_trippers.go:580]     Audit-Id: 464d7361-3b96-4818-8877-f0104f516ffd
	I0501 04:16:44.326986    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:44.326986    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:44.326986    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:44.326986    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:44.328360    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:44.817570    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:44.817570    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:44.817570    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:44.817570    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:44.823563    4352 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 04:16:44.823818    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:44.823818    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:44.823818    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:44.823818    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:44.823818    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:44 GMT
	I0501 04:16:44.823818    4352 round_trippers.go:580]     Audit-Id: 210e9e98-d6d0-4330-8815-3a650144cfa1
	I0501 04:16:44.823818    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:44.823818    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:44.824850    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:44.824850    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:44.824850    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:44.824850    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:44.828514    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:44.828514    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:44.828514    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:44.828514    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:44.828514    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:44.828514    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:44.828514    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:44 GMT
	I0501 04:16:44.828514    4352 round_trippers.go:580]     Audit-Id: 77df7147-bfe8-4082-92ea-03b366483db2
	I0501 04:16:44.829519    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:45.320005    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:45.320005    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:45.320005    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:45.320005    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:45.325804    4352 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 04:16:45.326175    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:45.326264    4352 round_trippers.go:580]     Audit-Id: aa7771bc-11b1-473d-aeb7-178f186416cc
	I0501 04:16:45.326264    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:45.326315    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:45.326315    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:45.326315    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:45.326315    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:45 GMT
	I0501 04:16:45.326315    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:45.327058    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:45.327058    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:45.327058    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:45.327058    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:45.330661    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:45.331046    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:45.331125    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:45.331125    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:45 GMT
	I0501 04:16:45.331236    4352 round_trippers.go:580]     Audit-Id: a4164826-1b6e-4b46-b36d-ef015a8cd88d
	I0501 04:16:45.331410    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:45.331410    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:45.331410    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:45.331479    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:45.332241    4352 pod_ready.go:102] pod "coredns-7db6d8ff4d-8w9hq" in "kube-system" namespace has status "Ready":"False"
	I0501 04:16:45.820161    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:45.820339    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:45.820339    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:45.820339    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:45.825275    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:45.825362    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:45.825362    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:45 GMT
	I0501 04:16:45.825362    4352 round_trippers.go:580]     Audit-Id: 647f5b17-4ce0-4a16-aed5-046c5f3c5e3a
	I0501 04:16:45.825362    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:45.825362    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:45.825362    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:45.825362    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:45.826467    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:45.827122    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:45.827122    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:45.827122    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:45.827122    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:45.830762    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:45.830762    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:45.831094    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:45.831094    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:45.831094    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:45 GMT
	I0501 04:16:45.831150    4352 round_trippers.go:580]     Audit-Id: 6b20194f-307a-40f0-abc0-0d907b959926
	I0501 04:16:45.831150    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:45.831150    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:45.831346    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:46.308848    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:46.308848    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:46.308848    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:46.309050    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:46.316225    4352 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 04:16:46.316225    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:46.316225    4352 round_trippers.go:580]     Audit-Id: 1aea1e84-6f10-4741-8c1e-80b01887d3f3
	I0501 04:16:46.316225    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:46.316225    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:46.316225    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:46.316225    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:46.316225    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:46 GMT
	I0501 04:16:46.316225    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:46.317459    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:46.317512    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:46.317512    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:46.317512    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:46.320811    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:46.320966    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:46.320966    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:46.320966    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:46.320966    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:46.320966    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:46 GMT
	I0501 04:16:46.321050    4352 round_trippers.go:580]     Audit-Id: 0ff294d3-ecc4-4031-b144-7884868291a8
	I0501 04:16:46.321050    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:46.321318    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:46.820126    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:46.820126    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:46.820342    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:46.820342    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:46.825098    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:46.825161    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:46.825161    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:46 GMT
	I0501 04:16:46.825161    4352 round_trippers.go:580]     Audit-Id: e5185d22-df2e-41fd-9371-0e5a8e2310c2
	I0501 04:16:46.825161    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:46.825232    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:46.825232    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:46.825232    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:46.825389    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:46.826213    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:46.826213    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:46.826273    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:46.826273    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:46.831078    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:46.831835    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:46.831835    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:46.831835    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:46 GMT
	I0501 04:16:46.831835    4352 round_trippers.go:580]     Audit-Id: 19165879-fa5c-4ca0-ac9e-bea727409296
	I0501 04:16:46.831835    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:46.831835    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:46.831911    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:46.832126    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:47.308311    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:47.308375    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:47.308438    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:47.308499    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:47.312357    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:47.312357    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:47.312357    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:47.312357    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:47.312357    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:47 GMT
	I0501 04:16:47.312357    4352 round_trippers.go:580]     Audit-Id: ab29da2f-90dc-4b15-a300-60e603bb44fd
	I0501 04:16:47.312357    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:47.312876    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:47.313062    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:47.313652    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:47.313652    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:47.313652    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:47.313652    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:47.316289    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:47.316289    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:47.316289    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:47.316289    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:47.316289    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:47.316289    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:47 GMT
	I0501 04:16:47.316289    4352 round_trippers.go:580]     Audit-Id: 20a36e96-a5ae-44f8-bea6-de011ecd7041
	I0501 04:16:47.316289    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:47.317483    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:47.815469    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:47.815533    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:47.815533    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:47.815533    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:47.824952    4352 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0501 04:16:47.824952    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:47.824952    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:47.824952    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:47.824952    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:47 GMT
	I0501 04:16:47.825233    4352 round_trippers.go:580]     Audit-Id: fc54fbbd-551a-40a4-bdf6-4990be9879d0
	I0501 04:16:47.825233    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:47.825233    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:47.825442    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:47.826202    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:47.826202    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:47.826264    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:47.826264    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:47.831050    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:47.831050    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:47.831050    4352 round_trippers.go:580]     Audit-Id: 0ec61068-d881-4fae-a1f6-a3c0ea65f3b9
	I0501 04:16:47.831050    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:47.831050    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:47.831050    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:47.831050    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:47.831050    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:47 GMT
	I0501 04:16:47.832043    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:47.832043    4352 pod_ready.go:102] pod "coredns-7db6d8ff4d-8w9hq" in "kube-system" namespace has status "Ready":"False"
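
[Editor's note] The ~500ms cadence of the repeated GETs above is minikube's pod_ready wait loop polling until the pod's PodReady condition turns True (the pod_ready.go:102 line records each "still False" pass). Below is a minimal sketch of that polling pattern with client-go, assuming a default kubeconfig; it illustrates the technique visible in the log and is not minikube's actual pod_ready.go.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumes ~/.kube/config points at the cluster, as minikube's does.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// 6m0s matches the wait budget stated in the log.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-7db6d8ff4d-8w9hq", metav1.GetOptions{})
		if err != nil {
			panic(err) // includes ctx deadline exceeded
		}
		if podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
}
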
	I0501 04:16:48.309930    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:48.310166    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:48.310166    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:48.310166    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:48.313823    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:48.313823    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:48.313823    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:48.313823    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:48.313823    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:48.314183    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:48 GMT
	I0501 04:16:48.314183    4352 round_trippers.go:580]     Audit-Id: 74ee7c3f-466a-4357-8bc5-08168ccfca95
	I0501 04:16:48.314183    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:48.314399    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:48.315062    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:48.315062    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:48.315062    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:48.315062    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:48.320713    4352 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 04:16:48.320713    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:48.320713    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:48.320713    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:48.320713    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:48 GMT
	I0501 04:16:48.320713    4352 round_trippers.go:580]     Audit-Id: fb8a94c9-10d0-4be4-82f4-1cdff8d0aafc
	I0501 04:16:48.321145    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:48.321145    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:48.321403    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:48.816008    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:48.816008    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:48.816008    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:48.816008    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:48.820624    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:48.820624    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:48.820624    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:48.820624    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:48.820624    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:48.821307    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:48.821307    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:48 GMT
	I0501 04:16:48.821307    4352 round_trippers.go:580]     Audit-Id: 6650ae99-ce6b-4a01-8848-7fa28f69f5c2
	I0501 04:16:48.821574    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1798","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0501 04:16:48.822492    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:48.822569    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:48.822569    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:48.822569    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:48.826741    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:48.826741    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:48.826741    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:48.826741    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:48.826741    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:48 GMT
	I0501 04:16:48.826741    4352 round_trippers.go:580]     Audit-Id: 0221ed10-22a2-4f86-a0c9-9fa755095823
	I0501 04:16:48.826741    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:48.826741    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:48.828246    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:49.321034    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-8w9hq
	I0501 04:16:49.321034    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:49.321034    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:49.321034    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:49.324469    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:49.325245    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:49.325245    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:49 GMT
	I0501 04:16:49.325245    4352 round_trippers.go:580]     Audit-Id: 2827705c-c665-449b-af3c-da67511d2506
	I0501 04:16:49.325245    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:49.325245    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:49.325245    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:49.325245    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:49.325906    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1973","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6790 chars]
	I0501 04:16:49.326765    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:49.326765    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:49.326765    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:49.326765    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:49.329347    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:49.329347    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:49.330007    4352 round_trippers.go:580]     Audit-Id: 516142ff-e58d-4e2e-8fb0-340127a3b761
	I0501 04:16:49.330007    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:49.330007    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:49.330007    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:49.330007    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:49.330007    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:49 GMT
	I0501 04:16:49.330307    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:49.330657    4352 pod_ready.go:92] pod "coredns-7db6d8ff4d-8w9hq" in "kube-system" namespace has status "Ready":"True"
	I0501 04:16:49.330815    4352 pod_ready.go:81] duration metric: took 32.5243737s for pod "coredns-7db6d8ff4d-8w9hq" in "kube-system" namespace to be "Ready" ...
	I0501 04:16:49.330815    4352 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-x9zrw" in "kube-system" namespace to be "Ready" ...
	I0501 04:16:49.330932    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-x9zrw
	I0501 04:16:49.330932    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:49.330984    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:49.330984    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:49.338153    4352 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 04:16:49.338153    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:49.338153    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:49.338153    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:49 GMT
	I0501 04:16:49.338153    4352 round_trippers.go:580]     Audit-Id: fdd5b4ff-00f3-41fa-9f54-7de75e884cbf
	I0501 04:16:49.338153    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:49.338153    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:49.338153    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:49.338775    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-x9zrw","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0b91b14d-bed3-4889-b193-db53daccd395","resourceVersion":"1980","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6790 chars]
	I0501 04:16:49.338853    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:49.338853    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:49.338853    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:49.338853    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:49.342177    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:49.342177    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:49.342177    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:49 GMT
	I0501 04:16:49.342177    4352 round_trippers.go:580]     Audit-Id: c19cfc68-2d1e-457f-8a84-2bd7acb1bde6
	I0501 04:16:49.342177    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:49.342262    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:49.342262    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:49.342262    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:49.342651    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:49.343297    4352 pod_ready.go:92] pod "coredns-7db6d8ff4d-x9zrw" in "kube-system" namespace has status "Ready":"True"
	I0501 04:16:49.343297    4352 pod_ready.go:81] duration metric: took 12.4822ms for pod "coredns-7db6d8ff4d-x9zrw" in "kube-system" namespace to be "Ready" ...
	I0501 04:16:49.343297    4352 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-289800" in "kube-system" namespace to be "Ready" ...
	I0501 04:16:49.343297    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-289800
	I0501 04:16:49.343297    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:49.343297    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:49.343297    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:49.347152    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:49.347152    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:49.347152    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:49.347152    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:49.347152    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:49 GMT
	I0501 04:16:49.347152    4352 round_trippers.go:580]     Audit-Id: 7ffb1de5-6949-49b9-8f16-0e18ce9bcaa4
	I0501 04:16:49.347152    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:49.347152    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:49.347746    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-289800","namespace":"kube-system","uid":"aaf534b6-9f4c-445d-afb9-bd225e1a77fd","resourceVersion":"1847","creationTimestamp":"2024-05-01T04:15:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.209.199:2379","kubernetes.io/config.hash":"b12e9024402f49cfac7440d6a2eaf42d","kubernetes.io/config.mirror":"b12e9024402f49cfac7440d6a2eaf42d","kubernetes.io/config.seen":"2024-05-01T04:15:36.949387188Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T04:15:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6171 chars]
	I0501 04:16:49.348320    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:49.348320    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:49.348320    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:49.348320    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:49.352033    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:49.352033    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:49.352033    4352 round_trippers.go:580]     Audit-Id: c215dc0b-3a6e-4cea-bd2a-5f9b94be5f30
	I0501 04:16:49.352033    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:49.352310    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:49.352310    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:49.352310    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:49.352310    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:49 GMT
	I0501 04:16:49.352430    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:49.353029    4352 pod_ready.go:92] pod "etcd-multinode-289800" in "kube-system" namespace has status "Ready":"True"
	I0501 04:16:49.353029    4352 pod_ready.go:81] duration metric: took 9.7319ms for pod "etcd-multinode-289800" in "kube-system" namespace to be "Ready" ...
	I0501 04:16:49.353029    4352 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-289800" in "kube-system" namespace to be "Ready" ...
	I0501 04:16:49.353029    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-289800
	I0501 04:16:49.353029    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:49.353029    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:49.353029    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:49.357659    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:49.357659    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:49.357659    4352 round_trippers.go:580]     Audit-Id: b6fbfba9-c32d-4b60-bf5b-da27cbc662c7
	I0501 04:16:49.357659    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:49.357659    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:49.357659    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:49.357659    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:49.357659    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:49 GMT
	I0501 04:16:49.358017    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-289800","namespace":"kube-system","uid":"0ee77673-e4b3-4fba-a855-ef6876337257","resourceVersion":"1869","creationTimestamp":"2024-05-01T04:15:42Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.28.209.199:8443","kubernetes.io/config.hash":"8b70cd8d31103a1cfca45e9856766786","kubernetes.io/config.mirror":"8b70cd8d31103a1cfca45e9856766786","kubernetes.io/config.seen":"2024-05-01T04:15:36.865099961Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T04:15:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7705 chars]
	I0501 04:16:49.358796    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:49.358796    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:49.358880    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:49.358880    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:49.361667    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:49.361667    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:49.361667    4352 round_trippers.go:580]     Audit-Id: ed5e001c-d640-4349-8945-58c4c6ba5b0e
	I0501 04:16:49.361667    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:49.361920    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:49.361920    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:49.361920    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:49.361920    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:49 GMT
	I0501 04:16:49.361920    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:49.361920    4352 pod_ready.go:92] pod "kube-apiserver-multinode-289800" in "kube-system" namespace has status "Ready":"True"
	I0501 04:16:49.361920    4352 pod_ready.go:81] duration metric: took 8.8909ms for pod "kube-apiserver-multinode-289800" in "kube-system" namespace to be "Ready" ...
	I0501 04:16:49.361920    4352 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-289800" in "kube-system" namespace to be "Ready" ...
	I0501 04:16:49.361920    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-289800
	I0501 04:16:49.361920    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:49.361920    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:49.361920    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:49.364649    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:49.365660    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:49.365660    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:49.365660    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:49.365660    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:49.365660    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:49 GMT
	I0501 04:16:49.365737    4352 round_trippers.go:580]     Audit-Id: 8646db6a-9c0a-43b7-a07e-1216025e6d77
	I0501 04:16:49.365737    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:49.366135    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-289800","namespace":"kube-system","uid":"fd3e5c6f-55cb-47c8-b0bc-c9b0dbe3b318","resourceVersion":"1851","creationTimestamp":"2024-05-01T03:52:15Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a17001fd2508d58fea9b1ae465b65254","kubernetes.io/config.mirror":"a17001fd2508d58fea9b1ae465b65254","kubernetes.io/config.seen":"2024-05-01T03:52:15.688763845Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7475 chars]
	I0501 04:16:49.366804    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:49.366804    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:49.366804    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:49.366865    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:49.369511    4352 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0501 04:16:49.369511    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:49.369511    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:49 GMT
	I0501 04:16:49.369511    4352 round_trippers.go:580]     Audit-Id: ced06db6-05fd-4fa1-b25d-1a2b3ee345de
	I0501 04:16:49.369823    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:49.369823    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:49.369823    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:49.369823    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:49.370161    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:49.370603    4352 pod_ready.go:92] pod "kube-controller-manager-multinode-289800" in "kube-system" namespace has status "Ready":"True"
	I0501 04:16:49.370651    4352 pod_ready.go:81] duration metric: took 8.7312ms for pod "kube-controller-manager-multinode-289800" in "kube-system" namespace to be "Ready" ...
	I0501 04:16:49.370651    4352 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bp9zx" in "kube-system" namespace to be "Ready" ...
	I0501 04:16:49.524357    4352 request.go:629] Waited for 153.4057ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bp9zx
	I0501 04:16:49.524480    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bp9zx
	I0501 04:16:49.524480    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:49.524480    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:49.524480    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:49.528150    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:49.528150    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:49.528150    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:49.528150    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:49.528150    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:49 GMT
	I0501 04:16:49.528150    4352 round_trippers.go:580]     Audit-Id: 448f78a3-3ad6-4831-b469-33fd74811230
	I0501 04:16:49.528150    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:49.528150    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:49.529102    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bp9zx","generateName":"kube-proxy-","namespace":"kube-system","uid":"aba82e50-b8f8-40b4-b08a-6d045314d6b6","resourceVersion":"1834","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"342b26dc-6828-4478-b155-fee8821fc15e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"342b26dc-6828-4478-b155-fee8821fc15e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6039 chars]
	I0501 04:16:49.726350    4352 request.go:629] Waited for 196.4559ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:49.726350    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:49.726350    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:49.726350    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:49.726350    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:49.731133    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:49.731133    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:49.731133    4352 round_trippers.go:580]     Audit-Id: da624bcc-5370-43bc-9483-bce41ae6ad1d
	I0501 04:16:49.731133    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:49.731133    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:49.731133    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:49.731133    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:49.731133    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:49 GMT
	I0501 04:16:49.731776    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:49.731776    4352 pod_ready.go:92] pod "kube-proxy-bp9zx" in "kube-system" namespace has status "Ready":"True"
	I0501 04:16:49.732330    4352 pod_ready.go:81] duration metric: took 361.1218ms for pod "kube-proxy-bp9zx" in "kube-system" namespace to be "Ready" ...
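
[Editor's note] The "Waited for ... due to client-side throttling, not priority and fairness" lines in this wait come from client-go's token-bucket rate limiter, which delays requests once the client exceeds its configured QPS (when unset on a rest.Config, the defaults are 5 QPS with a burst of 10). A minimal sketch of where those knobs live follows; the QPS/Burst values are illustrative, not minikube's settings.

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a default kubeconfig; illustrative only.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // requests per second before the limiter starts delaying
	cfg.Burst = 100 // short bursts allowed above QPS
	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		panic(err)
	}
}
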
	I0501 04:16:49.732330    4352 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g8mbm" in "kube-system" namespace to be "Ready" ...
	I0501 04:16:49.929639    4352 request.go:629] Waited for 197.0521ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g8mbm
	I0501 04:16:49.929844    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g8mbm
	I0501 04:16:49.929844    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:49.929907    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:49.929929    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:49.934273    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:49.934273    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:49.934273    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:49.934686    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:49.934686    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:49.934686    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:49 GMT
	I0501 04:16:49.934686    4352 round_trippers.go:580]     Audit-Id: b1b182d0-ac4d-416b-8348-8854216aeac0
	I0501 04:16:49.934686    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:49.935287    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g8mbm","generateName":"kube-proxy-","namespace":"kube-system","uid":"ef0e1817-6682-4b8f-affa-c10021247006","resourceVersion":"1723","creationTimestamp":"2024-05-01T04:00:13Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"342b26dc-6828-4478-b155-fee8821fc15e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T04:00:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"342b26dc-6828-4478-b155-fee8821fc15e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6067 chars]
	I0501 04:16:50.130596    4352 request.go:629] Waited for 194.3651ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.209.199:8443/api/v1/nodes/multinode-289800-m03
	I0501 04:16:50.130692    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800-m03
	I0501 04:16:50.130692    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:50.130692    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:50.130692    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:50.135295    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:50.135295    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:50.135295    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:50.135295    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:50.135295    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:50.135295    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:50 GMT
	I0501 04:16:50.135295    4352 round_trippers.go:580]     Audit-Id: c18cd5b5-567b-46e6-a05c-1003a8919fae
	I0501 04:16:50.135295    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:50.135426    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800-m03","uid":"851df850-b222-4fa2-aca7-3694c4d89ab5","resourceVersion":"1905","creationTimestamp":"2024-05-01T04:11:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_01T04_11_04_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-01T04:11:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4400 chars]
	I0501 04:16:50.135964    4352 pod_ready.go:97] node "multinode-289800-m03" hosting pod "kube-proxy-g8mbm" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-289800-m03" has status "Ready":"Unknown"
	I0501 04:16:50.136013    4352 pod_ready.go:81] duration metric: took 403.6799ms for pod "kube-proxy-g8mbm" in "kube-system" namespace to be "Ready" ...
	E0501 04:16:50.136013    4352 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-289800-m03" hosting pod "kube-proxy-g8mbm" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-289800-m03" has status "Ready":"Unknown"
	I0501 04:16:50.136013    4352 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rlzp8" in "kube-system" namespace to be "Ready" ...
	I0501 04:16:50.334046    4352 request.go:629] Waited for 197.6293ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rlzp8
	I0501 04:16:50.334137    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rlzp8
	I0501 04:16:50.334137    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:50.334292    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:50.334292    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:50.337674    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:50.337674    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:50.337674    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:50.337674    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:50 GMT
	I0501 04:16:50.337674    4352 round_trippers.go:580]     Audit-Id: 0a238c3a-6896-4a17-8f27-02c106c4e45b
	I0501 04:16:50.337674    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:50.337674    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:50.337674    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:50.338638    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rlzp8","generateName":"kube-proxy-","namespace":"kube-system","uid":"b37d8d5d-a7cb-4848-a8a2-11d9761e08d6","resourceVersion":"1957","creationTimestamp":"2024-05-01T03:55:27Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"342b26dc-6828-4478-b155-fee8821fc15e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:55:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"342b26dc-6828-4478-b155-fee8821fc15e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6067 chars]
	I0501 04:16:50.535485    4352 request.go:629] Waited for 195.8126ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.209.199:8443/api/v1/nodes/multinode-289800-m02
	I0501 04:16:50.535603    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800-m02
	I0501 04:16:50.535603    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:50.535603    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:50.535603    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:50.539440    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:16:50.539669    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:50.539669    4352 round_trippers.go:580]     Audit-Id: de58e7b6-2272-48cc-80c4-c7bf12d53af9
	I0501 04:16:50.539669    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:50.539669    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:50.539669    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:50.539669    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:50.539669    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:50 GMT
	I0501 04:16:50.540066    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800-m02","uid":"9e630c9e-9cc6-42af-89de-135fca044670","resourceVersion":"1961","creationTimestamp":"2024-05-01T03:55:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_01T03_55_27_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:55:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4583 chars]
	I0501 04:16:50.541332    4352 pod_ready.go:97] node "multinode-289800-m02" hosting pod "kube-proxy-rlzp8" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-289800-m02" has status "Ready":"Unknown"
	I0501 04:16:50.541332    4352 pod_ready.go:81] duration metric: took 405.316ms for pod "kube-proxy-rlzp8" in "kube-system" namespace to be "Ready" ...
	E0501 04:16:50.541332    4352 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-289800-m02" hosting pod "kube-proxy-rlzp8" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-289800-m02" has status "Ready":"Unknown"
	I0501 04:16:50.541332    4352 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-289800" in "kube-system" namespace to be "Ready" ...
	I0501 04:16:50.721420    4352 request.go:629] Waited for 179.9116ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-289800
	I0501 04:16:50.721781    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-289800
	I0501 04:16:50.722009    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:50.722054    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:50.722093    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:50.727766    4352 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 04:16:50.727766    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:50.727766    4352 round_trippers.go:580]     Audit-Id: dab53051-f4be-4d88-b09d-de99470205d1
	I0501 04:16:50.727766    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:50.727766    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:50.727766    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:50.727766    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:50.727766    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:50 GMT
	I0501 04:16:50.727766    4352 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-289800","namespace":"kube-system","uid":"c7518f03-993b-432f-b742-8805dd2167a7","resourceVersion":"1859","creationTimestamp":"2024-05-01T03:52:15Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"44d7830a7c97b8c7e460c0508d02be4e","kubernetes.io/config.mirror":"44d7830a7c97b8c7e460c0508d02be4e","kubernetes.io/config.seen":"2024-05-01T03:52:15.688771544Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5205 chars]
	I0501 04:16:50.921262    4352 request.go:629] Waited for 192.5213ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:50.921547    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes/multinode-289800
	I0501 04:16:50.921635    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:50.921635    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:50.921635    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:50.926030    4352 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0501 04:16:50.926030    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:50.926030    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:50.926030    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:50.926287    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:50 GMT
	I0501 04:16:50.926287    4352 round_trippers.go:580]     Audit-Id: ba356631-09f9-4fbd-ac9c-00af14bd5065
	I0501 04:16:50.926287    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:50.926287    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:50.926531    4352 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:12Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0501 04:16:50.927168    4352 pod_ready.go:92] pod "kube-scheduler-multinode-289800" in "kube-system" namespace has status "Ready":"True"
	I0501 04:16:50.927168    4352 pod_ready.go:81] duration metric: took 385.8333ms for pod "kube-scheduler-multinode-289800" in "kube-system" namespace to be "Ready" ...
	I0501 04:16:50.927168    4352 pod_ready.go:38] duration metric: took 34.1328801s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
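
The pod_ready lines above trace a plain poll-and-check loop: GET the pod, read its Ready condition, and retry (under client-side throttling) until the 6m0s budget expires, skipping pods whose hosting node is not Ready. A minimal sketch of that pattern with client-go, assuming a kubeconfig at the default path (illustrative only, not minikube's actual pod_ready implementation):

    // Illustrative sketch, not minikube's code: poll a pod until its
    // Ready condition is True or the timeout expires.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // treat errors as transient and keep polling
    			}
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(config)
    	if err := waitPodReady(context.Background(), cs, "kube-system", "kube-proxy-bp9zx", 6*time.Minute); err != nil {
    		fmt.Println("pod never became Ready:", err)
    	}
    }
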
	I0501 04:16:50.927168    4352 api_server.go:52] waiting for apiserver process to appear ...
	I0501 04:16:50.938181    4352 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0501 04:16:50.965048    4352 command_runner.go:130] > 18cd30f3ad28
	I0501 04:16:50.965141    4352 logs.go:276] 1 containers: [18cd30f3ad28]
	I0501 04:16:50.978908    4352 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0501 04:16:51.004860    4352 command_runner.go:130] > 34892fdb6898
	I0501 04:16:51.005091    4352 logs.go:276] 1 containers: [34892fdb6898]
	I0501 04:16:51.017307    4352 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0501 04:16:51.044094    4352 command_runner.go:130] > b8a9b405d76b
	I0501 04:16:51.044170    4352 command_runner.go:130] > 8a0208aeafcf
	I0501 04:16:51.044170    4352 command_runner.go:130] > 15c4496e3a9f
	I0501 04:16:51.044170    4352 command_runner.go:130] > 3e8d5ff9a9e4
	I0501 04:16:51.044170    4352 logs.go:276] 4 containers: [b8a9b405d76b 8a0208aeafcf 15c4496e3a9f 3e8d5ff9a9e4]
	I0501 04:16:51.055977    4352 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0501 04:16:51.080738    4352 command_runner.go:130] > eaf69fce5ee3
	I0501 04:16:51.080738    4352 command_runner.go:130] > 06f1f84bfde1
	I0501 04:16:51.080738    4352 logs.go:276] 2 containers: [eaf69fce5ee3 06f1f84bfde1]
	I0501 04:16:51.090727    4352 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0501 04:16:51.117757    4352 command_runner.go:130] > 3efcc92f817e
	I0501 04:16:51.117757    4352 command_runner.go:130] > 502684407b0c
	I0501 04:16:51.117757    4352 logs.go:276] 2 containers: [3efcc92f817e 502684407b0c]
	I0501 04:16:51.130211    4352 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0501 04:16:51.160199    4352 command_runner.go:130] > 66a1b89e6733
	I0501 04:16:51.160199    4352 command_runner.go:130] > 4b62556f40be
	I0501 04:16:51.160199    4352 logs.go:276] 2 containers: [66a1b89e6733 4b62556f40be]
	I0501 04:16:51.173257    4352 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0501 04:16:51.199011    4352 command_runner.go:130] > b7cae3f6b88b
	I0501 04:16:51.199121    4352 command_runner.go:130] > 6d5f881ef398
	I0501 04:16:51.199121    4352 logs.go:276] 2 containers: [b7cae3f6b88b 6d5f881ef398]
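
Each logs.go:276 summary above is the result of one docker ps -a --filter=name=k8s_<component> --format={{.ID}} invocation run over SSH inside the node. A self-contained sketch of the same enumeration executed locally (the containerIDs helper name is hypothetical):

    // Sketch: list container IDs for a kubeadm component the way the
    // log's `docker ps` invocations do (assumed helper, not minikube's code).
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	// One short ID per line; Fields drops blanks for stopped daemons.
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	ids, err := containerIDs("kube-apiserver")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(ids) // e.g. [18cd30f3ad28]
    }
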
	I0501 04:16:51.199121    4352 logs.go:123] Gathering logs for etcd [34892fdb6898] ...
	I0501 04:16:51.199121    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34892fdb6898"
	I0501 04:16:51.230530    4352 command_runner.go:130] ! {"level":"warn","ts":"2024-05-01T04:15:38.997417Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0501 04:16:51.231068    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:38.998475Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.28.209.199:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.28.209.199:2380","--initial-cluster=multinode-289800=https://172.28.209.199:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.28.209.199:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.28.209.199:2380","--name=multinode-289800","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0501 04:16:51.231127    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:38.998558Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0501 04:16:51.231178    4352 command_runner.go:130] ! {"level":"warn","ts":"2024-05-01T04:15:38.998588Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0501 04:16:51.231178    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:38.998599Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.28.209.199:2380"]}
	I0501 04:16:51.231251    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:38.998626Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0501 04:16:51.231305    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.006405Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.28.209.199:2379"]}
	I0501 04:16:51.231410    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.007658Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-289800","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.28.209.199:2380"],"listen-peer-urls":["https://172.28.209.199:2380"],"advertise-client-urls":["https://172.28.209.199:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.28.209.199:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0501 04:16:51.231410    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.030589Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"21.951987ms"}
	I0501 04:16:51.231476    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.081537Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0501 04:16:51.231476    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.104039Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"d720844a1e03b483","local-member-id":"fe483b81e7b7d166","commit-index":2020}
	I0501 04:16:51.231542    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.104878Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 switched to configuration voters=()"}
	I0501 04:16:51.231542    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.105251Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 became follower at term 2"}
	I0501 04:16:51.231542    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.105519Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft fe483b81e7b7d166 [peers: [], term: 2, commit: 2020, applied: 0, lastindex: 2020, lastterm: 2]"}
	I0501 04:16:51.231605    4352 command_runner.go:130] ! {"level":"warn","ts":"2024-05-01T04:15:39.121672Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0501 04:16:51.231605    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.127575Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1352}
	I0501 04:16:51.231605    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.132217Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1744}
	I0501 04:16:51.231675    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.144206Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0501 04:16:51.231724    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.15993Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"fe483b81e7b7d166","timeout":"7s"}
	I0501 04:16:51.231724    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.160468Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"fe483b81e7b7d166"}
	I0501 04:16:51.231724    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.160545Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"fe483b81e7b7d166","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	I0501 04:16:51.231724    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.16402Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	I0501 04:16:51.231724    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.165851Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0501 04:16:51.231819    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.166004Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0501 04:16:51.231819    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.166021Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0501 04:16:51.231819    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.169808Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 switched to configuration voters=(18322960513081266534)"}
	I0501 04:16:51.231886    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.1699Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d720844a1e03b483","local-member-id":"fe483b81e7b7d166","added-peer-id":"fe483b81e7b7d166","added-peer-peer-urls":["https://172.28.209.152:2380"]}
	I0501 04:16:51.231928    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.172064Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d720844a1e03b483","local-member-id":"fe483b81e7b7d166","cluster-version":"3.5"}
	I0501 04:16:51.231950    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.172365Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0501 04:16:51.231994    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.184058Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0501 04:16:51.232051    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.184564Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"fe483b81e7b7d166","initial-advertise-peer-urls":["https://172.28.209.199:2380"],"listen-peer-urls":["https://172.28.209.199:2380"],"advertise-client-urls":["https://172.28.209.199:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.28.209.199:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0501 04:16:51.232051    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.184741Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0501 04:16:51.232114    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.185843Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.28.209.199:2380"}
	I0501 04:16:51.232114    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.185973Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.28.209.199:2380"}
	I0501 04:16:51.232114    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.708419Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 is starting a new election at term 2"}
	I0501 04:16:51.232180    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.70848Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 became pre-candidate at term 2"}
	I0501 04:16:51.232180    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.708514Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 received MsgPreVoteResp from fe483b81e7b7d166 at term 2"}
	I0501 04:16:51.232180    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.70853Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 became candidate at term 3"}
	I0501 04:16:51.232246    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.708552Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 received MsgVoteResp from fe483b81e7b7d166 at term 3"}
	I0501 04:16:51.232246    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.708562Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 became leader at term 3"}
	I0501 04:16:51.232304    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.708576Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fe483b81e7b7d166 elected leader fe483b81e7b7d166 at term 3"}
	I0501 04:16:51.232304    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.716912Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"fe483b81e7b7d166","local-member-attributes":"{Name:multinode-289800 ClientURLs:[https://172.28.209.199:2379]}","request-path":"/0/members/fe483b81e7b7d166/attributes","cluster-id":"d720844a1e03b483","publish-timeout":"7s"}
	I0501 04:16:51.232304    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.717064Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0501 04:16:51.232444    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.724343Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0501 04:16:51.232484    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.729592Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.28.209.199:2379"}
	I0501 04:16:51.232531    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.730744Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0501 04:16:51.232531    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.731057Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0501 04:16:51.232589    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.732147Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
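
The etcd log above shows a single-member cluster restarting: it recovers the WAL, comes back as a follower at term 2, then pre-votes and elects itself leader at term 3 before serving clients. One way to confirm the leader and raft term afterwards is the etcd v3 client's Status call; a sketch assuming go.etcd.io/etcd/client/v3 (TLS setup omitted for brevity, though this server requires the client certs under /var/lib/minikube/certs/etcd/):

    // Sketch: query member status to read the leader ID and raft term
    // seen in the election log lines (illustrative only).
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	clientv3 "go.etcd.io/etcd/client/v3"
    )

    func main() {
    	cli, err := clientv3.New(clientv3.Config{
    		Endpoints:   []string{"https://127.0.0.1:2379"},
    		DialTimeout: 5 * time.Second,
    		// TLS config omitted; this particular server rejects
    		// clients without the etcd client certificates.
    	})
    	if err != nil {
    		panic(err)
    	}
    	defer cli.Close()

    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()
    	st, err := cli.Status(ctx, "https://127.0.0.1:2379")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("leader=%x raftTerm=%d\n", st.Leader, st.RaftTerm)
    }
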
	I0501 04:16:51.245319    4352 logs.go:123] Gathering logs for coredns [b8a9b405d76b] ...
	I0501 04:16:51.245412    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8a9b405d76b"
	I0501 04:16:51.275786    4352 command_runner.go:130] > .:53
	I0501 04:16:51.275786    4352 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 93f4fe825695a41d5751760d66496ca9593f0bf8b9ec4239a6fbc33c30b6f070cfe5a066c5977052ab0ea80abbafa6083758e385108cb741ae48dc4b780ff0f9
	I0501 04:16:51.275786    4352 command_runner.go:130] > CoreDNS-1.11.1
	I0501 04:16:51.275786    4352 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0501 04:16:51.275786    4352 command_runner.go:130] > [INFO] 127.0.0.1:40469 - 32708 "HINFO IN 1085250392681766432.1461243850492468212. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.135567722s
	I0501 04:16:51.275786    4352 logs.go:123] Gathering logs for kube-proxy [3efcc92f817e] ...
	I0501 04:16:51.275786    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3efcc92f817e"
	I0501 04:16:51.303222    4352 command_runner.go:130] ! I0501 04:15:45.132138       1 server_linux.go:69] "Using iptables proxy"
	I0501 04:16:51.303222    4352 command_runner.go:130] ! I0501 04:15:45.231202       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.28.209.199"]
	I0501 04:16:51.303222    4352 command_runner.go:130] ! I0501 04:15:45.502838       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0501 04:16:51.303803    4352 command_runner.go:130] ! I0501 04:15:45.506945       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0501 04:16:51.303803    4352 command_runner.go:130] ! I0501 04:15:45.506980       1 server_linux.go:165] "Using iptables Proxier"
	I0501 04:16:51.303856    4352 command_runner.go:130] ! I0501 04:15:45.527138       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0501 04:16:51.303880    4352 command_runner.go:130] ! I0501 04:15:45.530735       1 server.go:872] "Version info" version="v1.30.0"
	I0501 04:16:51.303880    4352 command_runner.go:130] ! I0501 04:15:45.530796       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:16:51.303923    4352 command_runner.go:130] ! I0501 04:15:45.533247       1 config.go:192] "Starting service config controller"
	I0501 04:16:51.303923    4352 command_runner.go:130] ! I0501 04:15:45.547850       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0501 04:16:51.303982    4352 command_runner.go:130] ! I0501 04:15:45.533551       1 config.go:101] "Starting endpoint slice config controller"
	I0501 04:16:51.303982    4352 command_runner.go:130] ! I0501 04:15:45.549105       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0501 04:16:51.304046    4352 command_runner.go:130] ! I0501 04:15:45.550003       1 config.go:319] "Starting node config controller"
	I0501 04:16:51.304046    4352 command_runner.go:130] ! I0501 04:15:45.550016       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0501 04:16:51.304046    4352 command_runner.go:130] ! I0501 04:15:45.650245       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0501 04:16:51.304102    4352 command_runner.go:130] ! I0501 04:15:45.650488       1 shared_informer.go:320] Caches are synced for node config
	I0501 04:16:51.304102    4352 command_runner.go:130] ! I0501 04:15:45.650691       1 shared_informer.go:320] Caches are synced for service config
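
The kube-proxy lines follow client-go's standard shared-informer start-up: each controller logs "Waiting for caches to sync" and then "Caches are synced" once the initial LIST has populated the local cache. The pattern, sketched (illustrative, not kube-proxy's source):

    // Sketch of the informer start-up pattern behind the
    // "Waiting for caches to sync" / "Caches are synced" lines.
    package main

    import (
    	"fmt"
    	"time"

    	"k8s.io/client-go/informers"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/cache"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(config)

    	factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
    	svcInformer := factory.Core().V1().Services().Informer()

    	stop := make(chan struct{})
    	defer close(stop)
    	factory.Start(stop)

    	// Block until the initial LIST has populated the local cache.
    	if !cache.WaitForCacheSync(stop, svcInformer.HasSynced) {
    		panic("cache never synced")
    	}
    	fmt.Println("caches are synced for service config")
    }
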
	I0501 04:16:51.306103    4352 logs.go:123] Gathering logs for Docker ...
	I0501 04:16:51.306223    4352 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0501 04:16:51.346738    4352 command_runner.go:130] > May 01 04:14:08 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0501 04:16:51.346817    4352 command_runner.go:130] > May 01 04:14:08 minikube cri-dockerd[222]: time="2024-05-01T04:14:08Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0501 04:16:51.346817    4352 command_runner.go:130] > May 01 04:14:08 minikube cri-dockerd[222]: time="2024-05-01T04:14:08Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0501 04:16:51.346817    4352 command_runner.go:130] > May 01 04:14:08 minikube cri-dockerd[222]: time="2024-05-01T04:14:08Z" level=info msg="Start docker client with request timeout 0s"
	I0501 04:16:51.346972    4352 command_runner.go:130] > May 01 04:14:08 minikube cri-dockerd[222]: time="2024-05-01T04:14:08Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0501 04:16:51.346972    4352 command_runner.go:130] > May 01 04:14:09 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0501 04:16:51.346972    4352 command_runner.go:130] > May 01 04:14:09 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0501 04:16:51.346972    4352 command_runner.go:130] > May 01 04:14:09 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0501 04:16:51.346972    4352 command_runner.go:130] > May 01 04:14:11 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0501 04:16:51.347083    4352 command_runner.go:130] > May 01 04:14:11 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0501 04:16:51.347211    4352 command_runner.go:130] > May 01 04:14:11 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0501 04:16:51.347283    4352 command_runner.go:130] > May 01 04:14:11 minikube cri-dockerd[414]: time="2024-05-01T04:14:11Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0501 04:16:51.347340    4352 command_runner.go:130] > May 01 04:14:11 minikube cri-dockerd[414]: time="2024-05-01T04:14:11Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0501 04:16:51.347340    4352 command_runner.go:130] > May 01 04:14:11 minikube cri-dockerd[414]: time="2024-05-01T04:14:11Z" level=info msg="Start docker client with request timeout 0s"
	I0501 04:16:51.347415    4352 command_runner.go:130] > May 01 04:14:11 minikube cri-dockerd[414]: time="2024-05-01T04:14:11Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0501 04:16:51.347496    4352 command_runner.go:130] > May 01 04:14:11 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0501 04:16:51.347540    4352 command_runner.go:130] > May 01 04:14:11 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0501 04:16:51.347540    4352 command_runner.go:130] > May 01 04:14:11 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0501 04:16:51.347581    4352 command_runner.go:130] > May 01 04:14:13 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0501 04:16:51.347599    4352 command_runner.go:130] > May 01 04:14:13 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0501 04:16:51.347599    4352 command_runner.go:130] > May 01 04:14:13 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0501 04:16:51.347599    4352 command_runner.go:130] > May 01 04:14:13 minikube cri-dockerd[423]: time="2024-05-01T04:14:13Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0501 04:16:51.347599    4352 command_runner.go:130] > May 01 04:14:13 minikube cri-dockerd[423]: time="2024-05-01T04:14:13Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0501 04:16:51.347675    4352 command_runner.go:130] > May 01 04:14:13 minikube cri-dockerd[423]: time="2024-05-01T04:14:13Z" level=info msg="Start docker client with request timeout 0s"
	I0501 04:16:51.347867    4352 command_runner.go:130] > May 01 04:14:13 minikube cri-dockerd[423]: time="2024-05-01T04:14:13Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0501 04:16:51.347941    4352 command_runner.go:130] > May 01 04:14:13 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0501 04:16:51.348012    4352 command_runner.go:130] > May 01 04:14:13 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0501 04:16:51.348056    4352 command_runner.go:130] > May 01 04:14:13 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0501 04:16:51.348128    4352 command_runner.go:130] > May 01 04:14:16 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0501 04:16:51.348174    4352 command_runner.go:130] > May 01 04:14:16 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0501 04:16:51.348174    4352 command_runner.go:130] > May 01 04:14:16 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0501 04:16:51.348174    4352 command_runner.go:130] > May 01 04:14:16 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0501 04:16:51.348174    4352 command_runner.go:130] > May 01 04:14:16 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
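
The cri-docker failures above are a start-ordering race: cri-dockerd starts before dockerd's socket exists, exits fatally, and systemd retries until the start-rate limit trips ("Start request repeated too quickly"), after which it stops retrying; the service only comes up once dockerd is running, as the later multinode-289800 entries show. The give-up behavior is governed by unit directives along these lines (an illustrative drop-in, not the actual cri-docker unit file):

    # Illustrative systemd drop-in, not the shipped cri-docker unit:
    # after StartLimitBurst failed starts within StartLimitIntervalSec,
    # systemd logs "Start request repeated too quickly" and gives up.
    [Unit]
    StartLimitIntervalSec=60
    StartLimitBurst=3

    [Service]
    Restart=on-failure
    RestartSec=2
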
	I0501 04:16:51.348214    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 systemd[1]: Starting Docker Application Container Engine...
	I0501 04:16:51.348214    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[651]: time="2024-05-01T04:14:59.653438562Z" level=info msg="Starting up"
	I0501 04:16:51.348299    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[651]: time="2024-05-01T04:14:59.657791992Z" level=info msg="containerd not running, starting managed containerd"
	I0501 04:16:51.348332    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[651]: time="2024-05-01T04:14:59.663198880Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=657
	I0501 04:16:51.348332    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.702542137Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0501 04:16:51.348332    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.732549261Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0501 04:16:51.348412    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.732711054Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0501 04:16:51.348439    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.732864148Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0501 04:16:51.348439    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.732947945Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0501 04:16:51.348439    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.734019203Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0501 04:16:51.348521    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.734463486Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0501 04:16:51.348546    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.735002764Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0501 04:16:51.348546    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.735178358Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0501 04:16:51.348666    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.735234755Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0501 04:16:51.348706    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.735254555Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0501 04:16:51.348706    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.735695937Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0501 04:16:51.348777    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.736590002Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0501 04:16:51.348823    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.739236298Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0501 04:16:51.348862    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.739286896Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0501 04:16:51.348962    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.739479489Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0501 04:16:51.349004    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.739575785Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0501 04:16:51.349078    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.740111064Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0501 04:16:51.349104    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.740186861Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0501 04:16:51.349104    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.740203361Z" level=info msg="metadata content store policy set" policy=shared
	I0501 04:16:51.349104    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.747848861Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0501 04:16:51.349104    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.747973456Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0501 04:16:51.349188    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748003155Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0501 04:16:51.349188    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748021254Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0501 04:16:51.349234    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748087351Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0501 04:16:51.349234    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748176348Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0501 04:16:51.349290    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748553033Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0501 04:16:51.349314    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748726426Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0501 04:16:51.349314    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748830822Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0501 04:16:51.349387    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748853521Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0501 04:16:51.349414    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748872121Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0501 04:16:51.349414    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748887020Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0501 04:16:51.349480    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748901420Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0501 04:16:51.349480    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748916819Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0501 04:16:51.349480    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748932318Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0501 04:16:51.349480    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748946618Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0501 04:16:51.349480    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748960717Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0501 04:16:51.349480    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748974817Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0501 04:16:51.349480    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748996916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.349480    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749013215Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.349480    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749071613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.349480    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749094412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.349480    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749109411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.349480    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749127511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.349480    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749141410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.349480    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749156310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.349480    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749171209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.349480    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749188008Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.349480    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749210407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.349480    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749227507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.349480    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749241106Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.349480    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749261705Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0501 04:16:51.349480    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749287004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.349480    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749377501Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.349480    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749401900Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0501 04:16:51.349480    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749458198Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0501 04:16:51.350019    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749553894Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0501 04:16:51.350019    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749626691Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0501 04:16:51.350094    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749759886Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0501 04:16:51.350094    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749839283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.350094    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749953278Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0501 04:16:51.350094    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749974077Z" level=info msg="NRI interface is disabled by configuration."
	I0501 04:16:51.350198    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.750421860Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0501 04:16:51.350198    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.750811045Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0501 04:16:51.350198    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.751024636Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0501 04:16:51.350262    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.751103833Z" level=info msg="containerd successfully booted in 0.052926s"
	I0501 04:16:51.350262    4352 command_runner.go:130] > May 01 04:15:00 multinode-289800 dockerd[651]: time="2024-05-01T04:15:00.725111442Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0501 04:16:51.350262    4352 command_runner.go:130] > May 01 04:15:00 multinode-289800 dockerd[651]: time="2024-05-01T04:15:00.993003995Z" level=info msg="Loading containers: start."
	I0501 04:16:51.350325    4352 command_runner.go:130] > May 01 04:15:01 multinode-289800 dockerd[651]: time="2024-05-01T04:15:01.418709237Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0501 04:16:51.350325    4352 command_runner.go:130] > May 01 04:15:01 multinode-289800 dockerd[651]: time="2024-05-01T04:15:01.511990518Z" level=info msg="Loading containers: done."
	I0501 04:16:51.350325    4352 command_runner.go:130] > May 01 04:15:01 multinode-289800 dockerd[651]: time="2024-05-01T04:15:01.539659513Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0501 04:16:51.350392    4352 command_runner.go:130] > May 01 04:15:01 multinode-289800 dockerd[651]: time="2024-05-01T04:15:01.540534438Z" level=info msg="Daemon has completed initialization"
	I0501 04:16:51.350392    4352 command_runner.go:130] > May 01 04:15:01 multinode-289800 dockerd[651]: time="2024-05-01T04:15:01.598935417Z" level=info msg="API listen on [::]:2376"
	I0501 04:16:51.350450    4352 command_runner.go:130] > May 01 04:15:01 multinode-289800 systemd[1]: Started Docker Application Container Engine.
	I0501 04:16:51.350450    4352 command_runner.go:130] > May 01 04:15:01 multinode-289800 dockerd[651]: time="2024-05-01T04:15:01.599463032Z" level=info msg="API listen on /var/run/docker.sock"
	I0501 04:16:51.350450    4352 command_runner.go:130] > May 01 04:15:27 multinode-289800 dockerd[651]: time="2024-05-01T04:15:27.764446334Z" level=info msg="Processing signal 'terminated'"
	I0501 04:16:51.350506    4352 command_runner.go:130] > May 01 04:15:27 multinode-289800 systemd[1]: Stopping Docker Application Container Engine...
	I0501 04:16:51.350506    4352 command_runner.go:130] > May 01 04:15:27 multinode-289800 dockerd[651]: time="2024-05-01T04:15:27.766325752Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0501 04:16:51.350561    4352 command_runner.go:130] > May 01 04:15:27 multinode-289800 dockerd[651]: time="2024-05-01T04:15:27.766547266Z" level=info msg="Daemon shutdown complete"
	I0501 04:16:51.350561    4352 command_runner.go:130] > May 01 04:15:27 multinode-289800 dockerd[651]: time="2024-05-01T04:15:27.766599570Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0501 04:16:51.350614    4352 command_runner.go:130] > May 01 04:15:27 multinode-289800 dockerd[651]: time="2024-05-01T04:15:27.766627071Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0501 04:16:51.350614    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 systemd[1]: docker.service: Deactivated successfully.
	I0501 04:16:51.350614    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 systemd[1]: Stopped Docker Application Container Engine.
	I0501 04:16:51.350614    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 systemd[1]: Starting Docker Application Container Engine...
	I0501 04:16:51.350672    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:28.848356633Z" level=info msg="Starting up"
	I0501 04:16:51.350672    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:28.852105170Z" level=info msg="containerd not running, starting managed containerd"
	I0501 04:16:51.350727    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:28.856097222Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1051
	I0501 04:16:51.350727    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.886653253Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0501 04:16:51.350727    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.918280652Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0501 04:16:51.350821    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.918435561Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0501 04:16:51.350896    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.918674977Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0501 04:16:51.350938    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.918835587Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0501 04:16:51.350938    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.918914392Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0501 04:16:51.350938    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.919007298Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0501 04:16:51.351015    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.919224411Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0501 04:16:51.351015    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.919342019Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0501 04:16:51.351015    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.919363920Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0501 04:16:51.351015    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.919374921Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0501 04:16:51.351015    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.919401422Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0501 04:16:51.351136    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.919522430Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0501 04:16:51.351169    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.922355909Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0501 04:16:51.351169    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.922472116Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0501 04:16:51.351169    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.922606725Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0501 04:16:51.351169    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.922701131Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0501 04:16:51.351169    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.922740333Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0501 04:16:51.351292    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.922844740Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0501 04:16:51.351292    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.922863441Z" level=info msg="metadata content store policy set" policy=shared
	I0501 04:16:51.351330    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.923199662Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0501 04:16:51.351330    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.923345572Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0501 04:16:51.351330    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.923371973Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0501 04:16:51.351406    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.923387074Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0501 04:16:51.351406    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.923416076Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0501 04:16:51.351406    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.923482380Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0501 04:16:51.351508    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.923717595Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0501 04:16:51.351508    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.923914208Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0501 04:16:51.351562    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924012314Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0501 04:16:51.351607    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924084218Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0501 04:16:51.351659    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924103120Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0501 04:16:51.351659    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924116520Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0501 04:16:51.351659    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924137922Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0501 04:16:51.351738    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924154823Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0501 04:16:51.351738    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924172824Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0501 04:16:51.351825    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924195925Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0501 04:16:51.351880    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924208026Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0501 04:16:51.351905    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924219327Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0501 04:16:51.351905    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924257229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.352053    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924272330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.352090    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924285031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.352115    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924297632Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.352115    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924325534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.352115    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924337534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.352191    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924348235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.352218    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924360536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.352218    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924373137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.352218    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924390538Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.352218    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924403039Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.352297    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924414139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.352297    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924426140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.352352    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924440741Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0501 04:16:51.352392    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924459642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.352537    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924475143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.352616    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924504745Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0501 04:16:51.352642    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924545247Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0501 04:16:51.352642    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924640554Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0501 04:16:51.352714    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924658655Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0501 04:16:51.352740    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924671555Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0501 04:16:51.352740    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924736560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0501 04:16:51.352864    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924890569Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0501 04:16:51.352952    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924908370Z" level=info msg="NRI interface is disabled by configuration."
	I0501 04:16:51.352998    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.925252392Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0501 04:16:51.352998    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.925540810Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0501 04:16:51.352998    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.925606615Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0501 04:16:51.353056    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.925720522Z" level=info msg="containerd successfully booted in 0.040328s"
	I0501 04:16:51.353056    4352 command_runner.go:130] > May 01 04:15:29 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:29.902259635Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0501 04:16:51.353056    4352 command_runner.go:130] > May 01 04:15:29 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:29.938734241Z" level=info msg="Loading containers: start."
	I0501 04:16:51.353164    4352 command_runner.go:130] > May 01 04:15:30 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:30.252276255Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0501 04:16:51.353247    4352 command_runner.go:130] > May 01 04:15:30 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:30.346319398Z" level=info msg="Loading containers: done."
	I0501 04:16:51.353299    4352 command_runner.go:130] > May 01 04:15:30 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:30.374198460Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0501 04:16:51.353299    4352 command_runner.go:130] > May 01 04:15:30 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:30.374439776Z" level=info msg="Daemon has completed initialization"
	I0501 04:16:51.353299    4352 command_runner.go:130] > May 01 04:15:30 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:30.424572544Z" level=info msg="API listen on [::]:2376"
	I0501 04:16:51.353380    4352 command_runner.go:130] > May 01 04:15:30 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:30.424740154Z" level=info msg="API listen on /var/run/docker.sock"
	I0501 04:16:51.353419    4352 command_runner.go:130] > May 01 04:15:30 multinode-289800 systemd[1]: Started Docker Application Container Engine.
	I0501 04:16:51.353419    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0501 04:16:51.353459    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0501 04:16:51.353459    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0501 04:16:51.353459    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Start docker client with request timeout 0s"
	I0501 04:16:51.353514    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0501 04:16:51.353514    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Loaded network plugin cni"
	I0501 04:16:51.353562    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0501 04:16:51.353562    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0501 04:16:51.353622    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0501 04:16:51.353675    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0501 04:16:51.353694    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Start cri-dockerd grpc backend"
	I0501 04:16:51.353725    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0501 04:16:51.353725    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:37Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-8w9hq_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"9d509d032dc607c6f771d62e39b125d9ec4ef121fdbac0798c929fe3f1662c88\""
	I0501 04:16:51.353725    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:37Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-fc5497c4f-cc6mk_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"79bf9ebb58e36ddfba4654e8de212598f75bb256849f4fa384c80d54954f68f5\""
	I0501 04:16:51.353725    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:37Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-x9zrw_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"baf9e690eb533d1d1d65dee3905f907946c145ab490fd4e62c3d724a0ba12193\""
	I0501 04:16:51.353725    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:37.812954162Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:51.353725    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:37.813140474Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:51.353725    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:37.813251281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.353725    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:37.813750813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.353725    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:37.908552604Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:51.353725    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:37.908932028Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:51.353725    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:37.908977330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.353725    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:37.909354354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.353725    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a8e27176eab83655d3f2a52c63326669ef8c796c68155930f53f421789d826f1/resolv.conf as [nameserver 172.28.208.1]"
	I0501 04:16:51.353725    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.022633513Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:51.354271    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.022720619Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:51.354271    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.022735220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.354271    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.024008700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.354271    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.032046108Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:51.354390    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.032104212Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:51.354390    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.032117713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.354390    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.032205718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.354463    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3fd53aa8d8f5d6402b604adf1c8c8ae2b5a8c80b90e94152f45e7cb16a71fe46/resolv.conf as [nameserver 172.28.208.1]"
	I0501 04:16:51.354496    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/51e331e75da779107616d5efa0d497152d9c85407f1c172c9ae536bcc2b22bad/resolv.conf as [nameserver 172.28.208.1]"
	I0501 04:16:51.354546    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6e076eed49263cec5b0b06bbaa425cab2bf4a4b0a05e6dfa37993b20dff5ed93/resolv.conf as [nameserver 172.28.208.1]"
	I0501 04:16:51.354577    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.361204210Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:51.354577    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.366294631Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:51.354577    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.366382437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.354577    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.366929671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.354577    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.427356590Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:51.354577    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.427966129Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:51.354577    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.428178542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.354577    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.428971092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.354577    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.563334483Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:51.354577    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.563717708Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:51.354577    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.568278296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.354577    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.568462908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.355153    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.619028803Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:51.355153    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.619423228Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:51.355153    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.619676644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.355153    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.620258481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.355153    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:42Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0501 04:16:51.355153    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.647452681Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:51.355153    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.648388440Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:51.355153    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.648417242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.355153    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.648703160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.355153    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.650660084Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:51.355153    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.650945902Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:51.355153    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.652733715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.355153    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.653556567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.355153    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.703188303Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:51.355153    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.703325612Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:51.355153    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.703348713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.355153    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.704951615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.355153    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/65bff4b6a8ae020fee0da9e1a818c4bac4d9a43a831eb7b5550b254c1f181ec7/resolv.conf as [nameserver 172.28.208.1]"
	I0501 04:16:51.355153    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9055d30512df38a5bce19ed5afcfdc450a7bd87a1eb169342c8bc7a42e81666f/resolv.conf as [nameserver 172.28.208.1]"
	I0501 04:16:51.355153    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.160153282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:51.355153    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.160628512Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:51.355153    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.160751120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.355153    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.161166246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.355153    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f79e484da66a15667f79326d8bae0a570ba551fd2e02926fd663a292f6b15752/resolv.conf as [nameserver 172.28.208.1]"
	I0501 04:16:51.355153    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.303671652Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:51.355153    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.303759357Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:51.355153    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.304597710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.355153    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.304856126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.355908    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.623383256Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:51.355908    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.623630372Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.623719877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.624154405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:15 multinode-289800 dockerd[1045]: time="2024-05-01T04:16:15.086534690Z" level=info msg="ignoring event" container=01deddefba52af094ad6ca083f5c2bee336ace05959dec5d34b15a9ba6cc2539 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:15 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:15.087315924Z" level=info msg="shim disconnected" id=01deddefba52af094ad6ca083f5c2bee336ace05959dec5d34b15a9ba6cc2539 namespace=moby
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:15 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:15.087789544Z" level=warning msg="cleaning up after shim disconnected" id=01deddefba52af094ad6ca083f5c2bee336ace05959dec5d34b15a9ba6cc2539 namespace=moby
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:15 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:15.089400515Z" level=info msg="cleaning up dead shim" namespace=moby
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:29 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:29.233206077Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:29 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:29.233350185Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:29 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:29.233373086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:29 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:29.235465402Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.458837761Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.459864323Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.464281891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.464897329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.543149980Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.543283788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.543320690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.543548404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.598181021Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.598854262Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.599065375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.600816581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:16:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ba9a40d190b009b916e22db66996ed829a6cc973db25f55dae89d747629a546b/resolv.conf as [nameserver 172.28.208.1]"
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:16:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2c1e1e1d13f303dcd2ce93f0a883ff4415e684c864a3974a393b2aaba3328348/resolv.conf as [nameserver 172.28.208.1]"
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:16:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b85f507755ab5fd65a5328f5567d969dd5f974c01ee4c5d8e38f03dc6ec900a2/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.282921443Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.283150129Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.283743193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.291296831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.360201124Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.360588900Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.360677995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.361100969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.575166498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.575320589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:51.355966    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.575446381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.357033    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.576248232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:51.357033    4352 command_runner.go:130] > May 01 04:16:51 multinode-289800 dockerd[1045]: 2024/05/01 04:16:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:16:51.357280    4352 command_runner.go:130] > May 01 04:16:51 multinode-289800 dockerd[1045]: 2024/05/01 04:16:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:16:51.357340    4352 command_runner.go:130] > May 01 04:16:51 multinode-289800 dockerd[1045]: 2024/05/01 04:16:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:16:51.390474    4352 logs.go:123] Gathering logs for kube-apiserver [18cd30f3ad28] ...
	I0501 04:16:51.390474    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cd30f3ad28"
	I0501 04:16:51.422930    4352 command_runner.go:130] ! I0501 04:15:39.445795       1 options.go:221] external host was not specified, using 172.28.209.199
	I0501 04:16:51.422930    4352 command_runner.go:130] ! I0501 04:15:39.453956       1 server.go:148] Version: v1.30.0
	I0501 04:16:51.423357    4352 command_runner.go:130] ! I0501 04:15:39.454357       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:16:51.423357    4352 command_runner.go:130] ! I0501 04:15:40.258184       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0501 04:16:51.423357    4352 command_runner.go:130] ! I0501 04:15:40.258591       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0501 04:16:51.423556    4352 command_runner.go:130] ! I0501 04:15:40.260085       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0501 04:16:51.423556    4352 command_runner.go:130] ! I0501 04:15:40.260405       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0501 04:16:51.423556    4352 command_runner.go:130] ! I0501 04:15:40.261810       1 instance.go:299] Using reconciler: lease
	I0501 04:16:51.423556    4352 command_runner.go:130] ! I0501 04:15:40.801281       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0501 04:16:51.423556    4352 command_runner.go:130] ! W0501 04:15:40.801386       1 genericapiserver.go:733] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:51.423556    4352 command_runner.go:130] ! I0501 04:15:41.090803       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0501 04:16:51.423556    4352 command_runner.go:130] ! I0501 04:15:41.091252       1 instance.go:696] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0501 04:16:51.423556    4352 command_runner.go:130] ! I0501 04:15:41.359171       1 instance.go:696] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0501 04:16:51.423556    4352 command_runner.go:130] ! I0501 04:15:41.532740       1 instance.go:696] API group "resource.k8s.io" is not enabled, skipping.
	I0501 04:16:51.423556    4352 command_runner.go:130] ! I0501 04:15:41.570911       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0501 04:16:51.423556    4352 command_runner.go:130] ! W0501 04:15:41.571018       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:51.423556    4352 command_runner.go:130] ! W0501 04:15:41.571046       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0501 04:16:51.423556    4352 command_runner.go:130] ! I0501 04:15:41.571875       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0501 04:16:51.423556    4352 command_runner.go:130] ! W0501 04:15:41.572053       1 genericapiserver.go:733] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:51.423556    4352 command_runner.go:130] ! I0501 04:15:41.573317       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0501 04:16:51.423556    4352 command_runner.go:130] ! I0501 04:15:41.574692       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0501 04:16:51.423556    4352 command_runner.go:130] ! W0501 04:15:41.574726       1 genericapiserver.go:733] Skipping API autoscaling/v2beta1 because it has no resources.
	I0501 04:16:51.423556    4352 command_runner.go:130] ! W0501 04:15:41.574734       1 genericapiserver.go:733] Skipping API autoscaling/v2beta2 because it has no resources.
	I0501 04:16:51.423556    4352 command_runner.go:130] ! I0501 04:15:41.576633       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0501 04:16:51.423556    4352 command_runner.go:130] ! W0501 04:15:41.576726       1 genericapiserver.go:733] Skipping API batch/v1beta1 because it has no resources.
	I0501 04:16:51.423556    4352 command_runner.go:130] ! I0501 04:15:41.577645       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0501 04:16:51.423556    4352 command_runner.go:130] ! W0501 04:15:41.577739       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:51.423556    4352 command_runner.go:130] ! W0501 04:15:41.577748       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0501 04:16:51.423556    4352 command_runner.go:130] ! I0501 04:15:41.578543       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0501 04:16:51.423556    4352 command_runner.go:130] ! W0501 04:15:41.578618       1 genericapiserver.go:733] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:51.423556    4352 command_runner.go:130] ! W0501 04:15:41.578731       1 genericapiserver.go:733] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:51.423556    4352 command_runner.go:130] ! I0501 04:15:41.579623       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0501 04:16:51.423556    4352 command_runner.go:130] ! I0501 04:15:41.582482       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0501 04:16:51.423556    4352 command_runner.go:130] ! W0501 04:15:41.582572       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:51.423556    4352 command_runner.go:130] ! W0501 04:15:41.582581       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0501 04:16:51.423556    4352 command_runner.go:130] ! I0501 04:15:41.583284       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0501 04:16:51.423556    4352 command_runner.go:130] ! W0501 04:15:41.583417       1 genericapiserver.go:733] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:51.423556    4352 command_runner.go:130] ! W0501 04:15:41.583428       1 genericapiserver.go:733] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0501 04:16:51.423556    4352 command_runner.go:130] ! I0501 04:15:41.585084       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0501 04:16:51.423556    4352 command_runner.go:130] ! W0501 04:15:41.585203       1 genericapiserver.go:733] Skipping API policy/v1beta1 because it has no resources.
	I0501 04:16:51.423556    4352 command_runner.go:130] ! I0501 04:15:41.588956       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0501 04:16:51.423556    4352 command_runner.go:130] ! W0501 04:15:41.589055       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:51.423556    4352 command_runner.go:130] ! W0501 04:15:41.589067       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0501 04:16:51.423556    4352 command_runner.go:130] ! I0501 04:15:41.589951       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0501 04:16:51.423556    4352 command_runner.go:130] ! W0501 04:15:41.590056       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:51.423556    4352 command_runner.go:130] ! W0501 04:15:41.590066       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0501 04:16:51.424317    4352 command_runner.go:130] ! I0501 04:15:41.593577       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0501 04:16:51.424317    4352 command_runner.go:130] ! W0501 04:15:41.593674       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:51.424362    4352 command_runner.go:130] ! W0501 04:15:41.593684       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0501 04:16:51.424362    4352 command_runner.go:130] ! I0501 04:15:41.595694       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0501 04:16:51.424362    4352 command_runner.go:130] ! I0501 04:15:41.597680       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0501 04:16:51.424362    4352 command_runner.go:130] ! W0501 04:15:41.597864       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0501 04:16:51.424362    4352 command_runner.go:130] ! W0501 04:15:41.597875       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:51.424362    4352 command_runner.go:130] ! I0501 04:15:41.603955       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0501 04:16:51.424362    4352 command_runner.go:130] ! W0501 04:15:41.604059       1 genericapiserver.go:733] Skipping API apps/v1beta2 because it has no resources.
	I0501 04:16:51.424362    4352 command_runner.go:130] ! W0501 04:15:41.604069       1 genericapiserver.go:733] Skipping API apps/v1beta1 because it has no resources.
	I0501 04:16:51.424362    4352 command_runner.go:130] ! I0501 04:15:41.607445       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0501 04:16:51.424486    4352 command_runner.go:130] ! W0501 04:15:41.607533       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:51.424486    4352 command_runner.go:130] ! W0501 04:15:41.607543       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0501 04:16:51.424534    4352 command_runner.go:130] ! I0501 04:15:41.608797       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0501 04:16:51.424534    4352 command_runner.go:130] ! W0501 04:15:41.608817       1 genericapiserver.go:733] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:51.424871    4352 command_runner.go:130] ! I0501 04:15:41.625599       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0501 04:16:51.425324    4352 command_runner.go:130] ! W0501 04:15:41.625618       1 genericapiserver.go:733] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:51.425515    4352 command_runner.go:130] ! I0501 04:15:42.332139       1 secure_serving.go:213] Serving securely on [::]:8443
	I0501 04:16:51.425573    4352 command_runner.go:130] ! I0501 04:15:42.332337       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0501 04:16:51.425573    4352 command_runner.go:130] ! I0501 04:15:42.332595       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0501 04:16:51.425573    4352 command_runner.go:130] ! I0501 04:15:42.333006       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0501 04:16:51.425642    4352 command_runner.go:130] ! I0501 04:15:42.333577       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0501 04:16:51.425642    4352 command_runner.go:130] ! I0501 04:15:42.333909       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0501 04:16:51.425695    4352 command_runner.go:130] ! I0501 04:15:42.334990       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0501 04:16:51.425695    4352 command_runner.go:130] ! I0501 04:15:42.335027       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0501 04:16:51.425695    4352 command_runner.go:130] ! I0501 04:15:42.335107       1 aggregator.go:163] waiting for initial CRD sync...
	I0501 04:16:51.425744    4352 command_runner.go:130] ! I0501 04:15:42.335378       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0501 04:16:51.425767    4352 command_runner.go:130] ! I0501 04:15:42.335424       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0501 04:16:51.425767    4352 command_runner.go:130] ! I0501 04:15:42.335517       1 available_controller.go:423] Starting AvailableConditionController
	I0501 04:16:51.425805    4352 command_runner.go:130] ! I0501 04:15:42.335533       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0501 04:16:51.425805    4352 command_runner.go:130] ! I0501 04:15:42.335556       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0501 04:16:51.425853    4352 command_runner.go:130] ! I0501 04:15:42.337835       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0501 04:16:51.425853    4352 command_runner.go:130] ! I0501 04:15:42.338196       1 controller.go:116] Starting legacy_token_tracking_controller
	I0501 04:16:51.425853    4352 command_runner.go:130] ! I0501 04:15:42.338360       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0501 04:16:51.425920    4352 command_runner.go:130] ! I0501 04:15:42.338519       1 controller.go:78] Starting OpenAPI AggregationController
	I0501 04:16:51.425920    4352 command_runner.go:130] ! I0501 04:15:42.339167       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0501 04:16:51.425920    4352 command_runner.go:130] ! I0501 04:15:42.339360       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0501 04:16:51.426076    4352 command_runner.go:130] ! I0501 04:15:42.339853       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0501 04:16:51.426076    4352 command_runner.go:130] ! I0501 04:15:42.361139       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0501 04:16:51.426076    4352 command_runner.go:130] ! I0501 04:15:42.361155       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0501 04:16:51.426138    4352 command_runner.go:130] ! I0501 04:15:42.361192       1 controller.go:139] Starting OpenAPI controller
	I0501 04:16:51.426138    4352 command_runner.go:130] ! I0501 04:15:42.361219       1 controller.go:87] Starting OpenAPI V3 controller
	I0501 04:16:51.426138    4352 command_runner.go:130] ! I0501 04:15:42.361233       1 naming_controller.go:291] Starting NamingConditionController
	I0501 04:16:51.426193    4352 command_runner.go:130] ! I0501 04:15:42.361253       1 establishing_controller.go:76] Starting EstablishingController
	I0501 04:16:51.426255    4352 command_runner.go:130] ! I0501 04:15:42.361274       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0501 04:16:51.426336    4352 command_runner.go:130] ! I0501 04:15:42.361288       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0501 04:16:51.426336    4352 command_runner.go:130] ! I0501 04:15:42.361301       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0501 04:16:51.426336    4352 command_runner.go:130] ! I0501 04:15:42.395816       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0501 04:16:51.426397    4352 command_runner.go:130] ! I0501 04:15:42.396242       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0501 04:16:51.426453    4352 command_runner.go:130] ! I0501 04:15:42.496145       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0501 04:16:51.426453    4352 command_runner.go:130] ! I0501 04:15:42.510644       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0501 04:16:51.426534    4352 command_runner.go:130] ! I0501 04:15:42.510702       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0501 04:16:51.426534    4352 command_runner.go:130] ! I0501 04:15:42.510859       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0501 04:16:51.426534    4352 command_runner.go:130] ! I0501 04:15:42.518082       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0501 04:16:51.426534    4352 command_runner.go:130] ! I0501 04:15:42.518718       1 aggregator.go:165] initial CRD sync complete...
	I0501 04:16:51.426534    4352 command_runner.go:130] ! I0501 04:15:42.518822       1 autoregister_controller.go:141] Starting autoregister controller
	I0501 04:16:51.426534    4352 command_runner.go:130] ! I0501 04:15:42.518833       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0501 04:16:51.426672    4352 command_runner.go:130] ! I0501 04:15:42.518839       1 cache.go:39] Caches are synced for autoregister controller
	I0501 04:16:51.426672    4352 command_runner.go:130] ! I0501 04:15:42.535654       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0501 04:16:51.426672    4352 command_runner.go:130] ! I0501 04:15:42.538744       1 shared_informer.go:320] Caches are synced for configmaps
	I0501 04:16:51.426672    4352 command_runner.go:130] ! I0501 04:15:42.553249       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0501 04:16:51.426672    4352 command_runner.go:130] ! I0501 04:15:42.558886       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0501 04:16:51.426769    4352 command_runner.go:130] ! I0501 04:15:42.560982       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0501 04:16:51.426769    4352 command_runner.go:130] ! I0501 04:15:42.561020       1 policy_source.go:224] refreshing policies
	I0501 04:16:51.426769    4352 command_runner.go:130] ! I0501 04:15:42.641630       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0501 04:16:51.426830    4352 command_runner.go:130] ! I0501 04:15:43.354880       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0501 04:16:51.426830    4352 command_runner.go:130] ! W0501 04:15:43.981051       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.28.209.199]
	I0501 04:16:51.426830    4352 command_runner.go:130] ! I0501 04:15:43.982709       1 controller.go:615] quota admission added evaluator for: endpoints
	I0501 04:16:51.426830    4352 command_runner.go:130] ! I0501 04:15:44.022518       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0501 04:16:51.426897    4352 command_runner.go:130] ! I0501 04:15:45.344677       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0501 04:16:51.426897    4352 command_runner.go:130] ! I0501 04:15:45.642753       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0501 04:16:51.426897    4352 command_runner.go:130] ! I0501 04:15:45.672938       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0501 04:16:51.426897    4352 command_runner.go:130] ! I0501 04:15:45.801984       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0501 04:16:51.426966    4352 command_runner.go:130] ! I0501 04:15:45.823813       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0501 04:16:51.438427    4352 logs.go:123] Gathering logs for coredns [8a0208aeafcf] ...
	I0501 04:16:51.438972    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a0208aeafcf"
	I0501 04:16:51.474551    4352 command_runner.go:130] > .:53
	I0501 04:16:51.474647    4352 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 93f4fe825695a41d5751760d66496ca9593f0bf8b9ec4239a6fbc33c30b6f070cfe5a066c5977052ab0ea80abbafa6083758e385108cb741ae48dc4b780ff0f9
	I0501 04:16:51.474647    4352 command_runner.go:130] > CoreDNS-1.11.1
	I0501 04:16:51.474647    4352 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0501 04:16:51.474684    4352 command_runner.go:130] > [INFO] 127.0.0.1:52159 - 35492 "HINFO IN 5750380281790413371.3552283498234348593. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.042351696s
	I0501 04:16:51.474684    4352 logs.go:123] Gathering logs for coredns [15c4496e3a9f] ...
	I0501 04:16:51.474684    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15c4496e3a9f"
	I0501 04:16:51.513087    4352 command_runner.go:130] > .:53
	I0501 04:16:51.513087    4352 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 93f4fe825695a41d5751760d66496ca9593f0bf8b9ec4239a6fbc33c30b6f070cfe5a066c5977052ab0ea80abbafa6083758e385108cb741ae48dc4b780ff0f9
	I0501 04:16:51.513087    4352 command_runner.go:130] > CoreDNS-1.11.1
	I0501 04:16:51.513087    4352 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0501 04:16:51.513087    4352 command_runner.go:130] > [INFO] 127.0.0.1:39552 - 50904 "HINFO IN 5304382971668517624.9064195615153089880. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.182051644s
	I0501 04:16:51.513847    4352 command_runner.go:130] > [INFO] 10.244.0.4:36718 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000271601s
	I0501 04:16:51.513847    4352 command_runner.go:130] > [INFO] 10.244.0.4:43708 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.179550625s
	I0501 04:16:51.513892    4352 command_runner.go:130] > [INFO] 10.244.1.2:58483 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144401s
	I0501 04:16:51.513892    4352 command_runner.go:130] > [INFO] 10.244.1.2:60628 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.0000807s
	I0501 04:16:51.513892    4352 command_runner.go:130] > [INFO] 10.244.0.4:37023 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.037009067s
	I0501 04:16:51.513892    4352 command_runner.go:130] > [INFO] 10.244.0.4:35134 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000257602s
	I0501 04:16:51.513892    4352 command_runner.go:130] > [INFO] 10.244.0.4:42831 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000330103s
	I0501 04:16:51.513892    4352 command_runner.go:130] > [INFO] 10.244.0.4:35030 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000223102s
	I0501 04:16:51.513892    4352 command_runner.go:130] > [INFO] 10.244.1.2:54088 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000207601s
	I0501 04:16:51.513892    4352 command_runner.go:130] > [INFO] 10.244.1.2:39978 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000230801s
	I0501 04:16:51.514013    4352 command_runner.go:130] > [INFO] 10.244.1.2:55944 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000162801s
	I0501 04:16:51.514013    4352 command_runner.go:130] > [INFO] 10.244.1.2:53350 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000088901s
	I0501 04:16:51.514013    4352 command_runner.go:130] > [INFO] 10.244.0.4:33705 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000251702s
	I0501 04:16:51.514013    4352 command_runner.go:130] > [INFO] 10.244.0.4:58457 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000202201s
	I0501 04:16:51.514106    4352 command_runner.go:130] > [INFO] 10.244.1.2:55547 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117201s
	I0501 04:16:51.514106    4352 command_runner.go:130] > [INFO] 10.244.1.2:52015 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000146501s
	I0501 04:16:51.514106    4352 command_runner.go:130] > [INFO] 10.244.0.4:59703 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000247901s
	I0501 04:16:51.514106    4352 command_runner.go:130] > [INFO] 10.244.0.4:43545 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000196701s
	I0501 04:16:51.514175    4352 command_runner.go:130] > [INFO] 10.244.1.2:36180 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000726s
	I0501 04:16:51.514175    4352 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0501 04:16:51.514175    4352 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0501 04:16:51.515850    4352 logs.go:123] Gathering logs for kube-scheduler [06f1f84bfde1] ...
	I0501 04:16:51.515884    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f1f84bfde1"
	I0501 04:16:51.556092    4352 command_runner.go:130] ! I0501 03:52:10.476758       1 serving.go:380] Generated self-signed cert in-memory
	I0501 04:16:51.556092    4352 command_runner.go:130] ! W0501 03:52:12.175400       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0501 04:16:51.556092    4352 command_runner.go:130] ! W0501 03:52:12.175551       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0501 04:16:51.556092    4352 command_runner.go:130] ! W0501 03:52:12.175587       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0501 04:16:51.556092    4352 command_runner.go:130] ! W0501 03:52:12.175678       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0501 04:16:51.556092    4352 command_runner.go:130] ! I0501 03:52:12.246151       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0501 04:16:51.556092    4352 command_runner.go:130] ! I0501 03:52:12.246312       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:16:51.556092    4352 command_runner.go:130] ! I0501 03:52:12.251800       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0501 04:16:51.556092    4352 command_runner.go:130] ! I0501 03:52:12.252170       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0501 04:16:51.556092    4352 command_runner.go:130] ! I0501 03:52:12.253709       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0501 04:16:51.556092    4352 command_runner.go:130] ! I0501 03:52:12.254160       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0501 04:16:51.556092    4352 command_runner.go:130] ! W0501 03:52:12.257352       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0501 04:16:51.556092    4352 command_runner.go:130] ! E0501 03:52:12.257411       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0501 04:16:51.556092    4352 command_runner.go:130] ! W0501 03:52:12.261549       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0501 04:16:51.556092    4352 command_runner.go:130] ! E0501 03:52:12.261670       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0501 04:16:51.556092    4352 command_runner.go:130] ! W0501 03:52:12.263856       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0501 04:16:51.556092    4352 command_runner.go:130] ! E0501 03:52:12.263906       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0501 04:16:51.556617    4352 command_runner.go:130] ! W0501 03:52:12.270038       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:51.556706    4352 command_runner.go:130] ! E0501 03:52:12.270597       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:51.556706    4352 command_runner.go:130] ! W0501 03:52:12.271080       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:51.556706    4352 command_runner.go:130] ! E0501 03:52:12.271309       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:51.556706    4352 command_runner.go:130] ! W0501 03:52:12.271808       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0501 04:16:51.556706    4352 command_runner.go:130] ! E0501 03:52:12.272098       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0501 04:16:51.556706    4352 command_runner.go:130] ! W0501 03:52:12.272396       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0501 04:16:51.556706    4352 command_runner.go:130] ! W0501 03:52:12.273177       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0501 04:16:51.556706    4352 command_runner.go:130] ! E0501 03:52:12.273396       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0501 04:16:51.556706    4352 command_runner.go:130] ! W0501 03:52:12.273765       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0501 04:16:51.556706    4352 command_runner.go:130] ! E0501 03:52:12.273964       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0501 04:16:51.556706    4352 command_runner.go:130] ! W0501 03:52:12.274273       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0501 04:16:51.556706    4352 command_runner.go:130] ! E0501 03:52:12.274741       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0501 04:16:51.556706    4352 command_runner.go:130] ! E0501 03:52:12.275083       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0501 04:16:51.556706    4352 command_runner.go:130] ! W0501 03:52:12.275448       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:51.556706    4352 command_runner.go:130] ! E0501 03:52:12.275752       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:51.557369    4352 command_runner.go:130] ! W0501 03:52:12.276841       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0501 04:16:51.557520    4352 command_runner.go:130] ! E0501 03:52:12.278071       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0501 04:16:51.557520    4352 command_runner.go:130] ! W0501 03:52:12.277438       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0501 04:16:51.557520    4352 command_runner.go:130] ! E0501 03:52:12.278555       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0501 04:16:51.557520    4352 command_runner.go:130] ! W0501 03:52:12.279824       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:51.557520    4352 command_runner.go:130] ! E0501 03:52:12.280326       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:51.557520    4352 command_runner.go:130] ! W0501 03:52:12.280272       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0501 04:16:51.557520    4352 command_runner.go:130] ! E0501 03:52:12.280893       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0501 04:16:51.557520    4352 command_runner.go:130] ! W0501 03:52:13.100723       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:51.557520    4352 command_runner.go:130] ! E0501 03:52:13.101238       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:51.557520    4352 command_runner.go:130] ! W0501 03:52:13.102451       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0501 04:16:51.557520    4352 command_runner.go:130] ! E0501 03:52:13.102804       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0501 04:16:51.557520    4352 command_runner.go:130] ! W0501 03:52:13.188414       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0501 04:16:51.557520    4352 command_runner.go:130] ! E0501 03:52:13.188662       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0501 04:16:51.557520    4352 command_runner.go:130] ! W0501 03:52:13.194299       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0501 04:16:51.557520    4352 command_runner.go:130] ! E0501 03:52:13.194526       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0501 04:16:51.557520    4352 command_runner.go:130] ! W0501 03:52:13.234721       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0501 04:16:51.558042    4352 command_runner.go:130] ! E0501 03:52:13.235310       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0501 04:16:51.558128    4352 command_runner.go:130] ! W0501 03:52:13.292208       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0501 04:16:51.558128    4352 command_runner.go:130] ! E0501 03:52:13.292830       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0501 04:16:51.558128    4352 command_runner.go:130] ! W0501 03:52:13.389881       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0501 04:16:51.558128    4352 command_runner.go:130] ! E0501 03:52:13.390057       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0501 04:16:51.558128    4352 command_runner.go:130] ! W0501 03:52:13.433548       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0501 04:16:51.558128    4352 command_runner.go:130] ! E0501 03:52:13.433622       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0501 04:16:51.558128    4352 command_runner.go:130] ! W0501 03:52:13.511617       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:51.558128    4352 command_runner.go:130] ! E0501 03:52:13.511761       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:51.558128    4352 command_runner.go:130] ! W0501 03:52:13.522760       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:51.558128    4352 command_runner.go:130] ! E0501 03:52:13.522812       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:51.558128    4352 command_runner.go:130] ! W0501 03:52:13.723200       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0501 04:16:51.558839    4352 command_runner.go:130] ! E0501 03:52:13.723365       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0501 04:16:51.558839    4352 command_runner.go:130] ! W0501 03:52:13.767195       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0501 04:16:51.558839    4352 command_runner.go:130] ! E0501 03:52:13.767262       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0501 04:16:51.558839    4352 command_runner.go:130] ! W0501 03:52:13.799936       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:51.558839    4352 command_runner.go:130] ! E0501 03:52:13.799967       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:51.558839    4352 command_runner.go:130] ! W0501 03:52:13.840187       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0501 04:16:51.558839    4352 command_runner.go:130] ! E0501 03:52:13.840304       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0501 04:16:51.558839    4352 command_runner.go:130] ! W0501 03:52:13.853401       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0501 04:16:51.558839    4352 command_runner.go:130] ! E0501 03:52:13.853454       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0501 04:16:51.558839    4352 command_runner.go:130] ! I0501 03:52:16.553388       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0501 04:16:51.558839    4352 command_runner.go:130] ! E0501 04:13:09.401188       1 run.go:74] "command failed" err="finished without leader elect"
	I0501 04:16:51.572999    4352 logs.go:123] Gathering logs for kube-proxy [502684407b0c] ...
	I0501 04:16:51.572999    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 502684407b0c"
	I0501 04:16:51.604012    4352 command_runner.go:130] ! I0501 03:52:31.254714       1 server_linux.go:69] "Using iptables proxy"
	I0501 04:16:51.604051    4352 command_runner.go:130] ! I0501 03:52:31.309383       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.28.209.152"]
	I0501 04:16:51.604051    4352 command_runner.go:130] ! I0501 03:52:31.368810       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0501 04:16:51.604051    4352 command_runner.go:130] ! I0501 03:52:31.368955       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0501 04:16:51.604051    4352 command_runner.go:130] ! I0501 03:52:31.368982       1 server_linux.go:165] "Using iptables Proxier"
	I0501 04:16:51.604051    4352 command_runner.go:130] ! I0501 03:52:31.375383       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0501 04:16:51.604051    4352 command_runner.go:130] ! I0501 03:52:31.376367       1 server.go:872] "Version info" version="v1.30.0"
	I0501 04:16:51.604051    4352 command_runner.go:130] ! I0501 03:52:31.376406       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:16:51.604051    4352 command_runner.go:130] ! I0501 03:52:31.379637       1 config.go:192] "Starting service config controller"
	I0501 04:16:51.604051    4352 command_runner.go:130] ! I0501 03:52:31.380342       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0501 04:16:51.604051    4352 command_runner.go:130] ! I0501 03:52:31.380587       1 config.go:101] "Starting endpoint slice config controller"
	I0501 04:16:51.604051    4352 command_runner.go:130] ! I0501 03:52:31.380650       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0501 04:16:51.604051    4352 command_runner.go:130] ! I0501 03:52:31.383140       1 config.go:319] "Starting node config controller"
	I0501 04:16:51.604051    4352 command_runner.go:130] ! I0501 03:52:31.383173       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0501 04:16:51.604051    4352 command_runner.go:130] ! I0501 03:52:31.480698       1 shared_informer.go:320] Caches are synced for service config
	I0501 04:16:51.604051    4352 command_runner.go:130] ! I0501 03:52:31.481316       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0501 04:16:51.604051    4352 command_runner.go:130] ! I0501 03:52:31.483428       1 shared_informer.go:320] Caches are synced for node config
	I0501 04:16:51.605073    4352 logs.go:123] Gathering logs for kube-controller-manager [66a1b89e6733] ...
	I0501 04:16:51.605073    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a1b89e6733"
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:39.740014       1 serving.go:380] Generated self-signed cert in-memory
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:40.254324       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:40.254368       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:40.263842       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:40.264273       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:40.265102       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:40.265435       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:44.420436       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:44.421597       1 controllermanager.go:759] "Started controller" controller="serviceaccount-token-controller"
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:44.430683       1 controllermanager.go:759] "Started controller" controller="persistentvolume-protection-controller"
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:44.430949       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:44.431056       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:44.437281       1 controllermanager.go:759] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:44.440408       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:44.437711       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:44.440933       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:44.450877       1 controllermanager.go:759] "Started controller" controller="replicationcontroller-controller"
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:44.452935       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:44.452958       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:44.458231       1 controllermanager.go:759] "Started controller" controller="serviceaccount-controller"
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:44.458525       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:44.458548       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:44.467611       1 controllermanager.go:759] "Started controller" controller="token-cleaner-controller"
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:44.468036       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:44.468093       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:44.468107       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:44.484825       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:44.484856       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:44.484892       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:44.485128       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:44.485186       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0501 04:16:51.642424    4352 command_runner.go:130] ! I0501 04:15:44.485221       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0501 04:16:51.643407    4352 command_runner.go:130] ! I0501 04:15:44.485229       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0501 04:16:51.643407    4352 command_runner.go:130] ! I0501 04:15:44.485246       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0501 04:16:51.643407    4352 command_runner.go:130] ! I0501 04:15:44.485322       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0501 04:16:51.643407    4352 command_runner.go:130] ! I0501 04:15:44.488601       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0501 04:16:51.643520    4352 command_runner.go:130] ! I0501 04:15:44.488943       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0501 04:16:51.643520    4352 command_runner.go:130] ! I0501 04:15:44.488958       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0501 04:16:51.643520    4352 command_runner.go:130] ! I0501 04:15:44.488985       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0501 04:16:51.643520    4352 command_runner.go:130] ! I0501 04:15:44.523143       1 shared_informer.go:320] Caches are synced for tokens
	I0501 04:16:51.643606    4352 command_runner.go:130] ! I0501 04:15:44.644894       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0501 04:16:51.643606    4352 command_runner.go:130] ! I0501 04:15:44.645016       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0501 04:16:51.643645    4352 command_runner.go:130] ! I0501 04:15:44.645088       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0501 04:16:51.643682    4352 command_runner.go:130] ! I0501 04:15:44.645112       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0501 04:16:51.643708    4352 command_runner.go:130] ! I0501 04:15:44.646888       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0501 04:16:51.643708    4352 command_runner.go:130] ! W0501 04:15:44.646984       1 shared_informer.go:597] resyncPeriod 15h44m19.234758052s is smaller than resyncCheckPeriod 17h55m23.133739358s and the informer has already started. Changing it to 17h55m23.133739358s
	I0501 04:16:51.643708    4352 command_runner.go:130] ! W0501 04:15:44.647035       1 shared_informer.go:597] resyncPeriod 17h52m42.538614251s is smaller than resyncCheckPeriod 17h55m23.133739358s and the informer has already started. Changing it to 17h55m23.133739358s
	I0501 04:16:51.643832    4352 command_runner.go:130] ! I0501 04:15:44.647224       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0501 04:16:51.643892    4352 command_runner.go:130] ! I0501 04:15:44.647325       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0501 04:16:51.643940    4352 command_runner.go:130] ! I0501 04:15:44.647389       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0501 04:16:51.643940    4352 command_runner.go:130] ! I0501 04:15:44.647418       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0501 04:16:51.643996    4352 command_runner.go:130] ! I0501 04:15:44.647559       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0501 04:16:51.643996    4352 command_runner.go:130] ! I0501 04:15:44.647580       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0501 04:16:51.644037    4352 command_runner.go:130] ! I0501 04:15:44.648269       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0501 04:16:51.644083    4352 command_runner.go:130] ! I0501 04:15:44.648364       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0501 04:16:51.644123    4352 command_runner.go:130] ! I0501 04:15:44.648387       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0501 04:16:51.644176    4352 command_runner.go:130] ! I0501 04:15:44.648418       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0501 04:16:51.644176    4352 command_runner.go:130] ! I0501 04:15:44.648519       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0501 04:16:51.644176    4352 command_runner.go:130] ! I0501 04:15:44.648561       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0501 04:16:51.644176    4352 command_runner.go:130] ! I0501 04:15:44.648582       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0501 04:16:51.644313    4352 command_runner.go:130] ! I0501 04:15:44.648601       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0501 04:16:51.644313    4352 command_runner.go:130] ! I0501 04:15:44.648633       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0501 04:16:51.644366    4352 command_runner.go:130] ! I0501 04:15:44.648662       1 controllermanager.go:759] "Started controller" controller="resourcequota-controller"
	I0501 04:16:51.644575    4352 command_runner.go:130] ! I0501 04:15:44.649971       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0501 04:16:51.644575    4352 command_runner.go:130] ! I0501 04:15:44.649999       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0501 04:16:51.644575    4352 command_runner.go:130] ! I0501 04:15:44.650094       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0501 04:16:51.644575    4352 command_runner.go:130] ! I0501 04:15:44.658545       1 controllermanager.go:759] "Started controller" controller="daemonset-controller"
	I0501 04:16:51.644575    4352 command_runner.go:130] ! I0501 04:15:44.664070       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0501 04:16:51.644575    4352 command_runner.go:130] ! I0501 04:15:44.664109       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0501 04:16:51.644575    4352 command_runner.go:130] ! I0501 04:15:44.672333       1 controllermanager.go:759] "Started controller" controller="replicaset-controller"
	I0501 04:16:51.644575    4352 command_runner.go:130] ! I0501 04:15:44.672648       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0501 04:16:51.644575    4352 command_runner.go:130] ! I0501 04:15:44.673224       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0501 04:16:51.644575    4352 command_runner.go:130] ! E0501 04:15:44.680086       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0501 04:16:51.644575    4352 command_runner.go:130] ! I0501 04:15:44.680207       1 controllermanager.go:737] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0501 04:16:51.644575    4352 command_runner.go:130] ! I0501 04:15:44.686271       1 controllermanager.go:759] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0501 04:16:51.644575    4352 command_runner.go:130] ! I0501 04:15:44.687804       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0501 04:16:51.644575    4352 command_runner.go:130] ! I0501 04:15:44.688087       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0501 04:16:51.644575    4352 command_runner.go:130] ! I0501 04:15:44.691064       1 controllermanager.go:759] "Started controller" controller="ephemeral-volume-controller"
	I0501 04:16:51.645583    4352 command_runner.go:130] ! I0501 04:15:44.694139       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.694154       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.697309       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.697808       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.698725       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.709020       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.709557       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.718572       1 controllermanager.go:759] "Started controller" controller="bootstrap-signer-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.718866       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.731386       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.731502       1 controllermanager.go:759] "Started controller" controller="node-lifecycle-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.731520       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.731794       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.732008       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.732024       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.732060       1 controllermanager.go:737] "Warning: skipping controller" controller="node-route-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.739601       1 controllermanager.go:759] "Started controller" controller="clusterrole-aggregation-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.741937       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.742091       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.751335       1 controllermanager.go:759] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.758177       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.767021       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.776399       1 controllermanager.go:759] "Started controller" controller="ttl-after-finished-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.777830       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.780031       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.783346       1 controllermanager.go:759] "Started controller" controller="endpointslice-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.784386       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.784668       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.790586       1 controllermanager.go:759] "Started controller" controller="job-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.791028       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.791148       1 shared_informer.go:313] Waiting for caches to sync for job
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.795072       1 controllermanager.go:759] "Started controller" controller="deployment-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.795486       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.796321       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.806964       1 controllermanager.go:759] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.807399       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.808302       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.810677       1 controllermanager.go:759] "Started controller" controller="cronjob-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.811276       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.812128       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.814338       1 controllermanager.go:759] "Started controller" controller="persistentvolume-expander-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.814699       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.815465       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.818437       1 controllermanager.go:759] "Started controller" controller="taint-eviction-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.819004       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.818976       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.820305       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.820518       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.822359       1 controllermanager.go:759] "Started controller" controller="pod-garbage-collector-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.824878       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.825167       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.835687       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.835705       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.835739       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.836623       1 controllermanager.go:759] "Started controller" controller="garbage-collector-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! E0501 04:15:44.845522       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.845590       1 controllermanager.go:737] "Warning: skipping controller" controller="service-lb-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.975590       1 controllermanager.go:759] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:44.975737       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:45.026863       1 controllermanager.go:759] "Started controller" controller="endpointslice-mirroring-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:45.026966       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:45.026980       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:45.188029       1 controllermanager.go:759] "Started controller" controller="namespace-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:45.191154       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:45.191606       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:45.234916       1 controllermanager.go:759] "Started controller" controller="ttl-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:45.235592       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:45.235855       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:45.275946       1 controllermanager.go:759] "Started controller" controller="persistentvolume-binder-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:45.276219       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:45.277151       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:45.277668       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.347039       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.347226       1 controllermanager.go:759] "Started controller" controller="node-ipam-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.347657       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.347697       1 shared_informer.go:313] Waiting for caches to sync for node
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.351170       1 controllermanager.go:759] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.351453       1 controllermanager.go:737] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.351701       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.352658       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.355868       1 controllermanager.go:759] "Started controller" controller="endpoints-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.356195       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.356581       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.373530       1 controllermanager.go:759] "Started controller" controller="disruption-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.375966       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.376087       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.376099       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.381581       1 controllermanager.go:759] "Started controller" controller="statefulset-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.387752       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.398512       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.398855       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.433745       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.433841       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.434861       1 shared_informer.go:320] Caches are synced for PV protection
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.437855       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-289800\" does not exist"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.438225       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-289800-m02\" does not exist"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.438314       1 shared_informer.go:320] Caches are synced for TTL
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.438445       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-289800-m03\" does not exist"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.438531       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.441880       1 shared_informer.go:320] Caches are synced for crt configmap
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.442281       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.448289       1 shared_informer.go:320] Caches are synced for node
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.448378       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.448532       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.448564       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.448615       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.452662       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.453060       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0501 04:16:51.645637    4352 command_runner.go:130] ! I0501 04:15:55.453136       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0501 04:16:51.647557    4352 command_runner.go:130] ! I0501 04:15:55.459094       1 shared_informer.go:320] Caches are synced for service account
	I0501 04:16:51.647610    4352 command_runner.go:130] ! I0501 04:15:55.465378       1 shared_informer.go:320] Caches are synced for daemon sets
	I0501 04:16:51.647610    4352 command_runner.go:130] ! I0501 04:15:55.468998       1 shared_informer.go:320] Caches are synced for PVC protection
	I0501 04:16:51.647610    4352 command_runner.go:130] ! I0501 04:15:55.476103       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0501 04:16:51.647667    4352 command_runner.go:130] ! I0501 04:15:55.479405       1 shared_informer.go:320] Caches are synced for persistent volume
	I0501 04:16:51.647667    4352 command_runner.go:130] ! I0501 04:15:55.480400       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0501 04:16:51.647667    4352 command_runner.go:130] ! I0501 04:15:55.485347       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0501 04:16:51.647667    4352 command_runner.go:130] ! I0501 04:15:55.485423       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0501 04:16:51.647762    4352 command_runner.go:130] ! I0501 04:15:55.485459       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0501 04:16:51.647762    4352 command_runner.go:130] ! I0501 04:15:55.488987       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0501 04:16:51.647797    4352 command_runner.go:130] ! I0501 04:15:55.489270       1 shared_informer.go:320] Caches are synced for attach detach
	I0501 04:16:51.647797    4352 command_runner.go:130] ! I0501 04:15:55.492066       1 shared_informer.go:320] Caches are synced for namespace
	I0501 04:16:51.647797    4352 command_runner.go:130] ! I0501 04:15:55.492447       1 shared_informer.go:320] Caches are synced for job
	I0501 04:16:51.647832    4352 command_runner.go:130] ! I0501 04:15:55.494972       1 shared_informer.go:320] Caches are synced for ephemeral
	I0501 04:16:51.647969    4352 command_runner.go:130] ! I0501 04:15:55.497059       1 shared_informer.go:320] Caches are synced for deployment
	I0501 04:16:51.647969    4352 command_runner.go:130] ! I0501 04:15:55.499153       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0501 04:16:51.647969    4352 command_runner.go:130] ! I0501 04:15:55.499594       1 shared_informer.go:320] Caches are synced for stateful set
	I0501 04:16:51.647969    4352 command_runner.go:130] ! I0501 04:15:55.509506       1 shared_informer.go:320] Caches are synced for HPA
	I0501 04:16:51.647969    4352 command_runner.go:130] ! I0501 04:15:55.513444       1 shared_informer.go:320] Caches are synced for cronjob
	I0501 04:16:51.647969    4352 command_runner.go:130] ! I0501 04:15:55.517356       1 shared_informer.go:320] Caches are synced for expand
	I0501 04:16:51.647969    4352 command_runner.go:130] ! I0501 04:15:55.519269       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0501 04:16:51.647969    4352 command_runner.go:130] ! I0501 04:15:55.521379       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0501 04:16:51.647969    4352 command_runner.go:130] ! I0501 04:15:55.527109       1 shared_informer.go:320] Caches are synced for GC
	I0501 04:16:51.647969    4352 command_runner.go:130] ! I0501 04:15:55.533712       1 shared_informer.go:320] Caches are synced for taint
	I0501 04:16:51.647969    4352 command_runner.go:130] ! I0501 04:15:55.534052       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0501 04:16:51.647969    4352 command_runner.go:130] ! I0501 04:15:55.562220       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-289800"
	I0501 04:16:51.647969    4352 command_runner.go:130] ! I0501 04:15:55.562294       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-289800-m02"
	I0501 04:16:51.647969    4352 command_runner.go:130] ! I0501 04:15:55.562374       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-289800-m03"
	I0501 04:16:51.647969    4352 command_runner.go:130] ! I0501 04:15:55.562434       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0501 04:16:51.647969    4352 command_runner.go:130] ! I0501 04:15:55.574228       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0501 04:16:51.647969    4352 command_runner.go:130] ! I0501 04:15:55.576283       1 shared_informer.go:320] Caches are synced for disruption
	I0501 04:16:51.647969    4352 command_runner.go:130] ! I0501 04:15:55.610948       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.488314ms"
	I0501 04:16:51.647969    4352 command_runner.go:130] ! I0501 04:15:55.611568       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="71.799µs"
	I0501 04:16:51.647969    4352 command_runner.go:130] ! I0501 04:15:55.619708       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="45.171745ms"
	I0501 04:16:51.647969    4352 command_runner.go:130] ! I0501 04:15:55.620238       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="472.596µs"
	I0501 04:16:51.647969    4352 command_runner.go:130] ! I0501 04:15:55.628824       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0501 04:16:51.647969    4352 command_runner.go:130] ! I0501 04:15:55.650837       1 shared_informer.go:320] Caches are synced for resource quota
	I0501 04:16:51.647969    4352 command_runner.go:130] ! I0501 04:15:55.657374       1 shared_informer.go:320] Caches are synced for endpoint
	I0501 04:16:51.647969    4352 command_runner.go:130] ! I0501 04:15:55.685503       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0501 04:16:51.647969    4352 command_runner.go:130] ! I0501 04:15:55.700006       1 shared_informer.go:320] Caches are synced for resource quota
	I0501 04:16:51.647969    4352 command_runner.go:130] ! I0501 04:15:56.136638       1 shared_informer.go:320] Caches are synced for garbage collector
	I0501 04:16:51.648551    4352 command_runner.go:130] ! I0501 04:15:56.136685       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0501 04:16:51.648601    4352 command_runner.go:130] ! I0501 04:15:56.152886       1 shared_informer.go:320] Caches are synced for garbage collector
	I0501 04:16:51.648601    4352 command_runner.go:130] ! I0501 04:16:16.638494       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:51.648601    4352 command_runner.go:130] ! I0501 04:16:35.670965       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.004646ms"
	I0501 04:16:51.648601    4352 command_runner.go:130] ! I0501 04:16:35.674472       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.702µs"
	I0501 04:16:51.648700    4352 command_runner.go:130] ! I0501 04:16:49.079199       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="127.703µs"
	I0501 04:16:51.648746    4352 command_runner.go:130] ! I0501 04:16:49.148697       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="24.735082ms"
	I0501 04:16:51.648746    4352 command_runner.go:130] ! I0501 04:16:49.149307       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="110.503µs"
	I0501 04:16:51.648746    4352 command_runner.go:130] ! I0501 04:16:49.187683       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.244247ms"
	I0501 04:16:51.648746    4352 command_runner.go:130] ! I0501 04:16:49.188221       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.9µs"
	I0501 04:16:51.648877    4352 command_runner.go:130] ! I0501 04:16:49.221273       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="25.255693ms"
	I0501 04:16:51.648924    4352 command_runner.go:130] ! I0501 04:16:49.221694       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="88.902µs"
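The controller-manager block above is dominated by paired "Waiting for caches to sync" / "Caches are synced" lines from shared_informer.go. That is the standard client-go startup pattern: every controller starts its shared informers and then blocks until the local caches reflect the API server before it processes work. The following is a minimal sketch of that pattern (not minikube or kube-controller-manager source; the kubeconfig path is illustrative only):

	// Sketch of the client-go cache-sync pattern behind the
	// "Waiting for caches to sync ..." / "Caches are synced ..." log lines.
	package main

	import (
		"context"
		"fmt"
		"time"

		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/cache"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumes a reachable kubeconfig; the path here is hypothetical.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		factory := informers.NewSharedInformerFactory(client, 10*time.Minute)
		podInformer := factory.Core().V1().Pods().Informer()

		ctx, cancel := context.WithCancel(context.Background())
		defer cancel()

		// Start the informers; a controller logs its "Waiting ..." line
		// around the WaitForCacheSync call below.
		factory.Start(ctx.Done())
		if !cache.WaitForCacheSync(ctx.Done(), podInformer.HasSynced) {
			panic("caches did not sync") // a controller would abort here
		}
		// Equivalent point to "Caches are synced": the local store is usable.
		fmt.Println("pods cached:", len(podInformer.GetStore().List()))
	}

The resyncPeriod warnings earlier in the block (shared_informer.go:597) come from the same machinery: a later consumer asked for a shorter resync than the factory's already-running check period, so the informer clamps it upward.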
	I0501 04:16:51.666522    4352 logs.go:123] Gathering logs for kindnet [b7cae3f6b88b] ...
	I0501 04:16:51.667538    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7cae3f6b88b"
	I0501 04:16:51.701538    4352 command_runner.go:130] ! I0501 04:15:45.341459       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0501 04:16:51.701634    4352 command_runner.go:130] ! I0501 04:15:45.342196       1 main.go:107] hostIP = 172.28.209.199
	I0501 04:16:51.701634    4352 command_runner.go:130] ! podIP = 172.28.209.199
	I0501 04:16:51.701634    4352 command_runner.go:130] ! I0501 04:15:45.343348       1 main.go:116] setting mtu 1500 for CNI 
	I0501 04:16:51.701634    4352 command_runner.go:130] ! I0501 04:15:45.343391       1 main.go:146] kindnetd IP family: "ipv4"
	I0501 04:16:51.701634    4352 command_runner.go:130] ! I0501 04:15:45.343412       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0501 04:16:51.701634    4352 command_runner.go:130] ! I0501 04:16:15.765193       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0501 04:16:51.701634    4352 command_runner.go:130] ! I0501 04:16:15.817499       1 main.go:223] Handling node with IPs: map[172.28.209.199:{}]
	I0501 04:16:51.701634    4352 command_runner.go:130] ! I0501 04:16:15.817549       1 main.go:227] handling current node
	I0501 04:16:51.701634    4352 command_runner.go:130] ! I0501 04:16:15.818026       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.701634    4352 command_runner.go:130] ! I0501 04:16:15.818042       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.701634    4352 command_runner.go:130] ! I0501 04:16:15.818289       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.28.219.162 Flags: [] Table: 0} 
	I0501 04:16:51.701634    4352 command_runner.go:130] ! I0501 04:16:15.818416       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:51.701634    4352 command_runner.go:130] ! I0501 04:16:15.818477       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:51.701634    4352 command_runner.go:130] ! I0501 04:16:15.818548       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.28.223.145 Flags: [] Table: 0} 
	I0501 04:16:51.701634    4352 command_runner.go:130] ! I0501 04:16:25.834949       1 main.go:223] Handling node with IPs: map[172.28.209.199:{}]
	I0501 04:16:51.701634    4352 command_runner.go:130] ! I0501 04:16:25.834995       1 main.go:227] handling current node
	I0501 04:16:51.701634    4352 command_runner.go:130] ! I0501 04:16:25.835008       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.701634    4352 command_runner.go:130] ! I0501 04:16:25.835016       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.701634    4352 command_runner.go:130] ! I0501 04:16:25.835192       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:51.701634    4352 command_runner.go:130] ! I0501 04:16:25.835220       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:51.701634    4352 command_runner.go:130] ! I0501 04:16:35.845752       1 main.go:223] Handling node with IPs: map[172.28.209.199:{}]
	I0501 04:16:51.701634    4352 command_runner.go:130] ! I0501 04:16:35.845835       1 main.go:227] handling current node
	I0501 04:16:51.701634    4352 command_runner.go:130] ! I0501 04:16:35.845848       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.701634    4352 command_runner.go:130] ! I0501 04:16:35.845856       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.701634    4352 command_runner.go:130] ! I0501 04:16:35.846322       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:51.701634    4352 command_runner.go:130] ! I0501 04:16:35.846423       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:51.701634    4352 command_runner.go:130] ! I0501 04:16:45.855212       1 main.go:223] Handling node with IPs: map[172.28.209.199:{}]
	I0501 04:16:51.702179    4352 command_runner.go:130] ! I0501 04:16:45.855323       1 main.go:227] handling current node
	I0501 04:16:51.702179    4352 command_runner.go:130] ! I0501 04:16:45.855339       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.702179    4352 command_runner.go:130] ! I0501 04:16:45.855347       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.702179    4352 command_runner.go:130] ! I0501 04:16:45.856266       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:51.702257    4352 command_runner.go:130] ! I0501 04:16:45.856305       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
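The kindnet lines above ("Node ... has CIDR", "Adding route {Ifindex: 0 Dst: ... Gw: ...}") show the CNI daemon installing one host route per remote node, sending that node's pod CIDR via the node IP; the printed struct is a netlink route dump. A minimal sketch of that step, assuming the github.com/vishvananda/netlink package and values copied from the log (this is not kindnet's actual source):

	// Sketch: program "Dst: <podCIDR> Gw: <nodeIP>", matching the route
	// printed by kindnet (e.g. Dst: 10.244.1.0/24 Gw: 172.28.219.162).
	package main

	import (
		"log"
		"net"

		"github.com/vishvananda/netlink"
	)

	func addPodCIDRRoute(podCIDR, nodeIP string) error {
		_, dst, err := net.ParseCIDR(podCIDR)
		if err != nil {
			return err
		}
		route := &netlink.Route{Dst: dst, Gw: net.ParseIP(nodeIP)}
		// RouteReplace is idempotent, so re-running the sync loop
		// every ~10s (as the timestamps above show) is harmless.
		return netlink.RouteReplace(route)
	}

	func main() {
		// Values taken from the log above; needs CAP_NET_ADMIN to run.
		if err := addPodCIDRRoute("10.244.1.0/24", "172.28.219.162"); err != nil {
			log.Fatal(err)
		}
	}

This also explains the 10-second cadence of the "Handling node with IPs" lines in the older kindnet log that follows: each pass re-lists the nodes and re-asserts the same routes.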
	I0501 04:16:51.705299    4352 logs.go:123] Gathering logs for kindnet [6d5f881ef398] ...
	I0501 04:16:51.705379    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d5f881ef398"
	I0501 04:16:51.753678    4352 command_runner.go:130] ! I0501 04:01:59.122485       1 main.go:227] handling current node
	I0501 04:16:51.753770    4352 command_runner.go:130] ! I0501 04:01:59.122501       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.753770    4352 command_runner.go:130] ! I0501 04:01:59.122510       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.753770    4352 command_runner.go:130] ! I0501 04:01:59.122690       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.753823    4352 command_runner.go:130] ! I0501 04:01:59.122722       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.753823    4352 command_runner.go:130] ! I0501 04:02:09.153658       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.753860    4352 command_runner.go:130] ! I0501 04:02:09.153775       1 main.go:227] handling current node
	I0501 04:16:51.753860    4352 command_runner.go:130] ! I0501 04:02:09.153793       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.753860    4352 command_runner.go:130] ! I0501 04:02:09.153803       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.753860    4352 command_runner.go:130] ! I0501 04:02:09.153946       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.753860    4352 command_runner.go:130] ! I0501 04:02:09.153980       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.753860    4352 command_runner.go:130] ! I0501 04:02:19.161031       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.753955    4352 command_runner.go:130] ! I0501 04:02:19.161061       1 main.go:227] handling current node
	I0501 04:16:51.753955    4352 command_runner.go:130] ! I0501 04:02:19.161073       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.753955    4352 command_runner.go:130] ! I0501 04:02:19.161079       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.754016    4352 command_runner.go:130] ! I0501 04:02:19.161177       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.754016    4352 command_runner.go:130] ! I0501 04:02:19.161185       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.754050    4352 command_runner.go:130] ! I0501 04:02:29.181653       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.754050    4352 command_runner.go:130] ! I0501 04:02:29.181721       1 main.go:227] handling current node
	I0501 04:16:51.754050    4352 command_runner.go:130] ! I0501 04:02:29.181735       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.754050    4352 command_runner.go:130] ! I0501 04:02:29.181742       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.754050    4352 command_runner.go:130] ! I0501 04:02:29.182277       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.754050    4352 command_runner.go:130] ! I0501 04:02:29.182369       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.754050    4352 command_runner.go:130] ! I0501 04:02:39.195902       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.754050    4352 command_runner.go:130] ! I0501 04:02:39.196079       1 main.go:227] handling current node
	I0501 04:16:51.754050    4352 command_runner.go:130] ! I0501 04:02:39.196095       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.754050    4352 command_runner.go:130] ! I0501 04:02:39.196105       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.754050    4352 command_runner.go:130] ! I0501 04:02:39.196558       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.754222    4352 command_runner.go:130] ! I0501 04:02:39.196649       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.754222    4352 command_runner.go:130] ! I0501 04:02:49.209858       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.754222    4352 command_runner.go:130] ! I0501 04:02:49.209973       1 main.go:227] handling current node
	I0501 04:16:51.754265    4352 command_runner.go:130] ! I0501 04:02:49.210027       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.754265    4352 command_runner.go:130] ! I0501 04:02:49.210041       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.754265    4352 command_runner.go:130] ! I0501 04:02:49.210461       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.754265    4352 command_runner.go:130] ! I0501 04:02:49.210617       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.754265    4352 command_runner.go:130] ! I0501 04:02:59.219550       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.754331    4352 command_runner.go:130] ! I0501 04:02:59.219615       1 main.go:227] handling current node
	I0501 04:16:51.754331    4352 command_runner.go:130] ! I0501 04:02:59.219631       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.754331    4352 command_runner.go:130] ! I0501 04:02:59.219638       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.754331    4352 command_runner.go:130] ! I0501 04:02:59.220333       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.754390    4352 command_runner.go:130] ! I0501 04:02:59.220436       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:09.231302       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:09.232437       1 main.go:227] handling current node
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:09.232648       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:09.232851       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:09.233578       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:09.233631       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:19.245975       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:19.246060       1 main.go:227] handling current node
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:19.246073       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:19.246081       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:19.246386       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:19.246423       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:29.258941       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:29.259020       1 main.go:227] handling current node
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:29.259036       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:29.259044       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:29.259485       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:29.259520       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:39.269941       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:39.270129       1 main.go:227] handling current node
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:39.270152       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:39.270161       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:39.270403       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:39.270438       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:49.282880       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:49.283025       1 main.go:227] handling current node
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:49.283045       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:49.283054       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:49.283773       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:49.283792       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:59.297110       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:59.297155       1 main.go:227] handling current node
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:59.297169       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:59.297177       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:59.297656       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:03:59.297688       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:04:09.310638       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:04:09.311476       1 main.go:227] handling current node
	I0501 04:16:51.754409    4352 command_runner.go:130] ! I0501 04:04:09.311969       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.754939    4352 command_runner.go:130] ! I0501 04:04:09.312340       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.754939    4352 command_runner.go:130] ! I0501 04:04:09.313291       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.754939    4352 command_runner.go:130] ! I0501 04:04:09.313332       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.754939    4352 command_runner.go:130] ! I0501 04:04:19.324939       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.755012    4352 command_runner.go:130] ! I0501 04:04:19.325084       1 main.go:227] handling current node
	I0501 04:16:51.755012    4352 command_runner.go:130] ! I0501 04:04:19.325480       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.755012    4352 command_runner.go:130] ! I0501 04:04:19.325493       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.755058    4352 command_runner.go:130] ! I0501 04:04:19.325923       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.755058    4352 command_runner.go:130] ! I0501 04:04:19.326083       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.755099    4352 command_runner.go:130] ! I0501 04:04:29.332468       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.755099    4352 command_runner.go:130] ! I0501 04:04:29.332576       1 main.go:227] handling current node
	I0501 04:16:51.755134    4352 command_runner.go:130] ! I0501 04:04:29.332619       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.755134    4352 command_runner.go:130] ! I0501 04:04:29.332645       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.755134    4352 command_runner.go:130] ! I0501 04:04:29.332818       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.755134    4352 command_runner.go:130] ! I0501 04:04:29.332831       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.755134    4352 command_runner.go:130] ! I0501 04:04:39.342867       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.755214    4352 command_runner.go:130] ! I0501 04:04:39.342901       1 main.go:227] handling current node
	I0501 04:16:51.755214    4352 command_runner.go:130] ! I0501 04:04:39.342914       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.755248    4352 command_runner.go:130] ! I0501 04:04:39.342921       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.755248    4352 command_runner.go:130] ! I0501 04:04:39.343433       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.755248    4352 command_runner.go:130] ! I0501 04:04:39.343593       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.755248    4352 command_runner.go:130] ! I0501 04:04:49.364771       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.755248    4352 command_runner.go:130] ! I0501 04:04:49.364905       1 main.go:227] handling current node
	I0501 04:16:51.755248    4352 command_runner.go:130] ! I0501 04:04:49.364921       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.755248    4352 command_runner.go:130] ! I0501 04:04:49.364930       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.755248    4352 command_runner.go:130] ! I0501 04:04:49.365166       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.755248    4352 command_runner.go:130] ! I0501 04:04:49.365205       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.755248    4352 command_runner.go:130] ! I0501 04:04:59.379243       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.755248    4352 command_runner.go:130] ! I0501 04:04:59.379352       1 main.go:227] handling current node
	I0501 04:16:51.755358    4352 command_runner.go:130] ! I0501 04:04:59.379369       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.755358    4352 command_runner.go:130] ! I0501 04:04:59.379377       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.755401    4352 command_runner.go:130] ! I0501 04:04:59.379531       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.755401    4352 command_runner.go:130] ! I0501 04:04:59.379564       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.755401    4352 command_runner.go:130] ! I0501 04:05:09.389743       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.755454    4352 command_runner.go:130] ! I0501 04:05:09.390518       1 main.go:227] handling current node
	I0501 04:16:51.755454    4352 command_runner.go:130] ! I0501 04:05:09.390622       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.755454    4352 command_runner.go:130] ! I0501 04:05:09.390636       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.755513    4352 command_runner.go:130] ! I0501 04:05:09.390894       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.755513    4352 command_runner.go:130] ! I0501 04:05:09.391049       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.755540    4352 command_runner.go:130] ! I0501 04:05:19.400837       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.755540    4352 command_runner.go:130] ! I0501 04:05:19.401285       1 main.go:227] handling current node
	I0501 04:16:51.755571    4352 command_runner.go:130] ! I0501 04:05:19.401439       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.755571    4352 command_runner.go:130] ! I0501 04:05:19.401572       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.755571    4352 command_runner.go:130] ! I0501 04:05:19.401956       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.755571    4352 command_runner.go:130] ! I0501 04:05:19.402136       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.755625    4352 command_runner.go:130] ! I0501 04:05:29.422040       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.755625    4352 command_runner.go:130] ! I0501 04:05:29.422249       1 main.go:227] handling current node
	I0501 04:16:51.755667    4352 command_runner.go:130] ! I0501 04:05:29.422285       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.755667    4352 command_runner.go:130] ! I0501 04:05:29.422311       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.755713    4352 command_runner.go:130] ! I0501 04:05:29.422521       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.755713    4352 command_runner.go:130] ! I0501 04:05:29.422723       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.755756    4352 command_runner.go:130] ! I0501 04:05:39.429807       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.755756    4352 command_runner.go:130] ! I0501 04:05:39.429856       1 main.go:227] handling current node
	I0501 04:16:51.755756    4352 command_runner.go:130] ! I0501 04:05:39.429874       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.755756    4352 command_runner.go:130] ! I0501 04:05:39.429881       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.755811    4352 command_runner.go:130] ! I0501 04:05:39.430903       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.755811    4352 command_runner.go:130] ! I0501 04:05:39.431340       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.755854    4352 command_runner.go:130] ! I0501 04:05:49.445455       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.755854    4352 command_runner.go:130] ! I0501 04:05:49.445594       1 main.go:227] handling current node
	I0501 04:16:51.755854    4352 command_runner.go:130] ! I0501 04:05:49.445610       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.755903    4352 command_runner.go:130] ! I0501 04:05:49.445619       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.755903    4352 command_runner.go:130] ! I0501 04:05:49.445751       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.755938    4352 command_runner.go:130] ! I0501 04:05:49.445765       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:05:59.461135       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:05:59.461248       1 main.go:227] handling current node
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:05:59.461264       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:05:59.461273       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:05:59.461947       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:05:59.462094       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:06:09.469509       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:06:09.469615       1 main.go:227] handling current node
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:06:09.469636       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:06:09.469646       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:06:09.470218       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:06:09.470387       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:06:19.486501       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:06:19.486605       1 main.go:227] handling current node
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:06:19.486621       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:06:19.486629       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:06:19.486864       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:06:19.486946       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:06:29.503311       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:06:29.503476       1 main.go:227] handling current node
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:06:29.503492       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:06:29.503503       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:06:29.503633       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:06:29.503843       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:06:39.528749       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:06:39.528837       1 main.go:227] handling current node
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:06:39.528853       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:06:39.528861       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:06:39.529235       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:06:39.529373       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:06:49.535984       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.755969    4352 command_runner.go:130] ! I0501 04:06:49.536067       1 main.go:227] handling current node
	I0501 04:16:51.756550    4352 command_runner.go:130] ! I0501 04:06:49.536082       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.756550    4352 command_runner.go:130] ! I0501 04:06:49.536092       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.756550    4352 command_runner.go:130] ! I0501 04:06:49.536689       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.756602    4352 command_runner.go:130] ! I0501 04:06:49.536802       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.756602    4352 command_runner.go:130] ! I0501 04:06:59.550480       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.756642    4352 command_runner.go:130] ! I0501 04:06:59.551072       1 main.go:227] handling current node
	I0501 04:16:51.756642    4352 command_runner.go:130] ! I0501 04:06:59.551257       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:06:59.551358       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:06:59.551696       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:06:59.551781       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:09.569460       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:09.569627       1 main.go:227] handling current node
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:09.569642       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:09.569651       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:09.570296       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:09.570434       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:19.577507       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:19.577599       1 main.go:227] handling current node
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:19.577615       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:19.577730       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:19.578102       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:19.578208       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:29.592703       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:29.592845       1 main.go:227] handling current node
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:29.592861       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:29.592869       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:29.593139       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:29.593174       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:39.602034       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:39.602064       1 main.go:227] handling current node
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:39.602077       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:39.602084       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:39.602283       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:39.602300       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:49.837563       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:49.837638       1 main.go:227] handling current node
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:49.837652       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:49.837660       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:49.837875       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:49.837955       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:59.851818       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:59.852109       1 main.go:227] handling current node
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:59.852127       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:59.852753       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:59.853129       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:07:59.853164       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.756685    4352 command_runner.go:130] ! I0501 04:08:09.860338       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.757288    4352 command_runner.go:130] ! I0501 04:08:09.860453       1 main.go:227] handling current node
	I0501 04:16:51.757288    4352 command_runner.go:130] ! I0501 04:08:09.860472       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.757288    4352 command_runner.go:130] ! I0501 04:08:09.860482       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.757340    4352 command_runner.go:130] ! I0501 04:08:09.860626       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.757340    4352 command_runner.go:130] ! I0501 04:08:09.861316       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.757340    4352 command_runner.go:130] ! I0501 04:08:19.877403       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.757340    4352 command_runner.go:130] ! I0501 04:08:19.877515       1 main.go:227] handling current node
	I0501 04:16:51.757340    4352 command_runner.go:130] ! I0501 04:08:19.877530       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.757340    4352 command_runner.go:130] ! I0501 04:08:19.877538       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.757340    4352 command_runner.go:130] ! I0501 04:08:19.877838       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.757340    4352 command_runner.go:130] ! I0501 04:08:19.877874       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.757451    4352 command_runner.go:130] ! I0501 04:08:29.892899       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.757451    4352 command_runner.go:130] ! I0501 04:08:29.892926       1 main.go:227] handling current node
	I0501 04:16:51.757451    4352 command_runner.go:130] ! I0501 04:08:29.892937       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.757451    4352 command_runner.go:130] ! I0501 04:08:29.892944       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.757517    4352 command_runner.go:130] ! I0501 04:08:29.893106       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.757517    4352 command_runner.go:130] ! I0501 04:08:29.893180       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.757517    4352 command_runner.go:130] ! I0501 04:08:39.901877       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.757517    4352 command_runner.go:130] ! I0501 04:08:39.901929       1 main.go:227] handling current node
	I0501 04:16:51.757588    4352 command_runner.go:130] ! I0501 04:08:39.901943       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.757588    4352 command_runner.go:130] ! I0501 04:08:39.901951       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.757588    4352 command_runner.go:130] ! I0501 04:08:39.902578       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.757588    4352 command_runner.go:130] ! I0501 04:08:39.902678       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.757659    4352 command_runner.go:130] ! I0501 04:08:49.918941       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.757659    4352 command_runner.go:130] ! I0501 04:08:49.919115       1 main.go:227] handling current node
	I0501 04:16:51.757659    4352 command_runner.go:130] ! I0501 04:08:49.919130       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.757659    4352 command_runner.go:130] ! I0501 04:08:49.919139       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.757719    4352 command_runner.go:130] ! I0501 04:08:49.919950       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.757719    4352 command_runner.go:130] ! I0501 04:08:49.919968       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.757719    4352 command_runner.go:130] ! I0501 04:08:59.933101       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.757719    4352 command_runner.go:130] ! I0501 04:08:59.933154       1 main.go:227] handling current node
	I0501 04:16:51.757719    4352 command_runner.go:130] ! I0501 04:08:59.933648       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.757794    4352 command_runner.go:130] ! I0501 04:08:59.933667       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.757794    4352 command_runner.go:130] ! I0501 04:08:59.934094       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.757835    4352 command_runner.go:130] ! I0501 04:08:59.934127       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.757835    4352 command_runner.go:130] ! I0501 04:09:09.948569       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.757835    4352 command_runner.go:130] ! I0501 04:09:09.948615       1 main.go:227] handling current node
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:09:09.948629       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:09:09.948637       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:09:09.949057       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:09:09.949076       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:09:19.958099       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:09:19.958261       1 main.go:227] handling current node
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:09:19.958282       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:09:19.958294       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:09:19.958880       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:09:19.959055       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:09:29.975626       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:09:29.975765       1 main.go:227] handling current node
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:09:29.975790       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:09:29.975803       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:09:29.976360       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:09:29.976488       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:09:39.985296       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:09:39.985455       1 main.go:227] handling current node
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:09:39.985488       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:09:39.985497       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:09:39.986552       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:09:39.986590       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:09:49.995944       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:09:49.996021       1 main.go:227] handling current node
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:09:49.996036       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:09:49.996044       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:09:49.996649       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:09:49.996720       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:10:00.003190       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:10:00.003239       1 main.go:227] handling current node
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:10:00.003253       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:10:00.003261       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:10:00.003479       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:10:00.003516       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:10:10.023328       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:10:10.023430       1 main.go:227] handling current node
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:10:10.023445       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.757876    4352 command_runner.go:130] ! I0501 04:10:10.023460       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.758456    4352 command_runner.go:130] ! I0501 04:10:10.023613       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.758456    4352 command_runner.go:130] ! I0501 04:10:10.023647       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.758456    4352 command_runner.go:130] ! I0501 04:10:20.030526       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.758456    4352 command_runner.go:130] ! I0501 04:10:20.030616       1 main.go:227] handling current node
	I0501 04:16:51.758456    4352 command_runner.go:130] ! I0501 04:10:20.030632       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.758456    4352 command_runner.go:130] ! I0501 04:10:20.030641       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.758456    4352 command_runner.go:130] ! I0501 04:10:20.030856       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.758456    4352 command_runner.go:130] ! I0501 04:10:20.030980       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.758456    4352 command_runner.go:130] ! I0501 04:10:30.038164       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.758456    4352 command_runner.go:130] ! I0501 04:10:30.038263       1 main.go:227] handling current node
	I0501 04:16:51.758456    4352 command_runner.go:130] ! I0501 04:10:30.038278       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.758638    4352 command_runner.go:130] ! I0501 04:10:30.038287       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.758638    4352 command_runner.go:130] ! I0501 04:10:30.038931       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.758638    4352 command_runner.go:130] ! I0501 04:10:30.039072       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.758638    4352 command_runner.go:130] ! I0501 04:10:40.053866       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.758638    4352 command_runner.go:130] ! I0501 04:10:40.053915       1 main.go:227] handling current node
	I0501 04:16:51.758638    4352 command_runner.go:130] ! I0501 04:10:40.053929       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.758724    4352 command_runner.go:130] ! I0501 04:10:40.053936       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.758724    4352 command_runner.go:130] ! I0501 04:10:40.054259       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.758724    4352 command_runner.go:130] ! I0501 04:10:40.054295       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.758724    4352 command_runner.go:130] ! I0501 04:10:50.066490       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.758724    4352 command_runner.go:130] ! I0501 04:10:50.066542       1 main.go:227] handling current node
	I0501 04:16:51.758724    4352 command_runner.go:130] ! I0501 04:10:50.066560       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.758724    4352 command_runner.go:130] ! I0501 04:10:50.066567       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.758724    4352 command_runner.go:130] ! I0501 04:10:50.067066       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:51.758807    4352 command_runner.go:130] ! I0501 04:10:50.067210       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:51.758807    4352 command_runner.go:130] ! I0501 04:11:00.075901       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.758807    4352 command_runner.go:130] ! I0501 04:11:00.076052       1 main.go:227] handling current node
	I0501 04:16:51.758807    4352 command_runner.go:130] ! I0501 04:11:00.076069       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.758807    4352 command_runner.go:130] ! I0501 04:11:00.076078       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.758807    4352 command_runner.go:130] ! I0501 04:11:10.087907       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.758807    4352 command_runner.go:130] ! I0501 04:11:10.088124       1 main.go:227] handling current node
	I0501 04:16:51.758807    4352 command_runner.go:130] ! I0501 04:11:10.088140       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.758807    4352 command_runner.go:130] ! I0501 04:11:10.088148       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.758807    4352 command_runner.go:130] ! I0501 04:11:10.088875       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:51.758941    4352 command_runner.go:130] ! I0501 04:11:10.088954       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:51.758941    4352 command_runner.go:130] ! I0501 04:11:10.089178       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.28.223.145 Flags: [] Table: 0} 
	I0501 04:16:51.758941    4352 command_runner.go:130] ! I0501 04:11:20.103399       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.758941    4352 command_runner.go:130] ! I0501 04:11:20.103511       1 main.go:227] handling current node
	I0501 04:16:51.758941    4352 command_runner.go:130] ! I0501 04:11:20.103528       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.759029    4352 command_runner.go:130] ! I0501 04:11:20.103538       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.759029    4352 command_runner.go:130] ! I0501 04:11:20.103879       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:51.759029    4352 command_runner.go:130] ! I0501 04:11:20.103916       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:51.759067    4352 command_runner.go:130] ! I0501 04:11:30.114473       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:11:30.115083       1 main.go:227] handling current node
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:11:30.115256       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:11:30.115463       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:11:30.116474       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:11:30.116611       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:11:40.124324       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:11:40.124371       1 main.go:227] handling current node
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:11:40.124384       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:11:40.124392       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:11:40.124558       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:11:40.124570       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:11:50.138059       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:11:50.138102       1 main.go:227] handling current node
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:11:50.138116       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:11:50.138123       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:11:50.138826       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:11:50.138936       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:12:00.155704       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:12:00.155799       1 main.go:227] handling current node
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:12:00.155823       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:12:00.155832       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:12:00.156502       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:12:00.156549       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:12:10.164706       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:12:10.164754       1 main.go:227] handling current node
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:12:10.164767       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:12:10.164774       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:12:10.164887       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:12:10.165094       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:12:20.178957       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:12:20.179142       1 main.go:227] handling current node
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:12:20.179159       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:12:20.179178       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.759098    4352 command_runner.go:130] ! I0501 04:12:20.179694       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:51.759680    4352 command_runner.go:130] ! I0501 04:12:20.179871       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:51.759680    4352 command_runner.go:130] ! I0501 04:12:30.195829       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.759680    4352 command_runner.go:130] ! I0501 04:12:30.196251       1 main.go:227] handling current node
	I0501 04:16:51.759680    4352 command_runner.go:130] ! I0501 04:12:30.196390       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.759680    4352 command_runner.go:130] ! I0501 04:12:30.196494       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.759680    4352 command_runner.go:130] ! I0501 04:12:30.197097       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:51.759680    4352 command_runner.go:130] ! I0501 04:12:30.197115       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:51.759680    4352 command_runner.go:130] ! I0501 04:12:40.209828       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.759680    4352 command_runner.go:130] ! I0501 04:12:40.210095       1 main.go:227] handling current node
	I0501 04:16:51.759680    4352 command_runner.go:130] ! I0501 04:12:40.210203       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.759680    4352 command_runner.go:130] ! I0501 04:12:40.210235       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.759849    4352 command_runner.go:130] ! I0501 04:12:40.210464       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:51.759849    4352 command_runner.go:130] ! I0501 04:12:40.210571       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:51.759849    4352 command_runner.go:130] ! I0501 04:12:50.223457       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.759849    4352 command_runner.go:130] ! I0501 04:12:50.224132       1 main.go:227] handling current node
	I0501 04:16:51.759849    4352 command_runner.go:130] ! I0501 04:12:50.224156       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.759849    4352 command_runner.go:130] ! I0501 04:12:50.224167       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.759849    4352 command_runner.go:130] ! I0501 04:12:50.224602       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:51.759849    4352 command_runner.go:130] ! I0501 04:12:50.224704       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:51.759849    4352 command_runner.go:130] ! I0501 04:13:00.241709       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:51.759849    4352 command_runner.go:130] ! I0501 04:13:00.241841       1 main.go:227] handling current node
	I0501 04:16:51.759849    4352 command_runner.go:130] ! I0501 04:13:00.242114       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:51.759849    4352 command_runner.go:130] ! I0501 04:13:00.242393       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:51.759992    4352 command_runner.go:130] ! I0501 04:13:00.242840       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:51.759992    4352 command_runner.go:130] ! I0501 04:13:00.242886       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
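The kindnet entries above (main.go:223/227/250, plus the routes.go:62 "Adding route" line once multinode-289800-m03 moved from 10.244.1.0/24-style bookkeeping to CIDR 10.244.3.0/24) are its periodic reconciliation loop: every ~10s, for each remote node, make sure a route to that node's PodCIDR via the node's InternalIP is installed. A minimal sketch of that step, assuming the vishvananda/netlink package; syncRoute is a hypothetical helper, not kindnet's actual source:

	package main

	import (
		"log"
		"net"

		"github.com/vishvananda/netlink"
	)

	// syncRoute installs (or replaces) the route "podCIDR via nodeIP",
	// mirroring the routes.go:62 "Adding route {... Dst: 10.244.3.0/24
	// ... Gw: 172.28.223.145 ...}" entry in the log above.
	func syncRoute(nodeIP, podCIDR string) error {
		_, dst, err := net.ParseCIDR(podCIDR)
		if err != nil {
			return err
		}
		route := &netlink.Route{Dst: dst, Gw: net.ParseIP(nodeIP)}
		log.Printf("Adding route %+v", route)
		return netlink.RouteReplace(route) // idempotent across reconcile cycles
	}

	func main() {
		// Values taken from the log: multinode-289800-m03 after its CIDR change.
		if err := syncRoute("172.28.223.145", "10.244.3.0/24"); err != nil {
			log.Fatal(err)
		}
	}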
	I0501 04:16:51.779907    4352 logs.go:123] Gathering logs for dmesg ...
	I0501 04:16:51.779907    4352 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 04:16:51.808858    4352 command_runner.go:130] > [May 1 04:13] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0501 04:16:51.808950    4352 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0501 04:16:51.808950    4352 command_runner.go:130] > [  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0501 04:16:51.808990    4352 command_runner.go:130] > [  +0.128235] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0501 04:16:51.808990    4352 command_runner.go:130] > [  +0.023886] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0501 04:16:51.808990    4352 command_runner.go:130] > [  +0.000005] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0501 04:16:51.808990    4352 command_runner.go:130] > [  +0.000001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0501 04:16:51.808990    4352 command_runner.go:130] > [  +0.057986] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0501 04:16:51.808990    4352 command_runner.go:130] > [  +0.022012] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0501 04:16:51.808990    4352 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0501 04:16:51.808990    4352 command_runner.go:130] > [  +5.683380] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0501 04:16:51.809132    4352 command_runner.go:130] > [May 1 04:14] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0501 04:16:51.809168    4352 command_runner.go:130] > [  +1.282885] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0501 04:16:51.809168    4352 command_runner.go:130] > [  +7.215175] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0501 04:16:51.809168    4352 command_runner.go:130] > [  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0501 04:16:51.809168    4352 command_runner.go:130] > [  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	I0501 04:16:51.809225    4352 command_runner.go:130] > [ +49.815364] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	I0501 04:16:51.809225    4352 command_runner.go:130] > [  +0.200985] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	I0501 04:16:51.809259    4352 command_runner.go:130] > [May 1 04:15] systemd-fstab-generator[972]: Ignoring "noauto" option for root device
	I0501 04:16:51.809306    4352 command_runner.go:130] > [  +0.127967] kauditd_printk_skb: 73 callbacks suppressed
	I0501 04:16:51.809306    4352 command_runner.go:130] > [  +0.582263] systemd-fstab-generator[1012]: Ignoring "noauto" option for root device
	I0501 04:16:51.809340    4352 command_runner.go:130] > [  +0.225161] systemd-fstab-generator[1023]: Ignoring "noauto" option for root device
	I0501 04:16:51.809340    4352 command_runner.go:130] > [  +0.250911] systemd-fstab-generator[1037]: Ignoring "noauto" option for root device
	I0501 04:16:51.809387    4352 command_runner.go:130] > [  +3.012463] systemd-fstab-generator[1226]: Ignoring "noauto" option for root device
	I0501 04:16:51.809387    4352 command_runner.go:130] > [  +0.224116] systemd-fstab-generator[1238]: Ignoring "noauto" option for root device
	I0501 04:16:51.809387    4352 command_runner.go:130] > [  +0.208959] systemd-fstab-generator[1250]: Ignoring "noauto" option for root device
	I0501 04:16:51.809421    4352 command_runner.go:130] > [  +0.295566] systemd-fstab-generator[1265]: Ignoring "noauto" option for root device
	I0501 04:16:51.809421    4352 command_runner.go:130] > [  +0.942002] systemd-fstab-generator[1376]: Ignoring "noauto" option for root device
	I0501 04:16:51.809421    4352 command_runner.go:130] > [  +0.104482] kauditd_printk_skb: 205 callbacks suppressed
	I0501 04:16:51.809473    4352 command_runner.go:130] > [  +4.196160] systemd-fstab-generator[1518]: Ignoring "noauto" option for root device
	I0501 04:16:51.809473    4352 command_runner.go:130] > [  +1.305789] kauditd_printk_skb: 44 callbacks suppressed
	I0501 04:16:51.809511    4352 command_runner.go:130] > [  +5.930267] kauditd_printk_skb: 30 callbacks suppressed
	I0501 04:16:51.809511    4352 command_runner.go:130] > [  +4.234940] systemd-fstab-generator[2337]: Ignoring "noauto" option for root device
	I0501 04:16:51.809511    4352 command_runner.go:130] > [  +7.700271] kauditd_printk_skb: 70 callbacks suppressed
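The dmesg block above is collected by running a single shell pipeline on the node (see the ssh_runner.go:195 "Run:" line before it). A rough local equivalent, sketched with os/exec instead of minikube's SSH runner; the flags are copied verbatim from that Run: line:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Kernel messages at level warn and worse, human-readable, no color,
		// limited to the last 400 lines — same pipeline as in the log above.
		cmd := exec.Command("/bin/bash", "-c",
			`sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`)
		out, err := cmd.CombinedOutput()
		if err != nil {
			fmt.Println("dmesg failed:", err)
		}
		fmt.Print(string(out))
	}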
	I0501 04:16:51.812577    4352 logs.go:123] Gathering logs for describe nodes ...
	I0501 04:16:51.813154    4352 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0501 04:16:52.084833    4352 command_runner.go:130] > Name:               multinode-289800
	I0501 04:16:52.084833    4352 command_runner.go:130] > Roles:              control-plane
	I0501 04:16:52.084833    4352 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0501 04:16:52.084833    4352 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0501 04:16:52.084833    4352 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0501 04:16:52.084833    4352 command_runner.go:130] >                     kubernetes.io/hostname=multinode-289800
	I0501 04:16:52.084833    4352 command_runner.go:130] >                     kubernetes.io/os=linux
	I0501 04:16:52.084833    4352 command_runner.go:130] >                     minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	I0501 04:16:52.084833    4352 command_runner.go:130] >                     minikube.k8s.io/name=multinode-289800
	I0501 04:16:52.084833    4352 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0501 04:16:52.084833    4352 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_05_01T03_52_17_0700
	I0501 04:16:52.084833    4352 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0
	I0501 04:16:52.084833    4352 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0501 04:16:52.084833    4352 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0501 04:16:52.084833    4352 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0501 04:16:52.084833    4352 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0501 04:16:52.084833    4352 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0501 04:16:52.084833    4352 command_runner.go:130] > CreationTimestamp:  Wed, 01 May 2024 03:52:12 +0000
	I0501 04:16:52.084833    4352 command_runner.go:130] > Taints:             <none>
	I0501 04:16:52.084833    4352 command_runner.go:130] > Unschedulable:      false
	I0501 04:16:52.084833    4352 command_runner.go:130] > Lease:
	I0501 04:16:52.084833    4352 command_runner.go:130] >   HolderIdentity:  multinode-289800
	I0501 04:16:52.084833    4352 command_runner.go:130] >   AcquireTime:     <unset>
	I0501 04:16:52.084833    4352 command_runner.go:130] >   RenewTime:       Wed, 01 May 2024 04:16:43 +0000
	I0501 04:16:52.084833    4352 command_runner.go:130] > Conditions:
	I0501 04:16:52.084833    4352 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0501 04:16:52.084833    4352 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0501 04:16:52.084833    4352 command_runner.go:130] >   MemoryPressure   False   Wed, 01 May 2024 04:16:16 +0000   Wed, 01 May 2024 03:52:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0501 04:16:52.084833    4352 command_runner.go:130] >   DiskPressure     False   Wed, 01 May 2024 04:16:16 +0000   Wed, 01 May 2024 03:52:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0501 04:16:52.084833    4352 command_runner.go:130] >   PIDPressure      False   Wed, 01 May 2024 04:16:16 +0000   Wed, 01 May 2024 03:52:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0501 04:16:52.084833    4352 command_runner.go:130] >   Ready            True    Wed, 01 May 2024 04:16:16 +0000   Wed, 01 May 2024 04:16:16 +0000   KubeletReady                 kubelet is posting ready status
	I0501 04:16:52.084833    4352 command_runner.go:130] > Addresses:
	I0501 04:16:52.084833    4352 command_runner.go:130] >   InternalIP:  172.28.209.199
	I0501 04:16:52.084833    4352 command_runner.go:130] >   Hostname:    multinode-289800
	I0501 04:16:52.084833    4352 command_runner.go:130] > Capacity:
	I0501 04:16:52.084833    4352 command_runner.go:130] >   cpu:                2
	I0501 04:16:52.084833    4352 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0501 04:16:52.084833    4352 command_runner.go:130] >   hugepages-2Mi:      0
	I0501 04:16:52.084833    4352 command_runner.go:130] >   memory:             2164264Ki
	I0501 04:16:52.084833    4352 command_runner.go:130] >   pods:               110
	I0501 04:16:52.084833    4352 command_runner.go:130] > Allocatable:
	I0501 04:16:52.084833    4352 command_runner.go:130] >   cpu:                2
	I0501 04:16:52.084833    4352 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0501 04:16:52.084833    4352 command_runner.go:130] >   hugepages-2Mi:      0
	I0501 04:16:52.084833    4352 command_runner.go:130] >   memory:             2164264Ki
	I0501 04:16:52.085420    4352 command_runner.go:130] >   pods:               110
	I0501 04:16:52.085420    4352 command_runner.go:130] > System Info:
	I0501 04:16:52.085420    4352 command_runner.go:130] >   Machine ID:                 f135d6c1a75448b6b1c169fdf59297ca
	I0501 04:16:52.085420    4352 command_runner.go:130] >   System UUID:                3951d3b5-ddd4-174a-8cfe-7f86ac2b780b
	I0501 04:16:52.085474    4352 command_runner.go:130] >   Boot ID:                    e7d6b770-0c88-4d74-8b75-d55dec0d45be
	I0501 04:16:52.085474    4352 command_runner.go:130] >   Kernel Version:             5.10.207
	I0501 04:16:52.085474    4352 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0501 04:16:52.085474    4352 command_runner.go:130] >   Operating System:           linux
	I0501 04:16:52.085474    4352 command_runner.go:130] >   Architecture:               amd64
	I0501 04:16:52.085474    4352 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0501 04:16:52.085474    4352 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0501 04:16:52.085543    4352 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0501 04:16:52.085543    4352 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0501 04:16:52.085543    4352 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0501 04:16:52.085581    4352 command_runner.go:130] > Non-terminated Pods:          (10 in total)
	I0501 04:16:52.085581    4352 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0501 04:16:52.085636    4352 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0501 04:16:52.085636    4352 command_runner.go:130] >   default                     busybox-fc5497c4f-cc6mk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0501 04:16:52.085670    4352 command_runner.go:130] >   kube-system                 coredns-7db6d8ff4d-8w9hq                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	I0501 04:16:52.085702    4352 command_runner.go:130] >   kube-system                 coredns-7db6d8ff4d-x9zrw                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	I0501 04:16:52.085702    4352 command_runner.go:130] >   kube-system                 etcd-multinode-289800                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         70s
	I0501 04:16:52.085702    4352 command_runner.go:130] >   kube-system                 kindnet-vcxkr                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	I0501 04:16:52.085702    4352 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-289800             250m (12%)    0 (0%)      0 (0%)           0 (0%)         70s
	I0501 04:16:52.085702    4352 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-289800    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I0501 04:16:52.085702    4352 command_runner.go:130] >   kube-system                 kube-proxy-bp9zx                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I0501 04:16:52.085702    4352 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-289800             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I0501 04:16:52.085702    4352 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I0501 04:16:52.085702    4352 command_runner.go:130] > Allocated resources:
	I0501 04:16:52.085702    4352 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0501 04:16:52.085702    4352 command_runner.go:130] >   Resource           Requests     Limits
	I0501 04:16:52.085702    4352 command_runner.go:130] >   --------           --------     ------
	I0501 04:16:52.085702    4352 command_runner.go:130] >   cpu                950m (47%)   100m (5%)
	I0501 04:16:52.085702    4352 command_runner.go:130] >   memory             290Mi (13%)  390Mi (18%)
	I0501 04:16:52.085702    4352 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0501 04:16:52.085702    4352 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0501 04:16:52.085702    4352 command_runner.go:130] > Events:
	I0501 04:16:52.085702    4352 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0501 04:16:52.085702    4352 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0501 04:16:52.085702    4352 command_runner.go:130] >   Normal  Starting                 24m                kube-proxy       
	I0501 04:16:52.085702    4352 command_runner.go:130] >   Normal  Starting                 66s                kube-proxy       
	I0501 04:16:52.085702    4352 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m                kubelet          Node multinode-289800 status is now: NodeHasSufficientMemory
	I0501 04:16:52.085702    4352 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m                kubelet          Node multinode-289800 status is now: NodeHasSufficientMemory
	I0501 04:16:52.085702    4352 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m                kubelet          Node multinode-289800 status is now: NodeHasNoDiskPressure
	I0501 04:16:52.085702    4352 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m                kubelet          Node multinode-289800 status is now: NodeHasSufficientPID
	I0501 04:16:52.085702    4352 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0501 04:16:52.085702    4352 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I0501 04:16:52.085702    4352 command_runner.go:130] >   Normal  RegisteredNode           24m                node-controller  Node multinode-289800 event: Registered Node multinode-289800 in Controller
	I0501 04:16:52.085702    4352 command_runner.go:130] >   Normal  NodeReady                24m                kubelet          Node multinode-289800 status is now: NodeReady
	I0501 04:16:52.086256    4352 command_runner.go:130] >   Normal  Starting                 76s                kubelet          Starting kubelet.
	I0501 04:16:52.086305    4352 command_runner.go:130] >   Normal  NodeHasSufficientMemory  75s (x8 over 76s)  kubelet          Node multinode-289800 status is now: NodeHasSufficientMemory
	I0501 04:16:52.086305    4352 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    75s (x8 over 76s)  kubelet          Node multinode-289800 status is now: NodeHasNoDiskPressure
	I0501 04:16:52.086305    4352 command_runner.go:130] >   Normal  NodeHasSufficientPID     75s (x7 over 76s)  kubelet          Node multinode-289800 status is now: NodeHasSufficientPID
	I0501 04:16:52.086305    4352 command_runner.go:130] >   Normal  NodeAllocatableEnforced  75s                kubelet          Updated Node Allocatable limit across pods
	I0501 04:16:52.086305    4352 command_runner.go:130] >   Normal  RegisteredNode           57s                node-controller  Node multinode-289800 event: Registered Node multinode-289800 in Controller
	I0501 04:16:52.086305    4352 command_runner.go:130] > Name:               multinode-289800-m02
	I0501 04:16:52.086398    4352 command_runner.go:130] > Roles:              <none>
	I0501 04:16:52.086398    4352 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0501 04:16:52.086398    4352 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0501 04:16:52.086398    4352 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0501 04:16:52.086443    4352 command_runner.go:130] >                     kubernetes.io/hostname=multinode-289800-m02
	I0501 04:16:52.086443    4352 command_runner.go:130] >                     kubernetes.io/os=linux
	I0501 04:16:52.086443    4352 command_runner.go:130] >                     minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	I0501 04:16:52.086443    4352 command_runner.go:130] >                     minikube.k8s.io/name=multinode-289800
	I0501 04:16:52.086502    4352 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0501 04:16:52.086502    4352 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_05_01T03_55_27_0700
	I0501 04:16:52.086502    4352 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0
	I0501 04:16:52.086545    4352 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0501 04:16:52.086586    4352 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0501 04:16:52.086586    4352 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0501 04:16:52.086628    4352 command_runner.go:130] > CreationTimestamp:  Wed, 01 May 2024 03:55:27 +0000
	I0501 04:16:52.086628    4352 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0501 04:16:52.086628    4352 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0501 04:16:52.086701    4352 command_runner.go:130] > Unschedulable:      false
	I0501 04:16:52.086701    4352 command_runner.go:130] > Lease:
	I0501 04:16:52.086701    4352 command_runner.go:130] >   HolderIdentity:  multinode-289800-m02
	I0501 04:16:52.086701    4352 command_runner.go:130] >   AcquireTime:     <unset>
	I0501 04:16:52.086701    4352 command_runner.go:130] >   RenewTime:       Wed, 01 May 2024 04:12:29 +0000
	I0501 04:16:52.086701    4352 command_runner.go:130] > Conditions:
	I0501 04:16:52.086748    4352 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0501 04:16:52.086785    4352 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0501 04:16:52.086817    4352 command_runner.go:130] >   MemoryPressure   Unknown   Wed, 01 May 2024 04:11:48 +0000   Wed, 01 May 2024 04:16:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0501 04:16:52.086817    4352 command_runner.go:130] >   DiskPressure     Unknown   Wed, 01 May 2024 04:11:48 +0000   Wed, 01 May 2024 04:16:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0501 04:16:52.086817    4352 command_runner.go:130] >   PIDPressure      Unknown   Wed, 01 May 2024 04:11:48 +0000   Wed, 01 May 2024 04:16:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0501 04:16:52.086881    4352 command_runner.go:130] >   Ready            Unknown   Wed, 01 May 2024 04:11:48 +0000   Wed, 01 May 2024 04:16:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0501 04:16:52.086881    4352 command_runner.go:130] > Addresses:
	I0501 04:16:52.086881    4352 command_runner.go:130] >   InternalIP:  172.28.219.162
	I0501 04:16:52.086881    4352 command_runner.go:130] >   Hostname:    multinode-289800-m02
	I0501 04:16:52.086923    4352 command_runner.go:130] > Capacity:
	I0501 04:16:52.086923    4352 command_runner.go:130] >   cpu:                2
	I0501 04:16:52.086923    4352 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0501 04:16:52.086923    4352 command_runner.go:130] >   hugepages-2Mi:      0
	I0501 04:16:52.086923    4352 command_runner.go:130] >   memory:             2164264Ki
	I0501 04:16:52.086973    4352 command_runner.go:130] >   pods:               110
	I0501 04:16:52.086973    4352 command_runner.go:130] > Allocatable:
	I0501 04:16:52.086973    4352 command_runner.go:130] >   cpu:                2
	I0501 04:16:52.086973    4352 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0501 04:16:52.087016    4352 command_runner.go:130] >   hugepages-2Mi:      0
	I0501 04:16:52.087016    4352 command_runner.go:130] >   memory:             2164264Ki
	I0501 04:16:52.087016    4352 command_runner.go:130] >   pods:               110
	I0501 04:16:52.087016    4352 command_runner.go:130] > System Info:
	I0501 04:16:52.087016    4352 command_runner.go:130] >   Machine ID:                 076f7b95819747b9b94c7306ec3a1144
	I0501 04:16:52.087016    4352 command_runner.go:130] >   System UUID:                a38b9d92-b32b-ca41-91ed-de4d374d0e70
	I0501 04:16:52.087016    4352 command_runner.go:130] >   Boot ID:                    c2ea27f4-2800-46b2-ab1f-c82bf0989c34
	I0501 04:16:52.087016    4352 command_runner.go:130] >   Kernel Version:             5.10.207
	I0501 04:16:52.087016    4352 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0501 04:16:52.087016    4352 command_runner.go:130] >   Operating System:           linux
	I0501 04:16:52.087016    4352 command_runner.go:130] >   Architecture:               amd64
	I0501 04:16:52.087016    4352 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0501 04:16:52.087016    4352 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0501 04:16:52.087016    4352 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0501 04:16:52.087551    4352 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0501 04:16:52.087551    4352 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0501 04:16:52.087551    4352 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0501 04:16:52.087597    4352 command_runner.go:130] >   Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0501 04:16:52.087597    4352 command_runner.go:130] >   ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	I0501 04:16:52.087671    4352 command_runner.go:130] >   default                     busybox-fc5497c4f-tbxxx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0501 04:16:52.087671    4352 command_runner.go:130] >   kube-system                 kindnet-gzz7p              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	I0501 04:16:52.087708    4352 command_runner.go:130] >   kube-system                 kube-proxy-rlzp8           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0501 04:16:52.087708    4352 command_runner.go:130] > Allocated resources:
	I0501 04:16:52.087750    4352 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0501 04:16:52.087750    4352 command_runner.go:130] >   Resource           Requests   Limits
	I0501 04:16:52.087750    4352 command_runner.go:130] >   --------           --------   ------
	I0501 04:16:52.087786    4352 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0501 04:16:52.087786    4352 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0501 04:16:52.087786    4352 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0501 04:16:52.087827    4352 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0501 04:16:52.087827    4352 command_runner.go:130] > Events:
	I0501 04:16:52.087827    4352 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0501 04:16:52.087827    4352 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0501 04:16:52.087827    4352 command_runner.go:130] >   Normal  Starting                 21m                kube-proxy       
	I0501 04:16:52.087883    4352 command_runner.go:130] >   Normal  NodeHasSufficientMemory  21m (x2 over 21m)  kubelet          Node multinode-289800-m02 status is now: NodeHasSufficientMemory
	I0501 04:16:52.087883    4352 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    21m (x2 over 21m)  kubelet          Node multinode-289800-m02 status is now: NodeHasNoDiskPressure
	I0501 04:16:52.087931    4352 command_runner.go:130] >   Normal  NodeHasSufficientPID     21m (x2 over 21m)  kubelet          Node multinode-289800-m02 status is now: NodeHasSufficientPID
	I0501 04:16:52.087931    4352 command_runner.go:130] >   Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	I0501 04:16:52.087931    4352 command_runner.go:130] >   Normal  RegisteredNode           21m                node-controller  Node multinode-289800-m02 event: Registered Node multinode-289800-m02 in Controller
	I0501 04:16:52.087988    4352 command_runner.go:130] >   Normal  NodeReady                21m                kubelet          Node multinode-289800-m02 status is now: NodeReady
	I0501 04:16:52.087988    4352 command_runner.go:130] >   Normal  RegisteredNode           57s                node-controller  Node multinode-289800-m02 event: Registered Node multinode-289800-m02 in Controller
	I0501 04:16:52.088030    4352 command_runner.go:130] >   Normal  NodeNotReady             17s                node-controller  Node multinode-289800-m02 status is now: NodeNotReady
	I0501 04:16:52.088030    4352 command_runner.go:130] > Name:               multinode-289800-m03
	I0501 04:16:52.088030    4352 command_runner.go:130] > Roles:              <none>
	I0501 04:16:52.088084    4352 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0501 04:16:52.088084    4352 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0501 04:16:52.088084    4352 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0501 04:16:52.088141    4352 command_runner.go:130] >                     kubernetes.io/hostname=multinode-289800-m03
	I0501 04:16:52.088141    4352 command_runner.go:130] >                     kubernetes.io/os=linux
	I0501 04:16:52.088141    4352 command_runner.go:130] >                     minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	I0501 04:16:52.088141    4352 command_runner.go:130] >                     minikube.k8s.io/name=multinode-289800
	I0501 04:16:52.088193    4352 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0501 04:16:52.088193    4352 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_05_01T04_11_04_0700
	I0501 04:16:52.088234    4352 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0
	I0501 04:16:52.088234    4352 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0501 04:16:52.088274    4352 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0501 04:16:52.088274    4352 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0501 04:16:52.088274    4352 command_runner.go:130] > CreationTimestamp:  Wed, 01 May 2024 04:11:04 +0000
	I0501 04:16:52.088314    4352 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0501 04:16:52.088314    4352 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0501 04:16:52.088314    4352 command_runner.go:130] > Unschedulable:      false
	I0501 04:16:52.088314    4352 command_runner.go:130] > Lease:
	I0501 04:16:52.088365    4352 command_runner.go:130] >   HolderIdentity:  multinode-289800-m03
	I0501 04:16:52.088365    4352 command_runner.go:130] >   AcquireTime:     <unset>
	I0501 04:16:52.088365    4352 command_runner.go:130] >   RenewTime:       Wed, 01 May 2024 04:12:05 +0000
	I0501 04:16:52.088365    4352 command_runner.go:130] > Conditions:
	I0501 04:16:52.088406    4352 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0501 04:16:52.088406    4352 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0501 04:16:52.088446    4352 command_runner.go:130] >   MemoryPressure   Unknown   Wed, 01 May 2024 04:11:11 +0000   Wed, 01 May 2024 04:12:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0501 04:16:52.088486    4352 command_runner.go:130] >   DiskPressure     Unknown   Wed, 01 May 2024 04:11:11 +0000   Wed, 01 May 2024 04:12:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0501 04:16:52.088486    4352 command_runner.go:130] >   PIDPressure      Unknown   Wed, 01 May 2024 04:11:11 +0000   Wed, 01 May 2024 04:12:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0501 04:16:52.088486    4352 command_runner.go:130] >   Ready            Unknown   Wed, 01 May 2024 04:11:11 +0000   Wed, 01 May 2024 04:12:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0501 04:16:52.088486    4352 command_runner.go:130] > Addresses:
	I0501 04:16:52.088538    4352 command_runner.go:130] >   InternalIP:  172.28.223.145
	I0501 04:16:52.088538    4352 command_runner.go:130] >   Hostname:    multinode-289800-m03
	I0501 04:16:52.088538    4352 command_runner.go:130] > Capacity:
	I0501 04:16:52.088579    4352 command_runner.go:130] >   cpu:                2
	I0501 04:16:52.088579    4352 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0501 04:16:52.088579    4352 command_runner.go:130] >   hugepages-2Mi:      0
	I0501 04:16:52.088579    4352 command_runner.go:130] >   memory:             2164264Ki
	I0501 04:16:52.088579    4352 command_runner.go:130] >   pods:               110
	I0501 04:16:52.088620    4352 command_runner.go:130] > Allocatable:
	I0501 04:16:52.088620    4352 command_runner.go:130] >   cpu:                2
	I0501 04:16:52.088620    4352 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0501 04:16:52.088661    4352 command_runner.go:130] >   hugepages-2Mi:      0
	I0501 04:16:52.088661    4352 command_runner.go:130] >   memory:             2164264Ki
	I0501 04:16:52.088661    4352 command_runner.go:130] >   pods:               110
	I0501 04:16:52.088712    4352 command_runner.go:130] > System Info:
	I0501 04:16:52.088712    4352 command_runner.go:130] >   Machine ID:                 7516764892cf41608a001e00e0cc7bb8
	I0501 04:16:52.088712    4352 command_runner.go:130] >   System UUID:                dc77ee49-027d-ec48-b8b1-154ba9e0a06a
	I0501 04:16:52.088753    4352 command_runner.go:130] >   Boot ID:                    bc9f9fd7-7b85-42f6-abac-952a5e1b37b8
	I0501 04:16:52.088753    4352 command_runner.go:130] >   Kernel Version:             5.10.207
	I0501 04:16:52.088793    4352 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0501 04:16:52.088793    4352 command_runner.go:130] >   Operating System:           linux
	I0501 04:16:52.088793    4352 command_runner.go:130] >   Architecture:               amd64
	I0501 04:16:52.088833    4352 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0501 04:16:52.088833    4352 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0501 04:16:52.088833    4352 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0501 04:16:52.088833    4352 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0501 04:16:52.088833    4352 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0501 04:16:52.088902    4352 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0501 04:16:52.088944    4352 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0501 04:16:52.088944    4352 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0501 04:16:52.088986    4352 command_runner.go:130] >   kube-system                 kindnet-4m5vg       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I0501 04:16:52.088986    4352 command_runner.go:130] >   kube-system                 kube-proxy-g8mbm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I0501 04:16:52.088986    4352 command_runner.go:130] > Allocated resources:
	I0501 04:16:52.089028    4352 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0501 04:16:52.089028    4352 command_runner.go:130] >   Resource           Requests   Limits
	I0501 04:16:52.089028    4352 command_runner.go:130] >   --------           --------   ------
	I0501 04:16:52.089028    4352 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0501 04:16:52.089028    4352 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0501 04:16:52.089081    4352 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0501 04:16:52.089081    4352 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0501 04:16:52.089081    4352 command_runner.go:130] > Events:
	I0501 04:16:52.089081    4352 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0501 04:16:52.089159    4352 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0501 04:16:52.089193    4352 command_runner.go:130] >   Normal  Starting                 16m                    kube-proxy       
	I0501 04:16:52.089193    4352 command_runner.go:130] >   Normal  Starting                 5m44s                  kube-proxy       
	I0501 04:16:52.089252    4352 command_runner.go:130] >   Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	I0501 04:16:52.089252    4352 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x2 over 16m)      kubelet          Node multinode-289800-m03 status is now: NodeHasSufficientMemory
	I0501 04:16:52.089285    4352 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x2 over 16m)      kubelet          Node multinode-289800-m03 status is now: NodeHasNoDiskPressure
	I0501 04:16:52.089315    4352 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x2 over 16m)      kubelet          Node multinode-289800-m03 status is now: NodeHasSufficientPID
	I0501 04:16:52.089315    4352 command_runner.go:130] >   Normal  NodeReady                16m                    kubelet          Node multinode-289800-m03 status is now: NodeReady
	I0501 04:16:52.089366    4352 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m48s (x2 over 5m48s)  kubelet          Node multinode-289800-m03 status is now: NodeHasSufficientMemory
	I0501 04:16:52.089366    4352 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m48s (x2 over 5m48s)  kubelet          Node multinode-289800-m03 status is now: NodeHasNoDiskPressure
	I0501 04:16:52.089406    4352 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m48s (x2 over 5m48s)  kubelet          Node multinode-289800-m03 status is now: NodeHasSufficientPID
	I0501 04:16:52.089406    4352 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m48s                  kubelet          Updated Node Allocatable limit across pods
	I0501 04:16:52.089457    4352 command_runner.go:130] >   Normal  RegisteredNode           5m43s                  node-controller  Node multinode-289800-m03 event: Registered Node multinode-289800-m03 in Controller
	I0501 04:16:52.089457    4352 command_runner.go:130] >   Normal  NodeReady                5m41s                  kubelet          Node multinode-289800-m03 status is now: NodeReady
	I0501 04:16:52.089497    4352 command_runner.go:130] >   Normal  NodeNotReady             4m3s                   node-controller  Node multinode-289800-m03 status is now: NodeNotReady
	I0501 04:16:52.089497    4352 command_runner.go:130] >   Normal  RegisteredNode           57s                    node-controller  Node multinode-289800-m03 event: Registered Node multinode-289800-m03 in Controller
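
Both worker nodes in the describe output above report every condition as Unknown with reason NodeStatusUnknown and carry the node.kubernetes.io/unreachable taints; that combination is what the node controller applies once a kubelet stops renewing its node lease. A minimal client-go sketch for spotting such nodes programmatically (an illustrative assumption, not part of the test harness; it assumes a kubeconfig at the standard location):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumption: a kubeconfig at the default path (~/.kube/config).
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		for _, c := range n.Status.Conditions {
    			// Ready != True covers both False and the Unknown state seen above.
    			if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
    				fmt.Printf("%s: Ready=%s (%s)\n", n.Name, c.Status, c.Message)
    			}
    		}
    	}
    }

Against the cluster captured above this would flag multinode-289800-m02 and multinode-289800-m03, whose kubelets stopped posting status.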
	I0501 04:16:52.099507    4352 logs.go:123] Gathering logs for coredns [3e8d5ff9a9e4] ...
	I0501 04:16:52.099507    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8d5ff9a9e4"
	I0501 04:16:52.144980    4352 command_runner.go:130] > .:53
	I0501 04:16:52.145028    4352 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 93f4fe825695a41d5751760d66496ca9593f0bf8b9ec4239a6fbc33c30b6f070cfe5a066c5977052ab0ea80abbafa6083758e385108cb741ae48dc4b780ff0f9
	I0501 04:16:52.145160    4352 command_runner.go:130] > CoreDNS-1.11.1
	I0501 04:16:52.145160    4352 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0501 04:16:52.145160    4352 command_runner.go:130] > [INFO] 127.0.0.1:47823 - 12804 "HINFO IN 6026210510891441927.5093937837002421400. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.138242746s
	I0501 04:16:52.145160    4352 command_runner.go:130] > [INFO] 10.244.0.4:41822 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.208275106s
	I0501 04:16:52.145160    4352 command_runner.go:130] > [INFO] 10.244.0.4:42126 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.044846324s
	I0501 04:16:52.145254    4352 command_runner.go:130] > [INFO] 10.244.1.2:55497 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000133701s
	I0501 04:16:52.145254    4352 command_runner.go:130] > [INFO] 10.244.1.2:47095 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000068901s
	I0501 04:16:52.145254    4352 command_runner.go:130] > [INFO] 10.244.0.4:34122 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000644805s
	I0501 04:16:52.145254    4352 command_runner.go:130] > [INFO] 10.244.0.4:46878 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000252202s
	I0501 04:16:52.145254    4352 command_runner.go:130] > [INFO] 10.244.0.4:40098 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000136701s
	I0501 04:16:52.145254    4352 command_runner.go:130] > [INFO] 10.244.0.4:35873 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.03321874s
	I0501 04:16:52.145254    4352 command_runner.go:130] > [INFO] 10.244.1.2:36243 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.016690721s
	I0501 04:16:52.145254    4352 command_runner.go:130] > [INFO] 10.244.1.2:38582 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000648s
	I0501 04:16:52.145408    4352 command_runner.go:130] > [INFO] 10.244.1.2:43903 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000106801s
	I0501 04:16:52.145408    4352 command_runner.go:130] > [INFO] 10.244.1.2:34736 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000102201s
	I0501 04:16:52.145408    4352 command_runner.go:130] > [INFO] 10.244.0.4:54471 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000213002s
	I0501 04:16:52.145503    4352 command_runner.go:130] > [INFO] 10.244.0.4:34585 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000266702s
	I0501 04:16:52.145503    4352 command_runner.go:130] > [INFO] 10.244.1.2:55135 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142801s
	I0501 04:16:52.145503    4352 command_runner.go:130] > [INFO] 10.244.1.2:53626 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000732s
	I0501 04:16:52.145619    4352 command_runner.go:130] > [INFO] 10.244.0.4:57975 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000425703s
	I0501 04:16:52.145619    4352 command_runner.go:130] > [INFO] 10.244.0.4:51644 - 5 "PTR IN 1.208.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000121401s
	I0501 04:16:52.145619    4352 command_runner.go:130] > [INFO] 10.244.1.2:42930 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000246601s
	I0501 04:16:52.145619    4352 command_runner.go:130] > [INFO] 10.244.1.2:59495 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000199302s
	I0501 04:16:52.145720    4352 command_runner.go:130] > [INFO] 10.244.1.2:34672 - 5 "PTR IN 1.208.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000155401s
	I0501 04:16:52.145720    4352 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0501 04:16:52.145720    4352 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
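
In the coredns capture, the NXDOMAIN entries are queries that treat a short name such as kubernetes.default as an absolute name, while the NOERROR entries are the fully qualified kubernetes.default.svc.cluster.local; the pod resolver's search path is what turns one into the other. A minimal in-pod sketch of the two lookups (illustrative only; it relies on the cluster DNS configured in the pod's /etc/resolv.conf):

    package main

    import (
    	"fmt"
    	"net"
    )

    func main() {
    	// Run inside a cluster pod so the resolver points at CoreDNS.
    	for _, host := range []string{
    		"kubernetes.default",                   // resolved via the search path
    		"kubernetes.default.svc.cluster.local", // answers NOERROR directly, as logged above
    	} {
    		addrs, err := net.LookupHost(host)
    		fmt.Println(host, addrs, err)
    	}
    }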
	I0501 04:16:52.147441    4352 logs.go:123] Gathering logs for container status ...
	I0501 04:16:52.147441    4352 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 04:16:52.225239    4352 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0501 04:16:52.225301    4352 command_runner.go:130] > 1efd236274eb6       8c811b4aec35f                                                                                         4 seconds ago        Running             busybox                   1                   b85f507755ab5       busybox-fc5497c4f-cc6mk
	I0501 04:16:52.225301    4352 command_runner.go:130] > b8a9b405d76be       cbb01a7bd410d                                                                                         4 seconds ago        Running             coredns                   1                   2c1e1e1d13f30       coredns-7db6d8ff4d-8w9hq
	I0501 04:16:52.225301    4352 command_runner.go:130] > 8a0208aeafcfe       cbb01a7bd410d                                                                                         4 seconds ago        Running             coredns                   1                   ba9a40d190b00       coredns-7db6d8ff4d-x9zrw
	I0501 04:16:52.225301    4352 command_runner.go:130] > 239a5dfd3ae52       6e38f40d628db                                                                                         23 seconds ago       Running             storage-provisioner       2                   9055d30512df3       storage-provisioner
	I0501 04:16:52.225894    4352 command_runner.go:130] > b7cae3f6b88bc       4950bb10b3f87                                                                                         About a minute ago   Running             kindnet-cni               1                   f79e484da66a1       kindnet-vcxkr
	I0501 04:16:52.225894    4352 command_runner.go:130] > 01deddefba52a       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   9055d30512df3       storage-provisioner
	I0501 04:16:52.225894    4352 command_runner.go:130] > 3efcc92f817ee       a0bf559e280cf                                                                                         About a minute ago   Running             kube-proxy                1                   65bff4b6a8ae0       kube-proxy-bp9zx
	I0501 04:16:52.226000    4352 command_runner.go:130] > 34892fdb68983       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   6e076eed49263       etcd-multinode-289800
	I0501 04:16:52.226081    4352 command_runner.go:130] > 18cd30f3ad28f       c42f13656d0b2                                                                                         About a minute ago   Running             kube-apiserver            0                   51e331e75da77       kube-apiserver-multinode-289800
	I0501 04:16:52.226081    4352 command_runner.go:130] > 66a1b89e6733f       c7aad43836fa5                                                                                         About a minute ago   Running             kube-controller-manager   1                   3fd53aa8d8f5d       kube-controller-manager-multinode-289800
	I0501 04:16:52.226081    4352 command_runner.go:130] > eaf69fce5ee36       259c8277fcbbc                                                                                         About a minute ago   Running             kube-scheduler            1                   a8e27176eab83       kube-scheduler-multinode-289800
	I0501 04:16:52.226081    4352 command_runner.go:130] > 237d3dab2c4e1       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago       Exited              busybox                   0                   79bf9ebb58e36       busybox-fc5497c4f-cc6mk
	I0501 04:16:52.226081    4352 command_runner.go:130] > 15c4496e3a9f0       cbb01a7bd410d                                                                                         24 minutes ago       Exited              coredns                   0                   baf9e690eb533       coredns-7db6d8ff4d-x9zrw
	I0501 04:16:52.226081    4352 command_runner.go:130] > 3e8d5ff9a9e4a       cbb01a7bd410d                                                                                         24 minutes ago       Exited              coredns                   0                   9d509d032dc60       coredns-7db6d8ff4d-8w9hq
	I0501 04:16:52.226081    4352 command_runner.go:130] > 6d5f881ef3987       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              24 minutes ago       Exited              kindnet-cni               0                   4df6ba73bcf68       kindnet-vcxkr
	I0501 04:16:52.226081    4352 command_runner.go:130] > 502684407b0cf       a0bf559e280cf                                                                                         24 minutes ago       Exited              kube-proxy                0                   79bb6a06ed527       kube-proxy-bp9zx
	I0501 04:16:52.226081    4352 command_runner.go:130] > 4b62556f40bec       c7aad43836fa5                                                                                         24 minutes ago       Exited              kube-controller-manager   0                   f72a1c5b5cdd6       kube-controller-manager-multinode-289800
	I0501 04:16:52.226081    4352 command_runner.go:130] > 06f1f84bfde17       259c8277fcbbc                                                                                         24 minutes ago       Exited              kube-scheduler            0                   479b3ec741bef       kube-scheduler-multinode-289800
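
The container-status step above shells out with a fallback: crictl first, then plain docker ps -a when crictl is unavailable or fails. A sketch of the same prefer-crictl pattern in Go (a hypothetical illustration mirroring the command in the log, not minikube's actual implementation):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // listContainers prefers crictl and falls back to the docker CLI,
    // mirroring `sudo crictl ps -a || sudo docker ps -a` from the log.
    func listContainers() ([]byte, error) {
    	if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
    		return out, nil
    	}
    	return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
    }

    func main() {
    	out, err := listContainers()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Print(string(out))
    }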
	I0501 04:16:52.233427    4352 logs.go:123] Gathering logs for kubelet ...
	I0501 04:16:52.234031    4352 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 04:16:52.273162    4352 command_runner.go:130] > May 01 04:15:32 multinode-289800 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0501 04:16:52.273678    4352 command_runner.go:130] > May 01 04:15:32 multinode-289800 kubelet[1383]: I0501 04:15:32.875075    1383 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0501 04:16:52.273732    4352 command_runner.go:130] > May 01 04:15:32 multinode-289800 kubelet[1383]: I0501 04:15:32.875223    1383 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:16:52.273732    4352 command_runner.go:130] > May 01 04:15:32 multinode-289800 kubelet[1383]: I0501 04:15:32.876800    1383 server.go:927] "Client rotation is on, will bootstrap in background"
	I0501 04:16:52.273789    4352 command_runner.go:130] > May 01 04:15:32 multinode-289800 kubelet[1383]: E0501 04:15:32.877636    1383 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0501 04:16:52.273826    4352 command_runner.go:130] > May 01 04:15:32 multinode-289800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0501 04:16:52.273850    4352 command_runner.go:130] > May 01 04:15:32 multinode-289800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0501 04:16:52.273850    4352 command_runner.go:130] > May 01 04:15:33 multinode-289800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0501 04:16:52.273910    4352 command_runner.go:130] > May 01 04:15:33 multinode-289800 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0501 04:16:52.273910    4352 command_runner.go:130] > May 01 04:15:33 multinode-289800 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0501 04:16:52.273954    4352 command_runner.go:130] > May 01 04:15:33 multinode-289800 kubelet[1424]: I0501 04:15:33.593311    1424 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0501 04:16:52.273954    4352 command_runner.go:130] > May 01 04:15:33 multinode-289800 kubelet[1424]: I0501 04:15:33.595065    1424 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:16:52.274008    4352 command_runner.go:130] > May 01 04:15:33 multinode-289800 kubelet[1424]: I0501 04:15:33.597316    1424 server.go:927] "Client rotation is on, will bootstrap in background"
	I0501 04:16:52.274008    4352 command_runner.go:130] > May 01 04:15:33 multinode-289800 kubelet[1424]: E0501 04:15:33.597441    1424 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0501 04:16:52.274050    4352 command_runner.go:130] > May 01 04:15:33 multinode-289800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0501 04:16:52.274050    4352 command_runner.go:130] > May 01 04:15:33 multinode-289800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0501 04:16:52.274050    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
	I0501 04:16:52.274097    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0501 04:16:52.274138    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0501 04:16:52.274138    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 kubelet[1461]: I0501 04:15:34.327211    1461 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0501 04:16:52.274184    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 kubelet[1461]: I0501 04:15:34.327674    1461 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:16:52.274184    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 kubelet[1461]: I0501 04:15:34.328505    1461 server.go:927] "Client rotation is on, will bootstrap in background"
	I0501 04:16:52.274226    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 kubelet[1461]: E0501 04:15:34.328669    1461 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0501 04:16:52.274226    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0501 04:16:52.274281    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0501 04:16:52.274281    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0501 04:16:52.274322    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0501 04:16:52.274322    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.796836    1525 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0501 04:16:52.274376    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.797219    1525 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:16:52.274432    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.797640    1525 server.go:927] "Client rotation is on, will bootstrap in background"
	I0501 04:16:52.274432    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.799493    1525 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0501 04:16:52.274485    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.812278    1525 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0501 04:16:52.274543    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.846443    1525 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0501 04:16:52.274543    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.846668    1525 server.go:810] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0501 04:16:52.274543    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.847577    1525 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0501 04:16:52.274543    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.847671    1525 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-289800","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0501 04:16:52.274543    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.848600    1525 topology_manager.go:138] "Creating topology manager with none policy"
	I0501 04:16:52.274543    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.848674    1525 container_manager_linux.go:301] "Creating device plugin manager"
	I0501 04:16:52.274543    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.849347    1525 state_mem.go:36] "Initialized new in-memory state store"
	I0501 04:16:52.274543    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.851250    1525 kubelet.go:400] "Attempting to sync node with API server"
	I0501 04:16:52.274543    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.851388    1525 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0501 04:16:52.274543    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.851480    1525 kubelet.go:312] "Adding apiserver pod source"
	I0501 04:16:52.274543    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.852014    1525 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0501 04:16:52.274543    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.863109    1525 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="docker" version="26.0.2" apiVersion="v1"
	I0501 04:16:52.274543    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.868847    1525 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0501 04:16:52.274543    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: W0501 04:15:36.869729    1525 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0501 04:16:52.274543    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: W0501 04:15:36.870640    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-289800&limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:52.274543    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: E0501 04:15:36.871055    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-289800&limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:52.274543    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: W0501 04:15:36.869620    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:52.274543    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: E0501 04:15:36.872992    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:52.274543    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.872208    1525 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0501 04:16:52.274543    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.874268    1525 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0501 04:16:52.274543    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.872162    1525 server.go:1264] "Started kubelet"
	I0501 04:16:52.274543    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.876600    1525 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0501 04:16:52.274543    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.878390    1525 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
	I0501 04:16:52.274543    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.882899    1525 server.go:455] "Adding debug handlers to kubelet server"
	I0501 04:16:52.274543    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: E0501 04:15:36.888275    1525 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.28.209.199:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-289800.17cb4242948ce646  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-289800,UID:multinode-289800,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-289800,},FirstTimestamp:2024-05-01 04:15:36.872142406 +0000 UTC m=+0.158641226,LastTimestamp:2024-05-01 04:15:36.872142406 +0000 UTC m=+0.158641226,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-289800,}"
	I0501 04:16:52.275619    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.894478    1525 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0501 04:16:52.275619    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: E0501 04:15:36.899264    1525 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-289800?timeout=10s\": dial tcp 172.28.209.199:8443: connect: connection refused" interval="200ms"
	I0501 04:16:52.275619    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.900556    1525 factory.go:221] Registration of the systemd container factory successfully
	I0501 04:16:52.275619    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.900703    1525 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0501 04:16:52.275736    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.900931    1525 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0501 04:16:52.275736    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.909390    1525 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0501 04:16:52.275736    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: W0501 04:15:36.922744    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:52.275736    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: E0501 04:15:36.923300    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:52.275736    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.961054    1525 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0501 04:16:52.275736    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.961177    1525 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0501 04:16:52.275896    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.961311    1525 state_mem.go:36] "Initialized new in-memory state store"
	I0501 04:16:52.275896    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.962539    1525 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0501 04:16:52.275896    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.962613    1525 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0501 04:16:52.275896    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.962649    1525 policy_none.go:49] "None policy: Start"
	I0501 04:16:52.275984    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.965264    1525 reconciler.go:26] "Reconciler: start to sync state"
	I0501 04:16:52.275984    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.981258    1525 memory_manager.go:170] "Starting memorymanager" policy="None"
	I0501 04:16:52.275984    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.991286    1525 state_mem.go:35] "Initializing new in-memory state store"
	I0501 04:16:52.275984    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.994410    1525 state_mem.go:75] "Updated machine memory state"
	I0501 04:16:52.275984    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.001037    1525 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0501 04:16:52.276063    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.005977    1525 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0501 04:16:52.276063    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.012301    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-289800"
	I0501 04:16:52.276063    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.018582    1525 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0501 04:16:52.276148    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.020477    1525 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0501 04:16:52.276148    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.020620    1525 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.28.209.199:8443: connect: connection refused" node="multinode-289800"
	I0501 04:16:52.276148    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.021548    1525 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-289800\" not found"
	I0501 04:16:52.276148    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.022495    1525 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0501 04:16:52.276231    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.022690    1525 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0501 04:16:52.276231    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.022715    1525 kubelet.go:2337] "Starting kubelet main sync loop"
	I0501 04:16:52.276231    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.022919    1525 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
	I0501 04:16:52.276313    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: W0501 04:15:37.028696    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:52.276395    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.028755    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:52.276395    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.045316    1525 iptables.go:577] "Could not set up iptables canary" err=<
	I0501 04:16:52.276460    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0501 04:16:52.276497    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0501 04:16:52.276497    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0501 04:16:52.276497    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0501 04:16:52.276567    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.102048    1525 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-289800?timeout=10s\": dial tcp 172.28.209.199:8443: connect: connection refused" interval="400ms"
	I0501 04:16:52.276567    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.124062    1525 topology_manager.go:215] "Topology Admit Handler" podUID="44d7830a7c97b8c7e460c0508d02be4e" podNamespace="kube-system" podName="kube-scheduler-multinode-289800"
	I0501 04:16:52.276567    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.125237    1525 topology_manager.go:215] "Topology Admit Handler" podUID="8b70cd8d31103a1cfca45e9856766786" podNamespace="kube-system" podName="kube-apiserver-multinode-289800"
	I0501 04:16:52.276651    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.126693    1525 topology_manager.go:215] "Topology Admit Handler" podUID="a17001fd2508d58fea9b1ae465b65254" podNamespace="kube-system" podName="kube-controller-manager-multinode-289800"
	I0501 04:16:52.276651    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.129279    1525 topology_manager.go:215] "Topology Admit Handler" podUID="b12e9024402f49cfac7440d6a2eaf42d" podNamespace="kube-system" podName="etcd-multinode-289800"
	I0501 04:16:52.276651    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.132159    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="479b3ec741befe4b1eddeb02949bcd198e18fa7dc4c196283e811e273e4edcbd"
	I0501 04:16:52.276769    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.132205    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d509d032dc607c6f771d62e39b125d9ec4ef121fdbac0798c929fe3f1662c88"
	I0501 04:16:52.276769    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.132217    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4df6ba73bcf683d21156e67827524b826f94059250b12cf08abd23da8345923a"
	I0501 04:16:52.276804    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.132236    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a338ea43bd9b03a0a56c5b614e36fd54cdd707fb4c2f5819a814e4ffd9bdcb65"
	I0501 04:16:52.276804    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.139102    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f72a1c5b5cdd65332e27f08445a684fc2d2f586ab1b8a2fb2c5c0dfc02b71165"
	I0501 04:16:52.276876    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.158602    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="976a9ff433ccbe7ec8d65d4f2797a7885430504d883ab65524674ad5ab989737"
	I0501 04:16:52.276876    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.174190    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79bb6a06ed527b42fe74673579e4a788915c66cd3717c52a344c73e0b7d12b34"
	I0501 04:16:52.276876    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.191042    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79bf9ebb58e36ddfba4654e8de212598f75bb256849f4fa384c80d54954f68f5"
	I0501 04:16:52.276976    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.208222    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="baf9e690eb533d1d1d65dee3905f907946c145ab490fd4e62c3d724a0ba12193"
	I0501 04:16:52.277038    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214646    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a17001fd2508d58fea9b1ae465b65254-ca-certs\") pod \"kube-controller-manager-multinode-289800\" (UID: \"a17001fd2508d58fea9b1ae465b65254\") " pod="kube-system/kube-controller-manager-multinode-289800"
	I0501 04:16:52.277038    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214710    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a17001fd2508d58fea9b1ae465b65254-k8s-certs\") pod \"kube-controller-manager-multinode-289800\" (UID: \"a17001fd2508d58fea9b1ae465b65254\") " pod="kube-system/kube-controller-manager-multinode-289800"
	I0501 04:16:52.277038    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214752    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a17001fd2508d58fea9b1ae465b65254-kubeconfig\") pod \"kube-controller-manager-multinode-289800\" (UID: \"a17001fd2508d58fea9b1ae465b65254\") " pod="kube-system/kube-controller-manager-multinode-289800"
	I0501 04:16:52.277038    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214812    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8b70cd8d31103a1cfca45e9856766786-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-289800\" (UID: \"8b70cd8d31103a1cfca45e9856766786\") " pod="kube-system/kube-apiserver-multinode-289800"
	I0501 04:16:52.277038    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214855    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/b12e9024402f49cfac7440d6a2eaf42d-etcd-data\") pod \"etcd-multinode-289800\" (UID: \"b12e9024402f49cfac7440d6a2eaf42d\") " pod="kube-system/etcd-multinode-289800"
	I0501 04:16:52.277038    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214875    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/44d7830a7c97b8c7e460c0508d02be4e-kubeconfig\") pod \"kube-scheduler-multinode-289800\" (UID: \"44d7830a7c97b8c7e460c0508d02be4e\") " pod="kube-system/kube-scheduler-multinode-289800"
	I0501 04:16:52.277038    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214899    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8b70cd8d31103a1cfca45e9856766786-ca-certs\") pod \"kube-apiserver-multinode-289800\" (UID: \"8b70cd8d31103a1cfca45e9856766786\") " pod="kube-system/kube-apiserver-multinode-289800"
	I0501 04:16:52.277038    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214925    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8b70cd8d31103a1cfca45e9856766786-k8s-certs\") pod \"kube-apiserver-multinode-289800\" (UID: \"8b70cd8d31103a1cfca45e9856766786\") " pod="kube-system/kube-apiserver-multinode-289800"
	I0501 04:16:52.277038    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214950    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a17001fd2508d58fea9b1ae465b65254-flexvolume-dir\") pod \"kube-controller-manager-multinode-289800\" (UID: \"a17001fd2508d58fea9b1ae465b65254\") " pod="kube-system/kube-controller-manager-multinode-289800"
	I0501 04:16:52.277038    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214973    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a17001fd2508d58fea9b1ae465b65254-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-289800\" (UID: \"a17001fd2508d58fea9b1ae465b65254\") " pod="kube-system/kube-controller-manager-multinode-289800"
	I0501 04:16:52.277038    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214994    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/b12e9024402f49cfac7440d6a2eaf42d-etcd-certs\") pod \"etcd-multinode-289800\" (UID: \"b12e9024402f49cfac7440d6a2eaf42d\") " pod="kube-system/etcd-multinode-289800"
	I0501 04:16:52.277038    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.222614    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-289800"
	I0501 04:16:52.277038    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.223837    1525 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.28.209.199:8443: connect: connection refused" node="multinode-289800"
	I0501 04:16:52.277038    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.227891    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9971ef577f2f8634ce17f0dd1b9640fcf2695833e8dc85607abd2a82571746b8"
	I0501 04:16:52.277038    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.504248    1525 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-289800?timeout=10s\": dial tcp 172.28.209.199:8443: connect: connection refused" interval="800ms"
	I0501 04:16:52.277038    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.625269    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-289800"
	I0501 04:16:52.277621    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.625998    1525 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.28.209.199:8443: connect: connection refused" node="multinode-289800"
	I0501 04:16:52.277621    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: W0501 04:15:37.852634    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:52.277621    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.852740    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:52.277621    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: W0501 04:15:38.063749    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:52.277746    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: E0501 04:15:38.063859    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:52.277820    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: I0501 04:15:38.260487    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e076eed49263cec5b0b06bbaa425cab2bf4a4b0a05e6dfa37993b20dff5ed93"
	I0501 04:16:52.277862    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: E0501 04:15:38.306204    1525 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-289800?timeout=10s\": dial tcp 172.28.209.199:8443: connect: connection refused" interval="1.6s"
	I0501 04:16:52.277862    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: W0501 04:15:38.357883    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-289800&limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:52.277936    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: E0501 04:15:38.357983    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-289800&limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:52.277976    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: W0501 04:15:38.424248    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:52.277976    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: E0501 04:15:38.424377    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:52.278049    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: I0501 04:15:38.428960    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-289800"
	I0501 04:16:52.278049    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: E0501 04:15:38.431040    1525 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.28.209.199:8443: connect: connection refused" node="multinode-289800"
	I0501 04:16:52.278137    4352 command_runner.go:130] > May 01 04:15:40 multinode-289800 kubelet[1525]: I0501 04:15:40.032371    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-289800"
	I0501 04:16:52.278137    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.639150    1525 kubelet_node_status.go:112] "Node was previously registered" node="multinode-289800"
	I0501 04:16:52.278137    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.640030    1525 kubelet_node_status.go:76] "Successfully registered node" node="multinode-289800"
	I0501 04:16:52.278217    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.642970    1525 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0501 04:16:52.278217    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.644297    1525 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0501 04:16:52.278298    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.646032    1525 setters.go:580] "Node became not ready" node="multinode-289800" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-05-01T04:15:42Z","lastTransitionTime":"2024-05-01T04:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0501 04:16:52.278298    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.869832    1525 apiserver.go:52] "Watching apiserver"
	I0501 04:16:52.278298    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.875356    1525 topology_manager.go:215] "Topology Admit Handler" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-8w9hq"
	I0501 04:16:52.278380    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.875613    1525 topology_manager.go:215] "Topology Admit Handler" podUID="aba82e50-b8f8-40b4-b08a-6d045314d6b6" podNamespace="kube-system" podName="kube-proxy-bp9zx"
	I0501 04:16:52.278380    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.875753    1525 topology_manager.go:215] "Topology Admit Handler" podUID="0b91b14d-bed3-4889-b193-db53daccd395" podNamespace="kube-system" podName="coredns-7db6d8ff4d-x9zrw"
	I0501 04:16:52.278488    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.875936    1525 topology_manager.go:215] "Topology Admit Handler" podUID="72ef61d4-4437-40da-86e7-4d7eb386b6de" podNamespace="kube-system" podName="kindnet-vcxkr"
	I0501 04:16:52.278488    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.876061    1525 topology_manager.go:215] "Topology Admit Handler" podUID="b8d2a827-d9a6-419a-a076-c7695a16a2b5" podNamespace="kube-system" podName="storage-provisioner"
	I0501 04:16:52.278575    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.876192    1525 topology_manager.go:215] "Topology Admit Handler" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f" podNamespace="default" podName="busybox-fc5497c4f-cc6mk"
	I0501 04:16:52.278575    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.876527    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:52.278656    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.877384    1525 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-289800" podUID="96a8cf0b-45bc-4636-9264-a0da579b5fa8"
	I0501 04:16:52.278656    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.878678    1525 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-289800" podUID="a1b99f2b-8aed-4037-956a-13bde4551a72"
	I0501 04:16:52.278656    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.879595    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:52.278736    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.884364    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:52.278736    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.910944    1525 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0501 04:16:52.278814    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.938877    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/72ef61d4-4437-40da-86e7-4d7eb386b6de-xtables-lock\") pod \"kindnet-vcxkr\" (UID: \"72ef61d4-4437-40da-86e7-4d7eb386b6de\") " pod="kube-system/kindnet-vcxkr"
	I0501 04:16:52.278814    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.939029    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b8d2a827-d9a6-419a-a076-c7695a16a2b5-tmp\") pod \"storage-provisioner\" (UID: \"b8d2a827-d9a6-419a-a076-c7695a16a2b5\") " pod="kube-system/storage-provisioner"
	I0501 04:16:52.278892    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.939149    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aba82e50-b8f8-40b4-b08a-6d045314d6b6-xtables-lock\") pod \"kube-proxy-bp9zx\" (UID: \"aba82e50-b8f8-40b4-b08a-6d045314d6b6\") " pod="kube-system/kube-proxy-bp9zx"
	I0501 04:16:52.278892    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.939242    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/72ef61d4-4437-40da-86e7-4d7eb386b6de-cni-cfg\") pod \"kindnet-vcxkr\" (UID: \"72ef61d4-4437-40da-86e7-4d7eb386b6de\") " pod="kube-system/kindnet-vcxkr"
	I0501 04:16:52.278972    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.939318    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/72ef61d4-4437-40da-86e7-4d7eb386b6de-lib-modules\") pod \"kindnet-vcxkr\" (UID: \"72ef61d4-4437-40da-86e7-4d7eb386b6de\") " pod="kube-system/kindnet-vcxkr"
	I0501 04:16:52.278972    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.939427    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aba82e50-b8f8-40b4-b08a-6d045314d6b6-lib-modules\") pod \"kube-proxy-bp9zx\" (UID: \"aba82e50-b8f8-40b4-b08a-6d045314d6b6\") " pod="kube-system/kube-proxy-bp9zx"
	I0501 04:16:52.279130    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.940207    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:16:52.279208    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.940401    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume podName:e3a349e9-97d8-4bba-8eac-deff1948600a nodeName:}" failed. No retries permitted until 2024-05-01 04:15:43.440364296 +0000 UTC m=+6.726863016 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume") pod "coredns-7db6d8ff4d-8w9hq" (UID: "e3a349e9-97d8-4bba-8eac-deff1948600a") : object "kube-system"/"coredns" not registered
	I0501 04:16:52.279208    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.940680    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:16:52.279289    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.940822    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume podName:0b91b14d-bed3-4889-b193-db53daccd395 nodeName:}" failed. No retries permitted until 2024-05-01 04:15:43.440808324 +0000 UTC m=+6.727307144 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume") pod "coredns-7db6d8ff4d-x9zrw" (UID: "0b91b14d-bed3-4889-b193-db53daccd395") : object "kube-system"/"coredns" not registered
	I0501 04:16:52.279289    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.948736    1525 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-289800"
	I0501 04:16:52.279367    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.958916    1525 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-289800"
	I0501 04:16:52.279367    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.975690    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:52.279489    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.975737    1525 projected.go:200] Error preparing data for projected volume kube-api-access-4r64v for pod default/busybox-fc5497c4f-cc6mk: object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:52.279489    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.975832    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v podName:7f61e6ee-cf9a-4903-ba51-2a3b6804717f nodeName:}" failed. No retries permitted until 2024-05-01 04:15:43.475811436 +0000 UTC m=+6.762310156 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-4r64v" (UniqueName: "kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v") pod "busybox-fc5497c4f-cc6mk" (UID: "7f61e6ee-cf9a-4903-ba51-2a3b6804717f") : object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:52.279567    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: I0501 04:15:43.052812    1525 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c17e9f88f256f5527a6565eb2da75f63" path="/var/lib/kubelet/pods/c17e9f88f256f5527a6565eb2da75f63/volumes"
	I0501 04:16:52.279646    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: I0501 04:15:43.054400    1525 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc7b6f2a7c826774b66af910f598e965" path="/var/lib/kubelet/pods/fc7b6f2a7c826774b66af910f598e965/volumes"
	I0501 04:16:52.279646    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: I0501 04:15:43.170146    1525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-289800" podStartSLOduration=1.170112215 podStartE2EDuration="1.170112215s" podCreationTimestamp="2024-05-01 04:15:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-01 04:15:43.140058816 +0000 UTC m=+6.426557536" watchObservedRunningTime="2024-05-01 04:15:43.170112215 +0000 UTC m=+6.456610935"
	I0501 04:16:52.279728    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: I0501 04:15:43.170304    1525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-289800" podStartSLOduration=1.170298327 podStartE2EDuration="1.170298327s" podCreationTimestamp="2024-05-01 04:15:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-01 04:15:43.16893474 +0000 UTC m=+6.455433460" watchObservedRunningTime="2024-05-01 04:15:43.170298327 +0000 UTC m=+6.456797147"
	I0501 04:16:52.279728    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: E0501 04:15:43.444132    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:16:52.279886    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: E0501 04:15:43.444229    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume podName:e3a349e9-97d8-4bba-8eac-deff1948600a nodeName:}" failed. No retries permitted until 2024-05-01 04:15:44.444209637 +0000 UTC m=+7.730708457 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume") pod "coredns-7db6d8ff4d-8w9hq" (UID: "e3a349e9-97d8-4bba-8eac-deff1948600a") : object "kube-system"/"coredns" not registered
	I0501 04:16:52.279945    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: E0501 04:15:43.444591    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:16:52.279945    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: E0501 04:15:43.444633    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume podName:0b91b14d-bed3-4889-b193-db53daccd395 nodeName:}" failed. No retries permitted until 2024-05-01 04:15:44.444622763 +0000 UTC m=+7.731121483 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume") pod "coredns-7db6d8ff4d-x9zrw" (UID: "0b91b14d-bed3-4889-b193-db53daccd395") : object "kube-system"/"coredns" not registered
	I0501 04:16:52.279945    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: E0501 04:15:43.544921    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:52.279945    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: E0501 04:15:43.545047    1525 projected.go:200] Error preparing data for projected volume kube-api-access-4r64v for pod default/busybox-fc5497c4f-cc6mk: object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:52.279945    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: E0501 04:15:43.545141    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v podName:7f61e6ee-cf9a-4903-ba51-2a3b6804717f nodeName:}" failed. No retries permitted until 2024-05-01 04:15:44.545110913 +0000 UTC m=+7.831609633 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-4r64v" (UniqueName: "kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v") pod "busybox-fc5497c4f-cc6mk" (UID: "7f61e6ee-cf9a-4903-ba51-2a3b6804717f") : object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:52.279945    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: I0501 04:15:44.039213    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9055d30512df38a5bce19ed5afcfdc450a7bd87a1eb169342c8bc7a42e81666f"
	I0501 04:16:52.279945    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: I0501 04:15:44.378804    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65bff4b6a8ae020fee0da9e1a818c4bac4d9a43a831eb7b5550b254c1f181ec7"
	I0501 04:16:52.279945    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: E0501 04:15:44.401946    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:52.279945    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: I0501 04:15:44.402229    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f79e484da66a15667f79326d8bae0a570ba551fd2e02926fd663a292f6b15752"
	I0501 04:16:52.279945    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: I0501 04:15:44.402476    1525 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-289800" podUID="96a8cf0b-45bc-4636-9264-a0da579b5fa8"
	I0501 04:16:52.279945    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: I0501 04:15:44.403391    1525 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-289800" podUID="a1b99f2b-8aed-4037-956a-13bde4551a72"
	I0501 04:16:52.281601    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: E0501 04:15:44.454688    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:16:52.281601    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: E0501 04:15:44.454983    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume podName:0b91b14d-bed3-4889-b193-db53daccd395 nodeName:}" failed. No retries permitted until 2024-05-01 04:15:46.454902809 +0000 UTC m=+9.741401629 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume") pod "coredns-7db6d8ff4d-x9zrw" (UID: "0b91b14d-bed3-4889-b193-db53daccd395") : object "kube-system"/"coredns" not registered
	I0501 04:16:52.281601    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: E0501 04:15:44.455515    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:16:52.281601    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: E0501 04:15:44.455560    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume podName:e3a349e9-97d8-4bba-8eac-deff1948600a nodeName:}" failed. No retries permitted until 2024-05-01 04:15:46.45554895 +0000 UTC m=+9.742047670 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume") pod "coredns-7db6d8ff4d-8w9hq" (UID: "e3a349e9-97d8-4bba-8eac-deff1948600a") : object "kube-system"/"coredns" not registered
	I0501 04:16:52.283204    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: E0501 04:15:44.555732    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:52.283414    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: E0501 04:15:44.555836    1525 projected.go:200] Error preparing data for projected volume kube-api-access-4r64v for pod default/busybox-fc5497c4f-cc6mk: object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:52.283414    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: E0501 04:15:44.555920    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v podName:7f61e6ee-cf9a-4903-ba51-2a3b6804717f nodeName:}" failed. No retries permitted until 2024-05-01 04:15:46.55587479 +0000 UTC m=+9.842373510 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-4r64v" (UniqueName: "kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v") pod "busybox-fc5497c4f-cc6mk" (UID: "7f61e6ee-cf9a-4903-ba51-2a3b6804717f") : object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:52.283414    4352 command_runner.go:130] > May 01 04:15:45 multinode-289800 kubelet[1525]: E0501 04:15:45.028227    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:52.283414    4352 command_runner.go:130] > May 01 04:15:45 multinode-289800 kubelet[1525]: E0501 04:15:45.028491    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:52.283414    4352 command_runner.go:130] > May 01 04:15:46 multinode-289800 kubelet[1525]: E0501 04:15:46.023829    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:52.283414    4352 command_runner.go:130] > May 01 04:15:46 multinode-289800 kubelet[1525]: E0501 04:15:46.486637    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:16:52.283414    4352 command_runner.go:130] > May 01 04:15:46 multinode-289800 kubelet[1525]: E0501 04:15:46.486963    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume podName:e3a349e9-97d8-4bba-8eac-deff1948600a nodeName:}" failed. No retries permitted until 2024-05-01 04:15:50.486942526 +0000 UTC m=+13.773441346 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume") pod "coredns-7db6d8ff4d-8w9hq" (UID: "e3a349e9-97d8-4bba-8eac-deff1948600a") : object "kube-system"/"coredns" not registered
	I0501 04:16:52.283414    4352 command_runner.go:130] > May 01 04:15:46 multinode-289800 kubelet[1525]: E0501 04:15:46.488686    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:16:52.283414    4352 command_runner.go:130] > May 01 04:15:46 multinode-289800 kubelet[1525]: E0501 04:15:46.489077    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume podName:0b91b14d-bed3-4889-b193-db53daccd395 nodeName:}" failed. No retries permitted until 2024-05-01 04:15:50.488847647 +0000 UTC m=+13.775346467 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume") pod "coredns-7db6d8ff4d-x9zrw" (UID: "0b91b14d-bed3-4889-b193-db53daccd395") : object "kube-system"/"coredns" not registered
	I0501 04:16:52.283414    4352 command_runner.go:130] > May 01 04:15:46 multinode-289800 kubelet[1525]: E0501 04:15:46.587833    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:52.283414    4352 command_runner.go:130] > May 01 04:15:46 multinode-289800 kubelet[1525]: E0501 04:15:46.587977    1525 projected.go:200] Error preparing data for projected volume kube-api-access-4r64v for pod default/busybox-fc5497c4f-cc6mk: object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:52.283414    4352 command_runner.go:130] > May 01 04:15:46 multinode-289800 kubelet[1525]: E0501 04:15:46.588185    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v podName:7f61e6ee-cf9a-4903-ba51-2a3b6804717f nodeName:}" failed. No retries permitted until 2024-05-01 04:15:50.588160623 +0000 UTC m=+13.874659443 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-4r64v" (UniqueName: "kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v") pod "busybox-fc5497c4f-cc6mk" (UID: "7f61e6ee-cf9a-4903-ba51-2a3b6804717f") : object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:52.284010    4352 command_runner.go:130] > May 01 04:15:47 multinode-289800 kubelet[1525]: E0501 04:15:47.027084    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:52.284104    4352 command_runner.go:130] > May 01 04:15:47 multinode-289800 kubelet[1525]: E0501 04:15:47.028397    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:52.284104    4352 command_runner.go:130] > May 01 04:15:48 multinode-289800 kubelet[1525]: E0501 04:15:48.022969    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:52.284104    4352 command_runner.go:130] > May 01 04:15:49 multinode-289800 kubelet[1525]: E0501 04:15:49.024347    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:52.284104    4352 command_runner.go:130] > May 01 04:15:49 multinode-289800 kubelet[1525]: E0501 04:15:49.025248    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:52.284104    4352 command_runner.go:130] > May 01 04:15:50 multinode-289800 kubelet[1525]: E0501 04:15:50.024175    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:52.284104    4352 command_runner.go:130] > May 01 04:15:50 multinode-289800 kubelet[1525]: E0501 04:15:50.523387    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:16:52.284104    4352 command_runner.go:130] > May 01 04:15:50 multinode-289800 kubelet[1525]: E0501 04:15:50.523508    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume podName:0b91b14d-bed3-4889-b193-db53daccd395 nodeName:}" failed. No retries permitted until 2024-05-01 04:15:58.523488538 +0000 UTC m=+21.809987358 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume") pod "coredns-7db6d8ff4d-x9zrw" (UID: "0b91b14d-bed3-4889-b193-db53daccd395") : object "kube-system"/"coredns" not registered
	I0501 04:16:52.284104    4352 command_runner.go:130] > May 01 04:15:50 multinode-289800 kubelet[1525]: E0501 04:15:50.524104    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:16:52.284104    4352 command_runner.go:130] > May 01 04:15:50 multinode-289800 kubelet[1525]: E0501 04:15:50.524150    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume podName:e3a349e9-97d8-4bba-8eac-deff1948600a nodeName:}" failed. No retries permitted until 2024-05-01 04:15:58.524137716 +0000 UTC m=+21.810636436 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume") pod "coredns-7db6d8ff4d-8w9hq" (UID: "e3a349e9-97d8-4bba-8eac-deff1948600a") : object "kube-system"/"coredns" not registered
	I0501 04:16:52.284785    4352 command_runner.go:130] > May 01 04:15:50 multinode-289800 kubelet[1525]: E0501 04:15:50.624897    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:52.284913    4352 command_runner.go:130] > May 01 04:15:50 multinode-289800 kubelet[1525]: E0501 04:15:50.625357    1525 projected.go:200] Error preparing data for projected volume kube-api-access-4r64v for pod default/busybox-fc5497c4f-cc6mk: object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:52.284955    4352 command_runner.go:130] > May 01 04:15:50 multinode-289800 kubelet[1525]: E0501 04:15:50.625742    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v podName:7f61e6ee-cf9a-4903-ba51-2a3b6804717f nodeName:}" failed. No retries permitted until 2024-05-01 04:15:58.625719971 +0000 UTC m=+21.912218691 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-4r64v" (UniqueName: "kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v") pod "busybox-fc5497c4f-cc6mk" (UID: "7f61e6ee-cf9a-4903-ba51-2a3b6804717f") : object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:52.284955    4352 command_runner.go:130] > May 01 04:15:51 multinode-289800 kubelet[1525]: E0501 04:15:51.024464    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:52.284955    4352 command_runner.go:130] > May 01 04:15:51 multinode-289800 kubelet[1525]: E0501 04:15:51.024959    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:52.284955    4352 command_runner.go:130] > May 01 04:15:52 multinode-289800 kubelet[1525]: E0501 04:15:52.024016    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:52.284955    4352 command_runner.go:130] > May 01 04:15:53 multinode-289800 kubelet[1525]: E0501 04:15:53.023669    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:52.284955    4352 command_runner.go:130] > May 01 04:15:53 multinode-289800 kubelet[1525]: E0501 04:15:53.024381    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:52.284955    4352 command_runner.go:130] > May 01 04:15:54 multinode-289800 kubelet[1525]: E0501 04:15:54.023529    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:52.284955    4352 command_runner.go:130] > May 01 04:15:55 multinode-289800 kubelet[1525]: E0501 04:15:55.023399    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:52.284955    4352 command_runner.go:130] > May 01 04:15:55 multinode-289800 kubelet[1525]: E0501 04:15:55.024039    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:52.284955    4352 command_runner.go:130] > May 01 04:15:56 multinode-289800 kubelet[1525]: E0501 04:15:56.023961    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:52.284955    4352 command_runner.go:130] > May 01 04:15:57 multinode-289800 kubelet[1525]: E0501 04:15:57.024583    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:52.285533    4352 command_runner.go:130] > May 01 04:15:57 multinode-289800 kubelet[1525]: E0501 04:15:57.025562    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:52.285533    4352 command_runner.go:130] > May 01 04:15:58 multinode-289800 kubelet[1525]: E0501 04:15:58.024494    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:52.285687    4352 command_runner.go:130] > May 01 04:15:58 multinode-289800 kubelet[1525]: E0501 04:15:58.606520    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:16:52.285687    4352 command_runner.go:130] > May 01 04:15:58 multinode-289800 kubelet[1525]: E0501 04:15:58.606584    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume podName:e3a349e9-97d8-4bba-8eac-deff1948600a nodeName:}" failed. No retries permitted until 2024-05-01 04:16:14.606569125 +0000 UTC m=+37.893067945 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume") pod "coredns-7db6d8ff4d-8w9hq" (UID: "e3a349e9-97d8-4bba-8eac-deff1948600a") : object "kube-system"/"coredns" not registered
	I0501 04:16:52.285687    4352 command_runner.go:130] > May 01 04:15:58 multinode-289800 kubelet[1525]: E0501 04:15:58.607052    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:16:52.285687    4352 command_runner.go:130] > May 01 04:15:58 multinode-289800 kubelet[1525]: E0501 04:15:58.607095    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume podName:0b91b14d-bed3-4889-b193-db53daccd395 nodeName:}" failed. No retries permitted until 2024-05-01 04:16:14.607084827 +0000 UTC m=+37.893583547 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume") pod "coredns-7db6d8ff4d-x9zrw" (UID: "0b91b14d-bed3-4889-b193-db53daccd395") : object "kube-system"/"coredns" not registered
	I0501 04:16:52.285687    4352 command_runner.go:130] > May 01 04:15:58 multinode-289800 kubelet[1525]: E0501 04:15:58.707959    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:52.285687    4352 command_runner.go:130] > May 01 04:15:58 multinode-289800 kubelet[1525]: E0501 04:15:58.708171    1525 projected.go:200] Error preparing data for projected volume kube-api-access-4r64v for pod default/busybox-fc5497c4f-cc6mk: object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:52.286243    4352 command_runner.go:130] > May 01 04:15:58 multinode-289800 kubelet[1525]: E0501 04:15:58.708240    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v podName:7f61e6ee-cf9a-4903-ba51-2a3b6804717f nodeName:}" failed. No retries permitted until 2024-05-01 04:16:14.708221599 +0000 UTC m=+37.994720419 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-4r64v" (UniqueName: "kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v") pod "busybox-fc5497c4f-cc6mk" (UID: "7f61e6ee-cf9a-4903-ba51-2a3b6804717f") : object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:52.286348    4352 command_runner.go:130] > May 01 04:15:59 multinode-289800 kubelet[1525]: E0501 04:15:59.024158    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:52.286348    4352 command_runner.go:130] > May 01 04:15:59 multinode-289800 kubelet[1525]: E0501 04:15:59.025055    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:52.286348    4352 command_runner.go:130] > May 01 04:16:00 multinode-289800 kubelet[1525]: E0501 04:16:00.023216    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:52.286348    4352 command_runner.go:130] > May 01 04:16:01 multinode-289800 kubelet[1525]: E0501 04:16:01.024905    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:52.286348    4352 command_runner.go:130] > May 01 04:16:01 multinode-289800 kubelet[1525]: E0501 04:16:01.025585    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:52.286348    4352 command_runner.go:130] > May 01 04:16:02 multinode-289800 kubelet[1525]: E0501 04:16:02.024143    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:52.286348    4352 command_runner.go:130] > May 01 04:16:03 multinode-289800 kubelet[1525]: E0501 04:16:03.023409    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:52.286348    4352 command_runner.go:130] > May 01 04:16:03 multinode-289800 kubelet[1525]: E0501 04:16:03.024062    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:52.286348    4352 command_runner.go:130] > May 01 04:16:04 multinode-289800 kubelet[1525]: E0501 04:16:04.023182    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:52.286348    4352 command_runner.go:130] > May 01 04:16:05 multinode-289800 kubelet[1525]: E0501 04:16:05.028055    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:52.286348    4352 command_runner.go:130] > May 01 04:16:05 multinode-289800 kubelet[1525]: E0501 04:16:05.029254    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:52.286348    4352 command_runner.go:130] > May 01 04:16:06 multinode-289800 kubelet[1525]: E0501 04:16:06.024522    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:52.286937    4352 command_runner.go:130] > May 01 04:16:07 multinode-289800 kubelet[1525]: E0501 04:16:07.024384    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:52.286937    4352 command_runner.go:130] > May 01 04:16:07 multinode-289800 kubelet[1525]: E0501 04:16:07.025431    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:52.286937    4352 command_runner.go:130] > May 01 04:16:08 multinode-289800 kubelet[1525]: E0501 04:16:08.024168    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:52.286937    4352 command_runner.go:130] > May 01 04:16:09 multinode-289800 kubelet[1525]: E0501 04:16:09.024117    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:52.286937    4352 command_runner.go:130] > May 01 04:16:09 multinode-289800 kubelet[1525]: E0501 04:16:09.025560    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:52.286937    4352 command_runner.go:130] > May 01 04:16:10 multinode-289800 kubelet[1525]: E0501 04:16:10.023881    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:52.286937    4352 command_runner.go:130] > May 01 04:16:11 multinode-289800 kubelet[1525]: E0501 04:16:11.023619    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:52.286937    4352 command_runner.go:130] > May 01 04:16:11 multinode-289800 kubelet[1525]: E0501 04:16:11.024277    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:52.286937    4352 command_runner.go:130] > May 01 04:16:12 multinode-289800 kubelet[1525]: E0501 04:16:12.024236    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:52.287472    4352 command_runner.go:130] > May 01 04:16:13 multinode-289800 kubelet[1525]: E0501 04:16:13.023153    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:52.287625    4352 command_runner.go:130] > May 01 04:16:13 multinode-289800 kubelet[1525]: E0501 04:16:13.023926    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:52.287625    4352 command_runner.go:130] > May 01 04:16:14 multinode-289800 kubelet[1525]: E0501 04:16:14.023335    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:52.287625    4352 command_runner.go:130] > May 01 04:16:14 multinode-289800 kubelet[1525]: E0501 04:16:14.657138    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:16:52.287625    4352 command_runner.go:130] > May 01 04:16:14 multinode-289800 kubelet[1525]: E0501 04:16:14.657461    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume podName:e3a349e9-97d8-4bba-8eac-deff1948600a nodeName:}" failed. No retries permitted until 2024-05-01 04:16:46.657440103 +0000 UTC m=+69.943938823 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume") pod "coredns-7db6d8ff4d-8w9hq" (UID: "e3a349e9-97d8-4bba-8eac-deff1948600a") : object "kube-system"/"coredns" not registered
	I0501 04:16:52.287625    4352 command_runner.go:130] > May 01 04:16:14 multinode-289800 kubelet[1525]: E0501 04:16:14.657218    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:16:52.287625    4352 command_runner.go:130] > May 01 04:16:14 multinode-289800 kubelet[1525]: E0501 04:16:14.657858    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume podName:0b91b14d-bed3-4889-b193-db53daccd395 nodeName:}" failed. No retries permitted until 2024-05-01 04:16:46.65783162 +0000 UTC m=+69.944330440 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume") pod "coredns-7db6d8ff4d-x9zrw" (UID: "0b91b14d-bed3-4889-b193-db53daccd395") : object "kube-system"/"coredns" not registered
	I0501 04:16:52.287625    4352 command_runner.go:130] > May 01 04:16:14 multinode-289800 kubelet[1525]: E0501 04:16:14.758303    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:52.287625    4352 command_runner.go:130] > May 01 04:16:14 multinode-289800 kubelet[1525]: E0501 04:16:14.758421    1525 projected.go:200] Error preparing data for projected volume kube-api-access-4r64v for pod default/busybox-fc5497c4f-cc6mk: object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:52.287625    4352 command_runner.go:130] > May 01 04:16:14 multinode-289800 kubelet[1525]: E0501 04:16:14.758487    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v podName:7f61e6ee-cf9a-4903-ba51-2a3b6804717f nodeName:}" failed. No retries permitted until 2024-05-01 04:16:46.758469083 +0000 UTC m=+70.044967903 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-4r64v" (UniqueName: "kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v") pod "busybox-fc5497c4f-cc6mk" (UID: "7f61e6ee-cf9a-4903-ba51-2a3b6804717f") : object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:52.287625    4352 command_runner.go:130] > May 01 04:16:15 multinode-289800 kubelet[1525]: E0501 04:16:15.023369    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:52.287625    4352 command_runner.go:130] > May 01 04:16:15 multinode-289800 kubelet[1525]: E0501 04:16:15.024797    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:52.287625    4352 command_runner.go:130] > May 01 04:16:15 multinode-289800 kubelet[1525]: I0501 04:16:15.886834    1525 scope.go:117] "RemoveContainer" containerID="ee2238f98e350e8d80528b60fc5b614ce6048d8b34af2034a9947e26d8e6beab"
	I0501 04:16:52.287625    4352 command_runner.go:130] > May 01 04:16:15 multinode-289800 kubelet[1525]: I0501 04:16:15.887225    1525 scope.go:117] "RemoveContainer" containerID="01deddefba52af094ad6ca083f5c2bee336ace05959dec5d34b15a9ba6cc2539"
	I0501 04:16:52.287625    4352 command_runner.go:130] > May 01 04:16:15 multinode-289800 kubelet[1525]: E0501 04:16:15.887510    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b8d2a827-d9a6-419a-a076-c7695a16a2b5)\"" pod="kube-system/storage-provisioner" podUID="b8d2a827-d9a6-419a-a076-c7695a16a2b5"
	I0501 04:16:52.287625    4352 command_runner.go:130] > May 01 04:16:16 multinode-289800 kubelet[1525]: E0501 04:16:16.024360    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:52.287625    4352 command_runner.go:130] > May 01 04:16:16 multinode-289800 kubelet[1525]: I0501 04:16:16.618138    1525 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
	I0501 04:16:52.288525    4352 command_runner.go:130] > May 01 04:16:29 multinode-289800 kubelet[1525]: I0501 04:16:29.024408    1525 scope.go:117] "RemoveContainer" containerID="01deddefba52af094ad6ca083f5c2bee336ace05959dec5d34b15a9ba6cc2539"
	I0501 04:16:52.288525    4352 command_runner.go:130] > May 01 04:16:37 multinode-289800 kubelet[1525]: I0501 04:16:37.040204    1525 scope.go:117] "RemoveContainer" containerID="3244d1ee5ab428faf09a962609f2c940c36a998727a01b873d382eb5ee600ca3"
	I0501 04:16:52.288525    4352 command_runner.go:130] > May 01 04:16:37 multinode-289800 kubelet[1525]: E0501 04:16:37.057362    1525 iptables.go:577] "Could not set up iptables canary" err=<
	I0501 04:16:52.288525    4352 command_runner.go:130] > May 01 04:16:37 multinode-289800 kubelet[1525]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0501 04:16:52.288525    4352 command_runner.go:130] > May 01 04:16:37 multinode-289800 kubelet[1525]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0501 04:16:52.288525    4352 command_runner.go:130] > May 01 04:16:37 multinode-289800 kubelet[1525]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0501 04:16:52.288525    4352 command_runner.go:130] > May 01 04:16:37 multinode-289800 kubelet[1525]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0501 04:16:52.288525    4352 command_runner.go:130] > May 01 04:16:37 multinode-289800 kubelet[1525]: I0501 04:16:37.089866    1525 scope.go:117] "RemoveContainer" containerID="bbbe9bf276852c1e75b7b472a87e95dcf9a0871f6273a4c312d445eb91dfe06d"
	I0501 04:16:52.288525    4352 command_runner.go:130] > May 01 04:16:37 multinode-289800 kubelet[1525]: E0501 04:16:37.204127    1525 kuberuntime_manager.go:1450] "PodSandboxStatus of sandbox for pod" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 976a9ff433ccbe7ec8d65d4f2797a7885430504d883ab65524674ad5ab989737" podSandboxID="976a9ff433ccbe7ec8d65d4f2797a7885430504d883ab65524674ad5ab989737" pod="kube-system/kube-apiserver-multinode-289800"
	I0501 04:16:52.288525    4352 command_runner.go:130] > May 01 04:16:37 multinode-289800 kubelet[1525]: E0501 04:16:37.204257    1525 generic.go:453] "PLEG: Write status" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 976a9ff433ccbe7ec8d65d4f2797a7885430504d883ab65524674ad5ab989737" pod="kube-system/kube-apiserver-multinode-289800"
	I0501 04:16:52.288525    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 kubelet[1525]: I0501 04:16:47.967198    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c1e1e1d13f303dcd2ce93f0a883ff4415e684c864a3974a393b2aaba3328348"
	I0501 04:16:52.288525    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 kubelet[1525]: I0501 04:16:48.001452    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba9a40d190b009b916e22db66996ed829a6cc973db25f55dae89d747629a546b"
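The kubelet entries above repeat the same two failure modes, "NetworkPluginNotReady ... cni config uninitialized" and "object not registered" volume mounts, until 04:16:16, when the node logs "Fast updating node status as it just became ready". That pattern is consistent with the kubelet starting before any CNI config exists in /etc/cni/net.d and recovering once the network plugin writes one. A minimal way to check this on a live profile (illustrative commands, not captured from this run; the profile name is taken from the log):

    minikube -p multinode-289800 ssh "ls /etc/cni/net.d"
    kubectl --context multinode-289800 get nodes -o wide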
	I0501 04:16:52.349088    4352 logs.go:123] Gathering logs for kube-scheduler [eaf69fce5ee3] ...
	I0501 04:16:52.349088    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaf69fce5ee3"
	I0501 04:16:52.379701    4352 command_runner.go:130] ! I0501 04:15:39.300694       1 serving.go:380] Generated self-signed cert in-memory
	I0501 04:16:52.380642    4352 command_runner.go:130] ! W0501 04:15:42.419811       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0501 04:16:52.380693    4352 command_runner.go:130] ! W0501 04:15:42.419988       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0501 04:16:52.380730    4352 command_runner.go:130] ! W0501 04:15:42.420417       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0501 04:16:52.380780    4352 command_runner.go:130] ! W0501 04:15:42.420580       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0501 04:16:52.380780    4352 command_runner.go:130] ! I0501 04:15:42.513199       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0501 04:16:52.380855    4352 command_runner.go:130] ! I0501 04:15:42.513509       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:16:52.380855    4352 command_runner.go:130] ! I0501 04:15:42.517575       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0501 04:16:52.380855    4352 command_runner.go:130] ! I0501 04:15:42.517756       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0501 04:16:52.380855    4352 command_runner.go:130] ! I0501 04:15:42.519360       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0501 04:16:52.380855    4352 command_runner.go:130] ! I0501 04:15:42.519606       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0501 04:16:52.380855    4352 command_runner.go:130] ! I0501 04:15:42.619527       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
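The three scheduler warnings above are the usual boot-time race: the extension-apiserver-authentication configmap is not yet readable, the scheduler continues without authentication configuration, and its informer caches sync at 04:15:42.619. Only if the warning persisted would the fix the log itself suggests be needed; a filled-in sketch (the binding name is a placeholder, and note that kube-scheduler authenticates as the user system:kube-scheduler rather than a service account):

    kubectl --context multinode-289800 -n kube-system create rolebinding extension-apiserver-authentication-reader-binding --role=extension-apiserver-authentication-reader --user=system:kube-scheduler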
	I0501 04:16:52.382997    4352 logs.go:123] Gathering logs for kube-controller-manager [4b62556f40be] ...
	I0501 04:16:52.382997    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b62556f40be"
	I0501 04:16:52.419922    4352 command_runner.go:130] ! I0501 03:52:09.899238       1 serving.go:380] Generated self-signed cert in-memory
	I0501 04:16:52.419922    4352 command_runner.go:130] ! I0501 03:52:10.399398       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0501 04:16:52.420177    4352 command_runner.go:130] ! I0501 03:52:10.399463       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:16:52.420211    4352 command_runner.go:130] ! I0501 03:52:10.408364       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0501 04:16:52.420211    4352 command_runner.go:130] ! I0501 03:52:10.409326       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0501 04:16:52.420211    4352 command_runner.go:130] ! I0501 03:52:10.409600       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0501 04:16:52.420211    4352 command_runner.go:130] ! I0501 03:52:10.409803       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0501 04:16:52.420211    4352 command_runner.go:130] ! I0501 03:52:15.177592       1 controllermanager.go:759] "Started controller" controller="serviceaccount-token-controller"
	I0501 04:16:52.420211    4352 command_runner.go:130] ! I0501 03:52:15.177638       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0501 04:16:52.420211    4352 command_runner.go:130] ! I0501 03:52:15.223373       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0501 04:16:52.420211    4352 command_runner.go:130] ! I0501 03:52:15.223482       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0501 04:16:52.420211    4352 command_runner.go:130] ! I0501 03:52:15.224504       1 controllermanager.go:759] "Started controller" controller="endpointslice-controller"
	I0501 04:16:52.420211    4352 command_runner.go:130] ! I0501 03:52:15.255847       1 controllermanager.go:759] "Started controller" controller="endpointslice-mirroring-controller"
	I0501 04:16:52.420211    4352 command_runner.go:130] ! I0501 03:52:15.268264       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0501 04:16:52.420374    4352 command_runner.go:130] ! I0501 03:52:15.268388       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0501 04:16:52.420374    4352 command_runner.go:130] ! I0501 03:52:15.282022       1 shared_informer.go:320] Caches are synced for tokens
	I0501 04:16:52.420374    4352 command_runner.go:130] ! I0501 03:52:15.318646       1 controllermanager.go:759] "Started controller" controller="ttl-controller"
	I0501 04:16:52.420420    4352 command_runner.go:130] ! I0501 03:52:15.318861       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0501 04:16:52.420480    4352 command_runner.go:130] ! I0501 03:52:15.319086       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0501 04:16:52.420480    4352 command_runner.go:130] ! I0501 03:52:15.319104       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0501 04:16:52.420480    4352 command_runner.go:130] ! I0501 03:52:15.319092       1 controllermanager.go:737] "Warning: skipping controller" controller="node-route-controller"
	I0501 04:16:52.420523    4352 command_runner.go:130] ! I0501 03:52:15.340327       1 controllermanager.go:759] "Started controller" controller="ttl-after-finished-controller"
	I0501 04:16:52.420571    4352 command_runner.go:130] ! I0501 03:52:15.340404       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0501 04:16:52.420571    4352 command_runner.go:130] ! I0501 03:52:15.340939       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0501 04:16:52.420607    4352 command_runner.go:130] ! I0501 03:52:15.388809       1 controllermanager.go:759] "Started controller" controller="namespace-controller"
	I0501 04:16:52.420607    4352 command_runner.go:130] ! I0501 03:52:15.389274       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0501 04:16:52.420661    4352 command_runner.go:130] ! I0501 03:52:15.389544       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0501 04:16:52.420661    4352 command_runner.go:130] ! I0501 03:52:15.409254       1 controllermanager.go:759] "Started controller" controller="disruption-controller"
	I0501 04:16:52.420695    4352 command_runner.go:130] ! I0501 03:52:15.409799       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0501 04:16:52.420695    4352 command_runner.go:130] ! I0501 03:52:15.410052       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0501 04:16:52.420695    4352 command_runner.go:130] ! I0501 03:52:15.410231       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0501 04:16:52.420727    4352 command_runner.go:130] ! I0501 03:52:15.430420       1 controllermanager.go:759] "Started controller" controller="token-cleaner-controller"
	I0501 04:16:52.422595    4352 command_runner.go:130] ! I0501 03:52:15.432551       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0501 04:16:52.422595    4352 command_runner.go:130] ! I0501 03:52:15.432922       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0501 04:16:52.422595    4352 command_runner.go:130] ! I0501 03:52:15.433117       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0501 04:16:52.422595    4352 command_runner.go:130] ! E0501 03:52:15.460293       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0501 04:16:52.422595    4352 command_runner.go:130] ! I0501 03:52:15.460569       1 controllermanager.go:737] "Warning: skipping controller" controller="service-lb-controller"
	I0501 04:16:52.422595    4352 command_runner.go:130] ! I0501 03:52:15.483810       1 controllermanager.go:759] "Started controller" controller="persistentvolume-binder-controller"
	I0501 04:16:52.422595    4352 command_runner.go:130] ! I0501 03:52:15.484552       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0501 04:16:52.422595    4352 command_runner.go:130] ! I0501 03:52:15.487659       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0501 04:16:52.422595    4352 command_runner.go:130] ! I0501 03:52:15.507112       1 controllermanager.go:759] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0501 04:16:52.422595    4352 command_runner.go:130] ! I0501 03:52:15.507311       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0501 04:16:52.422595    4352 command_runner.go:130] ! I0501 03:52:15.507323       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0501 04:16:52.422595    4352 command_runner.go:130] ! I0501 03:52:15.547225       1 controllermanager.go:759] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0501 04:16:52.422595    4352 command_runner.go:130] ! I0501 03:52:15.547300       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0501 04:16:52.422595    4352 command_runner.go:130] ! I0501 03:52:15.547313       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0501 04:16:52.422595    4352 command_runner.go:130] ! I0501 03:52:15.547413       1 controllermanager.go:737] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0501 04:16:52.422595    4352 command_runner.go:130] ! I0501 03:52:15.652954       1 controllermanager.go:759] "Started controller" controller="pod-garbage-collector-controller"
	I0501 04:16:52.422595    4352 command_runner.go:130] ! I0501 03:52:15.653222       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0501 04:16:52.422595    4352 command_runner.go:130] ! I0501 03:52:15.653240       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0501 04:16:52.422595    4352 command_runner.go:130] ! I0501 03:52:15.940199       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0501 04:16:52.422595    4352 command_runner.go:130] ! I0501 03:52:15.940364       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0501 04:16:52.422595    4352 command_runner.go:130] ! I0501 03:52:15.940714       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0501 04:16:52.422595    4352 command_runner.go:130] ! I0501 03:52:15.940771       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0501 04:16:52.422595    4352 command_runner.go:130] ! I0501 03:52:15.940787       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0501 04:16:52.422595    4352 command_runner.go:130] ! I0501 03:52:15.941029       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0501 04:16:52.422595    4352 command_runner.go:130] ! I0501 03:52:15.941118       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0501 04:16:52.422595    4352 command_runner.go:130] ! I0501 03:52:15.941275       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0501 04:16:52.422595    4352 command_runner.go:130] ! I0501 03:52:15.941300       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0501 04:16:52.422595    4352 command_runner.go:130] ! I0501 03:52:15.941320       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0501 04:16:52.422595    4352 command_runner.go:130] ! I0501 03:52:15.941344       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0501 04:16:52.423257    4352 command_runner.go:130] ! I0501 03:52:15.941368       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:15.941386       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:15.941421       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:15.941561       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:15.941606       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:15.941627       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:15.941813       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:15.942150       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:15.942270       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:15.942319       1 controllermanager.go:759] "Started controller" controller="resourcequota-controller"
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:15.942400       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:15.942767       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:15.942791       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:16.183841       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:16.184178       1 controllermanager.go:759] "Started controller" controller="garbage-collector-controller"
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:16.187151       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:16.187185       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:16.436175       1 controllermanager.go:759] "Started controller" controller="replicaset-controller"
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:16.436331       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:16.436346       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:16.586198       1 controllermanager.go:759] "Started controller" controller="statefulset-controller"
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:16.586602       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:16.586642       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:16.736534       1 controllermanager.go:759] "Started controller" controller="clusterrole-aggregation-controller"
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:16.736573       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:16.736609       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:16.736694       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:16.736706       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:16.891482       1 controllermanager.go:759] "Started controller" controller="serviceaccount-controller"
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:16.891648       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0501 04:16:52.423307    4352 command_runner.go:130] ! I0501 03:52:16.891663       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0501 04:16:52.423866    4352 command_runner.go:130] ! I0501 03:52:17.047956       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:17.050852       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:17.050877       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:17.050942       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:17.050952       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:17.051046       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:17.051073       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:17.051107       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:17.051130       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:17.051145       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:17.051309       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:17.051548       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:17.051654       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:17.186932       1 controllermanager.go:759] "Started controller" controller="bootstrap-signer-controller"
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:17.187092       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:27.350786       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:27.351166       1 controllermanager.go:759] "Started controller" controller="node-ipam-controller"
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:27.352026       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:27.353715       1 shared_informer.go:313] Waiting for caches to sync for node
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:27.368884       1 controllermanager.go:759] "Started controller" controller="ephemeral-volume-controller"
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:27.369241       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:27.369602       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:27.424182       1 controllermanager.go:759] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:27.424472       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:27.436663       1 controllermanager.go:759] "Started controller" controller="replicationcontroller-controller"
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:27.437080       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:27.437177       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:27.448635       1 controllermanager.go:759] "Started controller" controller="job-controller"
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:27.449170       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:27.449409       1 shared_informer.go:313] Waiting for caches to sync for job
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:27.475565       1 controllermanager.go:759] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0501 04:16:52.423910    4352 command_runner.go:130] ! I0501 03:52:27.476051       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0501 04:16:52.424476    4352 command_runner.go:130] ! I0501 03:52:27.476166       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.479486       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.479596       1 controllermanager.go:759] "Started controller" controller="node-lifecycle-controller"
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.479975       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.480750       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.480823       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0501 04:16:52.424591    4352 command_runner.go:130] ! E0501 03:52:27.482546       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.483210       1 controllermanager.go:737] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.495640       1 controllermanager.go:759] "Started controller" controller="persistentvolume-expander-controller"
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.495973       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.496212       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.512223       1 controllermanager.go:759] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.512895       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.513075       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.514982       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.515311       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.515499       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.526940       1 controllermanager.go:759] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.527318       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.527351       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.647646       1 controllermanager.go:759] "Started controller" controller="cronjob-controller"
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.647752       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.647825       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.647836       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.692531       1 controllermanager.go:759] "Started controller" controller="taint-eviction-controller"
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.692762       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.693221       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.693310       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.846904       1 controllermanager.go:759] "Started controller" controller="endpoints-controller"
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.847065       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.847083       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0501 04:16:52.424591    4352 command_runner.go:130] ! I0501 03:52:27.996304       1 controllermanager.go:759] "Started controller" controller="daemonset-controller"
	I0501 04:16:52.425167    4352 command_runner.go:130] ! I0501 03:52:27.996661       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0501 04:16:52.425167    4352 command_runner.go:130] ! I0501 03:52:27.996720       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0501 04:16:52.425167    4352 command_runner.go:130] ! I0501 03:52:28.149439       1 controllermanager.go:759] "Started controller" controller="deployment-controller"
	I0501 04:16:52.425331    4352 command_runner.go:130] ! I0501 03:52:28.149690       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0501 04:16:52.425375    4352 command_runner.go:130] ! I0501 03:52:28.149796       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0501 04:16:52.425505    4352 command_runner.go:130] ! I0501 03:52:28.194448       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0501 04:16:52.425505    4352 command_runner.go:130] ! I0501 03:52:28.194582       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0501 04:16:52.425659    4352 command_runner.go:130] ! I0501 03:52:28.346263       1 controllermanager.go:759] "Started controller" controller="persistentvolume-protection-controller"
	I0501 04:16:52.425794    4352 command_runner.go:130] ! I0501 03:52:28.351074       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.351267       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.389327       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.399508       1 shared_informer.go:320] Caches are synced for expand
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.401911       1 shared_informer.go:320] Caches are synced for namespace
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.402772       1 shared_informer.go:320] Caches are synced for service account
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.414043       1 shared_informer.go:320] Caches are synced for crt configmap
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.415874       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.427291       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.436570       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.437221       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.437315       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.440984       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.447483       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.447500       1 shared_informer.go:320] Caches are synced for endpoint
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.448218       1 shared_informer.go:320] Caches are synced for cronjob
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.451115       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.451167       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.451224       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.451346       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.451726       1 shared_informer.go:320] Caches are synced for deployment
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.451933       1 shared_informer.go:320] Caches are synced for job
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.451734       1 shared_informer.go:320] Caches are synced for PV protection
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.470928       1 shared_informer.go:320] Caches are synced for ephemeral
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.476835       1 shared_informer.go:320] Caches are synced for HPA
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.486851       1 shared_informer.go:320] Caches are synced for stateful set
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.487294       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.507418       1 shared_informer.go:320] Caches are synced for PVC protection
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.510921       1 shared_informer.go:320] Caches are synced for disruption
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.537591       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0501 04:16:52.425859    4352 command_runner.go:130] ! I0501 03:52:28.575135       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0501 04:16:52.426587    4352 command_runner.go:130] ! I0501 03:52:28.595083       1 shared_informer.go:320] Caches are synced for resource quota
	I0501 04:16:52.426587    4352 command_runner.go:130] ! I0501 03:52:28.609954       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-289800\" does not exist"
	I0501 04:16:52.426587    4352 command_runner.go:130] ! I0501 03:52:28.621070       1 shared_informer.go:320] Caches are synced for TTL
	I0501 04:16:52.426587    4352 command_runner.go:130] ! I0501 03:52:28.625042       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0501 04:16:52.426587    4352 command_runner.go:130] ! I0501 03:52:28.628085       1 shared_informer.go:320] Caches are synced for attach detach
	I0501 04:16:52.426587    4352 command_runner.go:130] ! I0501 03:52:28.643871       1 shared_informer.go:320] Caches are synced for resource quota
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:28.653497       1 shared_informer.go:320] Caches are synced for GC
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:28.654871       1 shared_informer.go:320] Caches are synced for node
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:28.654996       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:28.655710       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:28.655972       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:28.656192       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:28.675109       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-289800" podCIDRs=["10.244.0.0/24"]
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:28.682120       1 shared_informer.go:320] Caches are synced for taint
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:28.682644       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:28.682782       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-289800"
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:28.682855       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:28.688787       1 shared_informer.go:320] Caches are synced for persistent volume
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:28.693874       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:28.697526       1 shared_informer.go:320] Caches are synced for daemon sets
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:29.088696       1 shared_informer.go:320] Caches are synced for garbage collector
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:29.088746       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:29.139257       1 shared_informer.go:320] Caches are synced for garbage collector
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:29.739066       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="528.452632ms"
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:29.796611       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="57.235573ms"
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:29.797135       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="429.196µs"
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:29.797745       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="61.4µs"
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:39.341653       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="93.1µs"
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:39.358462       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="77.3µs"
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:39.377150       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="79.9µs"
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:39.403208       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="64.2µs"
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:41.593793       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="63.7µs"
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:41.686793       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="54.969221ms"
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:41.713891       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="26.932914ms"
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:41.714840       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="58.4µs"
	I0501 04:16:52.426718    4352 command_runner.go:130] ! I0501 03:52:43.686562       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0501 04:16:52.427241    4352 command_runner.go:130] ! I0501 03:55:27.159233       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-289800-m02\" does not exist"
	I0501 04:16:52.427349    4352 command_runner.go:130] ! I0501 03:55:27.216693       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-289800-m02" podCIDRs=["10.244.1.0/24"]
	I0501 04:16:52.427349    4352 command_runner.go:130] ! I0501 03:55:28.718620       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-289800-m02"
	I0501 04:16:52.427349    4352 command_runner.go:130] ! I0501 03:55:50.611680       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:52.427349    4352 command_runner.go:130] ! I0501 03:56:17.356814       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="84.46504ms"
	I0501 04:16:52.427349    4352 command_runner.go:130] ! I0501 03:56:17.371366       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.143719ms"
	I0501 04:16:52.427349    4352 command_runner.go:130] ! I0501 03:56:17.372124       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="142.3µs"
	I0501 04:16:52.427349    4352 command_runner.go:130] ! I0501 03:56:17.379164       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.7µs"
	I0501 04:16:52.427349    4352 command_runner.go:130] ! I0501 03:56:19.725403       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.097702ms"
	I0501 04:16:52.427349    4352 command_runner.go:130] ! I0501 03:56:19.728196       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="2.611719ms"
	I0501 04:16:52.427349    4352 command_runner.go:130] ! I0501 03:56:19.839218       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.233167ms"
	I0501 04:16:52.427349    4352 command_runner.go:130] ! I0501 03:56:19.839355       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.1µs"
	I0501 04:16:52.427349    4352 command_runner.go:130] ! I0501 04:00:13.644614       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-289800-m03\" does not exist"
	I0501 04:16:52.427349    4352 command_runner.go:130] ! I0501 04:00:13.644755       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:52.427349    4352 command_runner.go:130] ! I0501 04:00:13.661934       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-289800-m03" podCIDRs=["10.244.2.0/24"]
	I0501 04:16:52.427349    4352 command_runner.go:130] ! I0501 04:00:13.802230       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-289800-m03"
	I0501 04:16:52.427349    4352 command_runner.go:130] ! I0501 04:00:36.640421       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:52.427349    4352 command_runner.go:130] ! I0501 04:08:13.948279       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:52.427349    4352 command_runner.go:130] ! I0501 04:10:57.898286       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:52.427349    4352 command_runner.go:130] ! I0501 04:11:04.117706       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:52.427349    4352 command_runner.go:130] ! I0501 04:11:04.120427       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-289800-m03\" does not exist"
	I0501 04:16:52.427349    4352 command_runner.go:130] ! I0501 04:11:04.128942       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-289800-m03" podCIDRs=["10.244.3.0/24"]
	I0501 04:16:52.427349    4352 command_runner.go:130] ! I0501 04:11:11.358226       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:52.427349    4352 command_runner.go:130] ! I0501 04:12:49.097072       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:54.971275    4352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 04:16:55.002948    4352 command_runner.go:130] > 1873
	I0501 04:16:55.004048    4352 api_server.go:72] duration metric: took 1m7.1057338s to wait for apiserver process to appear ...
	I0501 04:16:55.004146    4352 api_server.go:88] waiting for apiserver healthz status ...
	I0501 04:16:55.014570    4352 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0501 04:16:55.045902    4352 command_runner.go:130] > 18cd30f3ad28
	I0501 04:16:55.045902    4352 logs.go:276] 1 containers: [18cd30f3ad28]
	I0501 04:16:55.059307    4352 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0501 04:16:55.087490    4352 command_runner.go:130] > 34892fdb6898
	I0501 04:16:55.088578    4352 logs.go:276] 1 containers: [34892fdb6898]
	I0501 04:16:55.100098    4352 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0501 04:16:55.125435    4352 command_runner.go:130] > b8a9b405d76b
	I0501 04:16:55.125435    4352 command_runner.go:130] > 8a0208aeafcf
	I0501 04:16:55.125435    4352 command_runner.go:130] > 15c4496e3a9f
	I0501 04:16:55.125435    4352 command_runner.go:130] > 3e8d5ff9a9e4
	I0501 04:16:55.125534    4352 logs.go:276] 4 containers: [b8a9b405d76b 8a0208aeafcf 15c4496e3a9f 3e8d5ff9a9e4]
	I0501 04:16:55.136812    4352 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0501 04:16:55.161323    4352 command_runner.go:130] > eaf69fce5ee3
	I0501 04:16:55.161323    4352 command_runner.go:130] > 06f1f84bfde1
	I0501 04:16:55.161323    4352 logs.go:276] 2 containers: [eaf69fce5ee3 06f1f84bfde1]
	I0501 04:16:55.171247    4352 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0501 04:16:55.209491    4352 command_runner.go:130] > 3efcc92f817e
	I0501 04:16:55.209538    4352 command_runner.go:130] > 502684407b0c
	I0501 04:16:55.209538    4352 logs.go:276] 2 containers: [3efcc92f817e 502684407b0c]
	I0501 04:16:55.221292    4352 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0501 04:16:55.245849    4352 command_runner.go:130] > 66a1b89e6733
	I0501 04:16:55.245849    4352 command_runner.go:130] > 4b62556f40be
	I0501 04:16:55.247168    4352 logs.go:276] 2 containers: [66a1b89e6733 4b62556f40be]
	I0501 04:16:55.260218    4352 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0501 04:16:55.288049    4352 command_runner.go:130] > b7cae3f6b88b
	I0501 04:16:55.288155    4352 command_runner.go:130] > 6d5f881ef398
	I0501 04:16:55.288155    4352 logs.go:276] 2 containers: [b7cae3f6b88b 6d5f881ef398]
	I0501 04:16:55.288236    4352 logs.go:123] Gathering logs for kube-proxy [502684407b0c] ...
	I0501 04:16:55.288236    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 502684407b0c"
	I0501 04:16:55.318931    4352 command_runner.go:130] ! I0501 03:52:31.254714       1 server_linux.go:69] "Using iptables proxy"
	I0501 04:16:55.318931    4352 command_runner.go:130] ! I0501 03:52:31.309383       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.28.209.152"]
	I0501 04:16:55.318931    4352 command_runner.go:130] ! I0501 03:52:31.368810       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0501 04:16:55.318931    4352 command_runner.go:130] ! I0501 03:52:31.368955       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0501 04:16:55.318931    4352 command_runner.go:130] ! I0501 03:52:31.368982       1 server_linux.go:165] "Using iptables Proxier"
	I0501 04:16:55.318931    4352 command_runner.go:130] ! I0501 03:52:31.375383       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0501 04:16:55.318931    4352 command_runner.go:130] ! I0501 03:52:31.376367       1 server.go:872] "Version info" version="v1.30.0"
	I0501 04:16:55.318931    4352 command_runner.go:130] ! I0501 03:52:31.376406       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:16:55.318931    4352 command_runner.go:130] ! I0501 03:52:31.379637       1 config.go:192] "Starting service config controller"
	I0501 04:16:55.318931    4352 command_runner.go:130] ! I0501 03:52:31.380342       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0501 04:16:55.318931    4352 command_runner.go:130] ! I0501 03:52:31.380587       1 config.go:101] "Starting endpoint slice config controller"
	I0501 04:16:55.318931    4352 command_runner.go:130] ! I0501 03:52:31.380650       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0501 04:16:55.318931    4352 command_runner.go:130] ! I0501 03:52:31.383140       1 config.go:319] "Starting node config controller"
	I0501 04:16:55.318931    4352 command_runner.go:130] ! I0501 03:52:31.383173       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0501 04:16:55.318931    4352 command_runner.go:130] ! I0501 03:52:31.480698       1 shared_informer.go:320] Caches are synced for service config
	I0501 04:16:55.318931    4352 command_runner.go:130] ! I0501 03:52:31.481316       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0501 04:16:55.318931    4352 command_runner.go:130] ! I0501 03:52:31.483428       1 shared_informer.go:320] Caches are synced for node config
	I0501 04:16:55.322427    4352 logs.go:123] Gathering logs for Docker ...
	I0501 04:16:55.322519    4352 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0501 04:16:55.359690    4352 command_runner.go:130] > May 01 04:14:08 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0501 04:16:55.359783    4352 command_runner.go:130] > May 01 04:14:08 minikube cri-dockerd[222]: time="2024-05-01T04:14:08Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0501 04:16:55.359783    4352 command_runner.go:130] > May 01 04:14:08 minikube cri-dockerd[222]: time="2024-05-01T04:14:08Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0501 04:16:55.359783    4352 command_runner.go:130] > May 01 04:14:08 minikube cri-dockerd[222]: time="2024-05-01T04:14:08Z" level=info msg="Start docker client with request timeout 0s"
	I0501 04:16:55.359783    4352 command_runner.go:130] > May 01 04:14:08 minikube cri-dockerd[222]: time="2024-05-01T04:14:08Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0501 04:16:55.359876    4352 command_runner.go:130] > May 01 04:14:09 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0501 04:16:55.359876    4352 command_runner.go:130] > May 01 04:14:09 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0501 04:16:55.359876    4352 command_runner.go:130] > May 01 04:14:09 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0501 04:16:55.359944    4352 command_runner.go:130] > May 01 04:14:11 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0501 04:16:55.359944    4352 command_runner.go:130] > May 01 04:14:11 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0501 04:16:55.359984    4352 command_runner.go:130] > May 01 04:14:11 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0501 04:16:55.359984    4352 command_runner.go:130] > May 01 04:14:11 minikube cri-dockerd[414]: time="2024-05-01T04:14:11Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0501 04:16:55.360023    4352 command_runner.go:130] > May 01 04:14:11 minikube cri-dockerd[414]: time="2024-05-01T04:14:11Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0501 04:16:55.360023    4352 command_runner.go:130] > May 01 04:14:11 minikube cri-dockerd[414]: time="2024-05-01T04:14:11Z" level=info msg="Start docker client with request timeout 0s"
	I0501 04:16:55.360023    4352 command_runner.go:130] > May 01 04:14:11 minikube cri-dockerd[414]: time="2024-05-01T04:14:11Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0501 04:16:55.360023    4352 command_runner.go:130] > May 01 04:14:11 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0501 04:16:55.360023    4352 command_runner.go:130] > May 01 04:14:11 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0501 04:16:55.360023    4352 command_runner.go:130] > May 01 04:14:11 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0501 04:16:55.360023    4352 command_runner.go:130] > May 01 04:14:13 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0501 04:16:55.360023    4352 command_runner.go:130] > May 01 04:14:13 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0501 04:16:55.360023    4352 command_runner.go:130] > May 01 04:14:13 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0501 04:16:55.360023    4352 command_runner.go:130] > May 01 04:14:13 minikube cri-dockerd[423]: time="2024-05-01T04:14:13Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0501 04:16:55.360023    4352 command_runner.go:130] > May 01 04:14:13 minikube cri-dockerd[423]: time="2024-05-01T04:14:13Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0501 04:16:55.360023    4352 command_runner.go:130] > May 01 04:14:13 minikube cri-dockerd[423]: time="2024-05-01T04:14:13Z" level=info msg="Start docker client with request timeout 0s"
	I0501 04:16:55.360023    4352 command_runner.go:130] > May 01 04:14:13 minikube cri-dockerd[423]: time="2024-05-01T04:14:13Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0501 04:16:55.360023    4352 command_runner.go:130] > May 01 04:14:13 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0501 04:16:55.360023    4352 command_runner.go:130] > May 01 04:14:13 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0501 04:16:55.360023    4352 command_runner.go:130] > May 01 04:14:13 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0501 04:16:55.360023    4352 command_runner.go:130] > May 01 04:14:16 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0501 04:16:55.360023    4352 command_runner.go:130] > May 01 04:14:16 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0501 04:16:55.360023    4352 command_runner.go:130] > May 01 04:14:16 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0501 04:16:55.360023    4352 command_runner.go:130] > May 01 04:14:16 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0501 04:16:55.360023    4352 command_runner.go:130] > May 01 04:14:16 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0501 04:16:55.360023    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 systemd[1]: Starting Docker Application Container Engine...
	I0501 04:16:55.360356    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[651]: time="2024-05-01T04:14:59.653438562Z" level=info msg="Starting up"
	I0501 04:16:55.360356    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[651]: time="2024-05-01T04:14:59.657791992Z" level=info msg="containerd not running, starting managed containerd"
	I0501 04:16:55.360356    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[651]: time="2024-05-01T04:14:59.663198880Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=657
	I0501 04:16:55.360356    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.702542137Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0501 04:16:55.360356    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.732549261Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0501 04:16:55.360465    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.732711054Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0501 04:16:55.360465    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.732864148Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0501 04:16:55.360465    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.732947945Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0501 04:16:55.360465    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.734019203Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0501 04:16:55.360562    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.734463486Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0501 04:16:55.360599    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.735002764Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0501 04:16:55.360599    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.735178358Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0501 04:16:55.360599    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.735234755Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0501 04:16:55.360599    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.735254555Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0501 04:16:55.360673    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.735695937Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0501 04:16:55.360673    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.736590002Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0501 04:16:55.360755    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.739236298Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0501 04:16:55.360755    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.739286896Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0501 04:16:55.360871    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.739479489Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0501 04:16:55.360871    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.739575785Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0501 04:16:55.360948    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.740111064Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0501 04:16:55.360948    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.740186861Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0501 04:16:55.360948    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.740203361Z" level=info msg="metadata content store policy set" policy=shared
	I0501 04:16:55.360948    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.747848861Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0501 04:16:55.360948    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.747973456Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0501 04:16:55.360948    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748003155Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0501 04:16:55.361041    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748021254Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0501 04:16:55.361041    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748087351Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0501 04:16:55.361041    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748176348Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0501 04:16:55.361041    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748553033Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0501 04:16:55.361041    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748726426Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0501 04:16:55.361146    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748830822Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0501 04:16:55.361146    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748853521Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0501 04:16:55.361146    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748872121Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0501 04:16:55.361146    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748887020Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0501 04:16:55.361236    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748901420Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0501 04:16:55.361236    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748916819Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0501 04:16:55.361400    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748932318Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0501 04:16:55.361400    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748946618Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0501 04:16:55.361400    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748960717Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0501 04:16:55.361490    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748974817Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0501 04:16:55.361510    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748996916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.361510    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749013215Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.361510    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749071613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.361510    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749094412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.361589    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749109411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.361589    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749127511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.361589    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749141410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.361673    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749156310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.361673    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749171209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.361673    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749188008Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.361673    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749210407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.361755    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749227507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.361755    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749241106Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.361755    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749261705Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0501 04:16:55.361755    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749287004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.361836    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749377501Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.361836    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749401900Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0501 04:16:55.361836    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749458198Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0501 04:16:55.361836    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749553894Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0501 04:16:55.361836    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749626691Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0501 04:16:55.361945    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749759886Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0501 04:16:55.362035    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749839283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.362035    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749953278Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0501 04:16:55.362114    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749974077Z" level=info msg="NRI interface is disabled by configuration."
	I0501 04:16:55.362130    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.750421860Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0501 04:16:55.362130    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.750811045Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0501 04:16:55.362130    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.751024636Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0501 04:16:55.362130    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.751103833Z" level=info msg="containerd successfully booted in 0.052926s"
	I0501 04:16:55.362209    4352 command_runner.go:130] > May 01 04:15:00 multinode-289800 dockerd[651]: time="2024-05-01T04:15:00.725111442Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0501 04:16:55.362209    4352 command_runner.go:130] > May 01 04:15:00 multinode-289800 dockerd[651]: time="2024-05-01T04:15:00.993003995Z" level=info msg="Loading containers: start."
	I0501 04:16:55.362209    4352 command_runner.go:130] > May 01 04:15:01 multinode-289800 dockerd[651]: time="2024-05-01T04:15:01.418709237Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0501 04:16:55.362209    4352 command_runner.go:130] > May 01 04:15:01 multinode-289800 dockerd[651]: time="2024-05-01T04:15:01.511990518Z" level=info msg="Loading containers: done."
	I0501 04:16:55.362293    4352 command_runner.go:130] > May 01 04:15:01 multinode-289800 dockerd[651]: time="2024-05-01T04:15:01.539659513Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0501 04:16:55.362293    4352 command_runner.go:130] > May 01 04:15:01 multinode-289800 dockerd[651]: time="2024-05-01T04:15:01.540534438Z" level=info msg="Daemon has completed initialization"
	I0501 04:16:55.362293    4352 command_runner.go:130] > May 01 04:15:01 multinode-289800 dockerd[651]: time="2024-05-01T04:15:01.598935417Z" level=info msg="API listen on [::]:2376"
	I0501 04:16:55.362293    4352 command_runner.go:130] > May 01 04:15:01 multinode-289800 systemd[1]: Started Docker Application Container Engine.
	I0501 04:16:55.362293    4352 command_runner.go:130] > May 01 04:15:01 multinode-289800 dockerd[651]: time="2024-05-01T04:15:01.599463032Z" level=info msg="API listen on /var/run/docker.sock"
	I0501 04:16:55.362378    4352 command_runner.go:130] > May 01 04:15:27 multinode-289800 dockerd[651]: time="2024-05-01T04:15:27.764446334Z" level=info msg="Processing signal 'terminated'"
	I0501 04:16:55.362378    4352 command_runner.go:130] > May 01 04:15:27 multinode-289800 systemd[1]: Stopping Docker Application Container Engine...
	I0501 04:16:55.362417    4352 command_runner.go:130] > May 01 04:15:27 multinode-289800 dockerd[651]: time="2024-05-01T04:15:27.766325752Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0501 04:16:55.362442    4352 command_runner.go:130] > May 01 04:15:27 multinode-289800 dockerd[651]: time="2024-05-01T04:15:27.766547266Z" level=info msg="Daemon shutdown complete"
	I0501 04:16:55.362442    4352 command_runner.go:130] > May 01 04:15:27 multinode-289800 dockerd[651]: time="2024-05-01T04:15:27.766599570Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0501 04:16:55.362442    4352 command_runner.go:130] > May 01 04:15:27 multinode-289800 dockerd[651]: time="2024-05-01T04:15:27.766627071Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0501 04:16:55.362442    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 systemd[1]: docker.service: Deactivated successfully.
	I0501 04:16:55.362520    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 systemd[1]: Stopped Docker Application Container Engine.
	I0501 04:16:55.362520    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 systemd[1]: Starting Docker Application Container Engine...
	I0501 04:16:55.362520    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:28.848356633Z" level=info msg="Starting up"
	I0501 04:16:55.362520    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:28.852105170Z" level=info msg="containerd not running, starting managed containerd"
	I0501 04:16:55.362520    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:28.856097222Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1051
	I0501 04:16:55.362604    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.886653253Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0501 04:16:55.362604    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.918280652Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0501 04:16:55.362604    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.918435561Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0501 04:16:55.362604    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.918674977Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0501 04:16:55.362701    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.918835587Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0501 04:16:55.362701    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.918914392Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0501 04:16:55.362701    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.919007298Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0501 04:16:55.362782    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.919224411Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0501 04:16:55.362782    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.919342019Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0501 04:16:55.362782    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.919363920Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0501 04:16:55.362860    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.919374921Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0501 04:16:55.362860    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.919401422Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0501 04:16:55.362860    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.919522430Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0501 04:16:55.362940    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.922355909Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0501 04:16:55.362940    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.922472116Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0501 04:16:55.362940    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.922606725Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0501 04:16:55.363018    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.922701131Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0501 04:16:55.363018    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.922740333Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0501 04:16:55.363018    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.922844740Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0501 04:16:55.363097    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.922863441Z" level=info msg="metadata content store policy set" policy=shared
	I0501 04:16:55.363097    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.923199662Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0501 04:16:55.363097    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.923345572Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0501 04:16:55.363097    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.923371973Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0501 04:16:55.363097    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.923387074Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0501 04:16:55.363194    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.923416076Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0501 04:16:55.363194    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.923482380Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0501 04:16:55.363194    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.923717595Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0501 04:16:55.363276    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.923914208Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0501 04:16:55.363276    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924012314Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0501 04:16:55.363276    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924084218Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0501 04:16:55.363276    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924103120Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0501 04:16:55.363358    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924116520Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0501 04:16:55.363358    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924137922Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0501 04:16:55.363358    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924154823Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0501 04:16:55.363440    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924172824Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0501 04:16:55.363440    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924195925Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0501 04:16:55.363440    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924208026Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0501 04:16:55.363520    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924219327Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0501 04:16:55.363520    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924257229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.363520    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924272330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.363520    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924285031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.363520    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924297632Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.363602    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924325534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.363602    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924337534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.363602    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924348235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.363682    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924360536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.363682    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924373137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.363682    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924390538Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.363763    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924403039Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.363763    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924414139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.363763    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924426140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.363857    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924440741Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0501 04:16:55.363857    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924459642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.363857    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924475143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.363857    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924504745Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0501 04:16:55.363857    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924545247Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0501 04:16:55.363857    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924640554Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0501 04:16:55.363857    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924658655Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0501 04:16:55.364031    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924671555Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0501 04:16:55.364031    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924736560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0501 04:16:55.364120    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924890569Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0501 04:16:55.364120    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924908370Z" level=info msg="NRI interface is disabled by configuration."
	I0501 04:16:55.364210    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.925252392Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0501 04:16:55.364210    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.925540810Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0501 04:16:55.364210    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.925606615Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0501 04:16:55.364210    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.925720522Z" level=info msg="containerd successfully booted in 0.040328s"
	I0501 04:16:55.364210    4352 command_runner.go:130] > May 01 04:15:29 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:29.902259635Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0501 04:16:55.364293    4352 command_runner.go:130] > May 01 04:15:29 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:29.938734241Z" level=info msg="Loading containers: start."
	I0501 04:16:55.364293    4352 command_runner.go:130] > May 01 04:15:30 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:30.252276255Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0501 04:16:55.364293    4352 command_runner.go:130] > May 01 04:15:30 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:30.346319398Z" level=info msg="Loading containers: done."
	I0501 04:16:55.364382    4352 command_runner.go:130] > May 01 04:15:30 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:30.374198460Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0501 04:16:55.364382    4352 command_runner.go:130] > May 01 04:15:30 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:30.374439776Z" level=info msg="Daemon has completed initialization"
	I0501 04:16:55.364382    4352 command_runner.go:130] > May 01 04:15:30 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:30.424572544Z" level=info msg="API listen on [::]:2376"
	I0501 04:16:55.364382    4352 command_runner.go:130] > May 01 04:15:30 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:30.424740154Z" level=info msg="API listen on /var/run/docker.sock"
	I0501 04:16:55.364382    4352 command_runner.go:130] > May 01 04:15:30 multinode-289800 systemd[1]: Started Docker Application Container Engine.
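
The boot sequence above shows containerd skipping the btrfs, zfs, aufs, and devmapper snapshotters before the daemon settles on the overlay2 graphdriver ("storage-driver=overlay2"). A quick way to confirm which driver a running daemon chose is to query the engine's Info endpoint; the sketch below uses the Docker Go SDK and is illustrative only (it is not part of the test harness, and it assumes the engine is reachable via the standard DOCKER_HOST environment variables):

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/docker/docker/client"
    )

    func main() {
        // Connect using the standard environment configuration (DOCKER_HOST etc.).
        cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
        if err != nil {
            log.Fatal(err)
        }
        defer cli.Close()

        // Info reports the storage driver the daemon settled on -- "overlay2"
        // in the log above -- along with the server version.
        info, err := cli.Info(context.Background())
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("storage driver:", info.Driver)
        fmt.Println("server version:", info.ServerVersion)
    }
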
	I0501 04:16:55.364382    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0501 04:16:55.364470    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0501 04:16:55.364470    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0501 04:16:55.364470    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Start docker client with request timeout 0s"
	I0501 04:16:55.364470    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0501 04:16:55.364470    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Loaded network plugin cni"
	I0501 04:16:55.364579    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0501 04:16:55.364579    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0501 04:16:55.364579    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0501 04:16:55.364579    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0501 04:16:55.364716    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Start cri-dockerd grpc backend"
	I0501 04:16:55.364716    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0501 04:16:55.364716    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:37Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-8w9hq_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"9d509d032dc607c6f771d62e39b125d9ec4ef121fdbac0798c929fe3f1662c88\""
	I0501 04:16:55.364803    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:37Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-fc5497c4f-cc6mk_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"79bf9ebb58e36ddfba4654e8de212598f75bb256849f4fa384c80d54954f68f5\""
	I0501 04:16:55.364803    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:37Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-x9zrw_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"baf9e690eb533d1d1d65dee3905f907946c145ab490fd4e62c3d724a0ba12193\""
	I0501 04:16:55.364892    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:37.812954162Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:55.364928    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:37.813140474Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:55.364928    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:37.813251281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.364928    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:37.813750813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.364928    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:37.908552604Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:55.364928    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:37.908932028Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:55.364928    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:37.908977330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.364928    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:37.909354354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.364928    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a8e27176eab83655d3f2a52c63326669ef8c796c68155930f53f421789d826f1/resolv.conf as [nameserver 172.28.208.1]"
	I0501 04:16:55.364928    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.022633513Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:55.365153    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.022720619Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:55.365153    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.022735220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.365153    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.024008700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.365153    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.032046108Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:55.365271    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.032104212Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:55.365295    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.032117713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.365295    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.032205718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.365295    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3fd53aa8d8f5d6402b604adf1c8c8ae2b5a8c80b90e94152f45e7cb16a71fe46/resolv.conf as [nameserver 172.28.208.1]"
	I0501 04:16:55.365295    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/51e331e75da779107616d5efa0d497152d9c85407f1c172c9ae536bcc2b22bad/resolv.conf as [nameserver 172.28.208.1]"
	I0501 04:16:55.365295    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6e076eed49263cec5b0b06bbaa425cab2bf4a4b0a05e6dfa37993b20dff5ed93/resolv.conf as [nameserver 172.28.208.1]"
	I0501 04:16:55.365295    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.361204210Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:55.365295    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.366294631Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:55.365295    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.366382437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.365295    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.366929671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.365295    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.427356590Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:55.365295    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.427966129Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:55.365295    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.428178542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.365295    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.428971092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.365295    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.563334483Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:55.365295    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.563717708Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:55.365295    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.568278296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.365295    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.568462908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.365295    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.619028803Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:55.365295    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.619423228Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:55.365295    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.619676644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.365295    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.620258481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.365853    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:42Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0501 04:16:55.365853    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.647452681Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:55.365853    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.648388440Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:55.365853    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.648417242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.365853    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.648703160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.365853    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.650660084Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:55.365853    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.650945902Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:55.365853    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.652733715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.366123    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.653556567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.366123    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.703188303Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:55.366123    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.703325612Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:55.366123    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.703348713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.366123    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.704951615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.366123    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/65bff4b6a8ae020fee0da9e1a818c4bac4d9a43a831eb7b5550b254c1f181ec7/resolv.conf as [nameserver 172.28.208.1]"
	I0501 04:16:55.366123    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9055d30512df38a5bce19ed5afcfdc450a7bd87a1eb169342c8bc7a42e81666f/resolv.conf as [nameserver 172.28.208.1]"
	I0501 04:16:55.366123    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.160153282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:55.366123    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.160628512Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:55.366123    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.160751120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.366123    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.161166246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.366123    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f79e484da66a15667f79326d8bae0a570ba551fd2e02926fd663a292f6b15752/resolv.conf as [nameserver 172.28.208.1]"
	I0501 04:16:55.366123    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.303671652Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:55.366123    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.303759357Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:55.366123    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.304597710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.366123    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.304856126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.366123    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.623383256Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:55.366123    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.623630372Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:55.366123    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.623719877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.366123    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.624154405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.366123    4352 command_runner.go:130] > May 01 04:16:15 multinode-289800 dockerd[1045]: time="2024-05-01T04:16:15.086534690Z" level=info msg="ignoring event" container=01deddefba52af094ad6ca083f5c2bee336ace05959dec5d34b15a9ba6cc2539 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0501 04:16:55.366712    4352 command_runner.go:130] > May 01 04:16:15 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:15.087315924Z" level=info msg="shim disconnected" id=01deddefba52af094ad6ca083f5c2bee336ace05959dec5d34b15a9ba6cc2539 namespace=moby
	I0501 04:16:55.366712    4352 command_runner.go:130] > May 01 04:16:15 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:15.087789544Z" level=warning msg="cleaning up after shim disconnected" id=01deddefba52af094ad6ca083f5c2bee336ace05959dec5d34b15a9ba6cc2539 namespace=moby
	I0501 04:16:55.366712    4352 command_runner.go:130] > May 01 04:16:15 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:15.089400515Z" level=info msg="cleaning up dead shim" namespace=moby
	I0501 04:16:55.366712    4352 command_runner.go:130] > May 01 04:16:29 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:29.233206077Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:55.366830    4352 command_runner.go:130] > May 01 04:16:29 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:29.233350185Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:55.366865    4352 command_runner.go:130] > May 01 04:16:29 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:29.233373086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.366865    4352 command_runner.go:130] > May 01 04:16:29 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:29.235465402Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.366865    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.458837761Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:55.366947    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.459864323Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:55.366984    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.464281891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.366984    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.464897329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.366984    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.543149980Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:55.366984    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.543283788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:55.366984    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.543320690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.366984    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.543548404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.366984    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.598181021Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:55.366984    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.598854262Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:55.366984    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.599065375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.366984    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.600816581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.366984    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:16:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ba9a40d190b009b916e22db66996ed829a6cc973db25f55dae89d747629a546b/resolv.conf as [nameserver 172.28.208.1]"
	I0501 04:16:55.366984    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:16:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2c1e1e1d13f303dcd2ce93f0a883ff4415e684c864a3974a393b2aaba3328348/resolv.conf as [nameserver 172.28.208.1]"
	I0501 04:16:55.366984    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:16:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b85f507755ab5fd65a5328f5567d969dd5f974c01ee4c5d8e38f03dc6ec900a2/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0501 04:16:55.366984    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.282921443Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:55.366984    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.283150129Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:55.366984    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.283743193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.366984    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.291296831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.366984    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.360201124Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:55.366984    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.360588900Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:55.366984    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.360677995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.366984    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.361100969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.366984    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.575166498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:16:55.366984    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.575320589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:16:55.367571    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.575446381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.367571    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.576248232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:16:55.367571    4352 command_runner.go:130] > May 01 04:16:51 multinode-289800 dockerd[1045]: 2024/05/01 04:16:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:16:55.367571    4352 command_runner.go:130] > May 01 04:16:51 multinode-289800 dockerd[1045]: 2024/05/01 04:16:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:16:55.367716    4352 command_runner.go:130] > May 01 04:16:51 multinode-289800 dockerd[1045]: 2024/05/01 04:16:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:16:55.367780    4352 command_runner.go:130] > May 01 04:16:51 multinode-289800 dockerd[1045]: 2024/05/01 04:16:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:16:55.367805    4352 command_runner.go:130] > May 01 04:16:51 multinode-289800 dockerd[1045]: 2024/05/01 04:16:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:16:55.367805    4352 command_runner.go:130] > May 01 04:16:51 multinode-289800 dockerd[1045]: 2024/05/01 04:16:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:16:55.367851    4352 command_runner.go:130] > May 01 04:16:51 multinode-289800 dockerd[1045]: 2024/05/01 04:16:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:16:55.367893    4352 command_runner.go:130] > May 01 04:16:51 multinode-289800 dockerd[1045]: 2024/05/01 04:16:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:16:55.367893    4352 command_runner.go:130] > May 01 04:16:51 multinode-289800 dockerd[1045]: 2024/05/01 04:16:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:16:55.368063    4352 command_runner.go:130] > May 01 04:16:51 multinode-289800 dockerd[1045]: 2024/05/01 04:16:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:16:55.368063    4352 command_runner.go:130] > May 01 04:16:51 multinode-289800 dockerd[1045]: 2024/05/01 04:16:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:16:55.368063    4352 command_runner.go:130] > May 01 04:16:52 multinode-289800 dockerd[1045]: 2024/05/01 04:16:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:16:55.368063    4352 command_runner.go:130] > May 01 04:16:52 multinode-289800 dockerd[1045]: 2024/05/01 04:16:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:16:55.368063    4352 command_runner.go:130] > May 01 04:16:52 multinode-289800 dockerd[1045]: 2024/05/01 04:16:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:16:55.368063    4352 command_runner.go:130] > May 01 04:16:55 multinode-289800 dockerd[1045]: 2024/05/01 04:16:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
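
The repeated "superfluous response.WriteHeader" lines above come from Go's net/http package: once a handler has written response headers, any further WriteHeader call is ignored and logged with the caller's location, and here the otelhttp wrapper inside dockerd is making that second call. A minimal, self-contained reproduction of the warning (an illustrative sketch, not dockerd's code):

    package main

    import (
        "log"
        "net/http"
    )

    // handler calls WriteHeader twice. The second call is ignored and net/http
    // logs "http: superfluous response.WriteHeader call from ...", the same
    // warning shape seen in the dockerd log above.
    func handler(w http.ResponseWriter, r *http.Request) {
        w.WriteHeader(http.StatusOK)
        w.WriteHeader(http.StatusInternalServerError) // superfluous: headers already sent
    }

    func main() {
        http.HandleFunc("/", handler)
        log.Fatal(http.ListenAndServe("127.0.0.1:8080", nil))
    }

Running this and issuing any request against 127.0.0.1:8080 prints the warning on the server's log, with the reproducer's own function name and source line in place of the otelhttp frame.
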
	I0501 04:16:55.406201    4352 logs.go:123] Gathering logs for coredns [3e8d5ff9a9e4] ...
	I0501 04:16:55.406201    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8d5ff9a9e4"
	I0501 04:16:55.441447    4352 command_runner.go:130] > .:53
	I0501 04:16:55.441447    4352 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 93f4fe825695a41d5751760d66496ca9593f0bf8b9ec4239a6fbc33c30b6f070cfe5a066c5977052ab0ea80abbafa6083758e385108cb741ae48dc4b780ff0f9
	I0501 04:16:55.441447    4352 command_runner.go:130] > CoreDNS-1.11.1
	I0501 04:16:55.441447    4352 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0501 04:16:55.441658    4352 command_runner.go:130] > [INFO] 127.0.0.1:47823 - 12804 "HINFO IN 6026210510891441927.5093937837002421400. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.138242746s
	I0501 04:16:55.441658    4352 command_runner.go:130] > [INFO] 10.244.0.4:41822 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.208275106s
	I0501 04:16:55.441658    4352 command_runner.go:130] > [INFO] 10.244.0.4:42126 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.044846324s
	I0501 04:16:55.441658    4352 command_runner.go:130] > [INFO] 10.244.1.2:55497 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000133701s
	I0501 04:16:55.441658    4352 command_runner.go:130] > [INFO] 10.244.1.2:47095 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000068901s
	I0501 04:16:55.441730    4352 command_runner.go:130] > [INFO] 10.244.0.4:34122 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000644805s
	I0501 04:16:55.441730    4352 command_runner.go:130] > [INFO] 10.244.0.4:46878 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000252202s
	I0501 04:16:55.441791    4352 command_runner.go:130] > [INFO] 10.244.0.4:40098 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000136701s
	I0501 04:16:55.441791    4352 command_runner.go:130] > [INFO] 10.244.0.4:35873 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.03321874s
	I0501 04:16:55.441791    4352 command_runner.go:130] > [INFO] 10.244.1.2:36243 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.016690721s
	I0501 04:16:55.441791    4352 command_runner.go:130] > [INFO] 10.244.1.2:38582 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000648s
	I0501 04:16:55.441791    4352 command_runner.go:130] > [INFO] 10.244.1.2:43903 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000106801s
	I0501 04:16:55.441791    4352 command_runner.go:130] > [INFO] 10.244.1.2:34736 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000102201s
	I0501 04:16:55.441880    4352 command_runner.go:130] > [INFO] 10.244.0.4:54471 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000213002s
	I0501 04:16:55.441880    4352 command_runner.go:130] > [INFO] 10.244.0.4:34585 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000266702s
	I0501 04:16:55.441925    4352 command_runner.go:130] > [INFO] 10.244.1.2:55135 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142801s
	I0501 04:16:55.441925    4352 command_runner.go:130] > [INFO] 10.244.1.2:53626 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000732s
	I0501 04:16:55.441968    4352 command_runner.go:130] > [INFO] 10.244.0.4:57975 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000425703s
	I0501 04:16:55.441968    4352 command_runner.go:130] > [INFO] 10.244.0.4:51644 - 5 "PTR IN 1.208.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000121401s
	I0501 04:16:55.441968    4352 command_runner.go:130] > [INFO] 10.244.1.2:42930 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000246601s
	I0501 04:16:55.442011    4352 command_runner.go:130] > [INFO] 10.244.1.2:59495 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000199302s
	I0501 04:16:55.442011    4352 command_runner.go:130] > [INFO] 10.244.1.2:34672 - 5 "PTR IN 1.208.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000155401s
	I0501 04:16:55.442011    4352 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0501 04:16:55.442069    4352 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0501 04:16:55.444176    4352 logs.go:123] Gathering logs for coredns [15c4496e3a9f] ...
	I0501 04:16:55.444211    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15c4496e3a9f"
	I0501 04:16:55.477397    4352 command_runner.go:130] > .:53
	I0501 04:16:55.477478    4352 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 93f4fe825695a41d5751760d66496ca9593f0bf8b9ec4239a6fbc33c30b6f070cfe5a066c5977052ab0ea80abbafa6083758e385108cb741ae48dc4b780ff0f9
	I0501 04:16:55.477478    4352 command_runner.go:130] > CoreDNS-1.11.1
	I0501 04:16:55.477478    4352 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0501 04:16:55.477478    4352 command_runner.go:130] > [INFO] 127.0.0.1:39552 - 50904 "HINFO IN 5304382971668517624.9064195615153089880. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.182051644s
	I0501 04:16:55.477478    4352 command_runner.go:130] > [INFO] 10.244.0.4:36718 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000271601s
	I0501 04:16:55.477478    4352 command_runner.go:130] > [INFO] 10.244.0.4:43708 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.179550625s
	I0501 04:16:55.477478    4352 command_runner.go:130] > [INFO] 10.244.1.2:58483 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144401s
	I0501 04:16:55.477478    4352 command_runner.go:130] > [INFO] 10.244.1.2:60628 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.0000807s
	I0501 04:16:55.477478    4352 command_runner.go:130] > [INFO] 10.244.0.4:37023 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.037009067s
	I0501 04:16:55.477478    4352 command_runner.go:130] > [INFO] 10.244.0.4:35134 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000257602s
	I0501 04:16:55.477478    4352 command_runner.go:130] > [INFO] 10.244.0.4:42831 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000330103s
	I0501 04:16:55.477478    4352 command_runner.go:130] > [INFO] 10.244.0.4:35030 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000223102s
	I0501 04:16:55.477478    4352 command_runner.go:130] > [INFO] 10.244.1.2:54088 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000207601s
	I0501 04:16:55.477478    4352 command_runner.go:130] > [INFO] 10.244.1.2:39978 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000230801s
	I0501 04:16:55.477478    4352 command_runner.go:130] > [INFO] 10.244.1.2:55944 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000162801s
	I0501 04:16:55.477478    4352 command_runner.go:130] > [INFO] 10.244.1.2:53350 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000088901s
	I0501 04:16:55.477478    4352 command_runner.go:130] > [INFO] 10.244.0.4:33705 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000251702s
	I0501 04:16:55.477478    4352 command_runner.go:130] > [INFO] 10.244.0.4:58457 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000202201s
	I0501 04:16:55.477478    4352 command_runner.go:130] > [INFO] 10.244.1.2:55547 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117201s
	I0501 04:16:55.477478    4352 command_runner.go:130] > [INFO] 10.244.1.2:52015 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000146501s
	I0501 04:16:55.477478    4352 command_runner.go:130] > [INFO] 10.244.0.4:59703 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000247901s
	I0501 04:16:55.477478    4352 command_runner.go:130] > [INFO] 10.244.0.4:43545 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000196701s
	I0501 04:16:55.477478    4352 command_runner.go:130] > [INFO] 10.244.1.2:36180 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000726s
	I0501 04:16:55.477478    4352 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0501 04:16:55.477478    4352 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
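
Note on the CoreDNS output above: each query line from CoreDNS's log plugin records the client address and port, the query type and name, the response code (NOERROR, NXDOMAIN), the DNS header flags (qr, aa, rd, ra), the response size in bytes, and the latency. The final two lines show the container receiving SIGTERM and entering health-plugin lameduck mode, i.e. a normal shutdown, not a crash. Below is a minimal Go sketch that parses one such line; the field layout is inferred from the samples above, and the regular expression is illustrative, not CoreDNS's own parser.

    package main

    import (
    	"fmt"
    	"regexp"
    	"time"
    )

    // Matches lines like the ones emitted by CoreDNS's log plugin above, e.g.
    // [INFO] 10.244.0.4:43708 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.179550625s
    var queryLine = regexp.MustCompile(`^\[INFO\] (\S+) - \d+ "(\S+) IN (\S+) .*" (\S+) \S+ \d+ (\S+)$`)

    func main() {
    	line := `[INFO] 10.244.0.4:43708 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.179550625s`
    	m := queryLine.FindStringSubmatch(line)
    	if m == nil {
    		fmt.Println("no match")
    		return
    	}
    	latency, _ := time.ParseDuration(m[5])
    	fmt.Printf("client=%s qtype=%s name=%s rcode=%s latency=%s\n", m[1], m[2], m[3], m[4], latency)
    }

Grouping parsed lines by rcode and latency is a quick way to spot in-cluster resolution failures when triaging a dump like this one.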
	I0501 04:16:55.479510    4352 logs.go:123] Gathering logs for kube-scheduler [eaf69fce5ee3] ...
	I0501 04:16:55.479541    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaf69fce5ee3"
	I0501 04:16:55.510830    4352 command_runner.go:130] ! I0501 04:15:39.300694       1 serving.go:380] Generated self-signed cert in-memory
	I0501 04:16:55.511324    4352 command_runner.go:130] ! W0501 04:15:42.419811       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0501 04:16:55.511401    4352 command_runner.go:130] ! W0501 04:15:42.419988       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0501 04:16:55.511401    4352 command_runner.go:130] ! W0501 04:15:42.420417       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0501 04:16:55.511401    4352 command_runner.go:130] ! W0501 04:15:42.420580       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0501 04:16:55.511401    4352 command_runner.go:130] ! I0501 04:15:42.513199       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0501 04:16:55.511401    4352 command_runner.go:130] ! I0501 04:15:42.513509       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:16:55.511401    4352 command_runner.go:130] ! I0501 04:15:42.517575       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0501 04:16:55.511401    4352 command_runner.go:130] ! I0501 04:15:42.517756       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0501 04:16:55.511401    4352 command_runner.go:130] ! I0501 04:15:42.519360       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0501 04:16:55.511401    4352 command_runner.go:130] ! I0501 04:15:42.519606       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0501 04:16:55.511401    4352 command_runner.go:130] ! I0501 04:15:42.619527       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
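
Note on the scheduler warnings above: the "Unable to get configmap/extension-apiserver-authentication" and "Error looking up in-cluster authentication configuration" lines are emitted while the scheduler starts before its RBAC grants are visible, and here they resolve once the informer caches sync (the final line at 04:15:42.619527). When scanning a post-mortem like this, it helps to separate klog warnings (W) from hard errors (E). A minimal sketch of that, assuming the test harness's "command_runner" prefix has already been stripped so each line starts with the klog severity letter:

    package main

    import (
    	"bufio"
    	"fmt"
    	"strings"
    )

    // severity maps klog's single-letter prefix (I=info, W=warning,
    // E=error, F=fatal) to a coarse label.
    func severity(line string) string {
    	if line == "" {
    		return "other"
    	}
    	switch line[0] {
    	case 'E', 'F':
    		return "error"
    	case 'W':
    		return "warning"
    	case 'I':
    		return "info"
    	}
    	return "other"
    }

    func main() {
    	// Sample lines taken from the scheduler block above.
    	sample := `I0501 04:15:39.300694       1 serving.go:380] Generated self-signed cert in-memory
    W0501 04:15:42.419811       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.
    I0501 04:15:42.619527       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file`
    	counts := map[string]int{}
    	sc := bufio.NewScanner(strings.NewReader(sample))
    	for sc.Scan() {
    		counts[severity(strings.TrimSpace(sc.Text()))]++
    	}
    	fmt.Println(counts) // map[info:2 warning:1]
    }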
	I0501 04:16:55.514328    4352 logs.go:123] Gathering logs for kube-scheduler [06f1f84bfde1] ...
	I0501 04:16:55.514328    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f1f84bfde1"
	I0501 04:16:55.550601    4352 command_runner.go:130] ! I0501 03:52:10.476758       1 serving.go:380] Generated self-signed cert in-memory
	I0501 04:16:55.550678    4352 command_runner.go:130] ! W0501 03:52:12.175400       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0501 04:16:55.550678    4352 command_runner.go:130] ! W0501 03:52:12.175551       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0501 04:16:55.550678    4352 command_runner.go:130] ! W0501 03:52:12.175587       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0501 04:16:55.550678    4352 command_runner.go:130] ! W0501 03:52:12.175678       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0501 04:16:55.550678    4352 command_runner.go:130] ! I0501 03:52:12.246151       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0501 04:16:55.550678    4352 command_runner.go:130] ! I0501 03:52:12.246312       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:16:55.550678    4352 command_runner.go:130] ! I0501 03:52:12.251800       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0501 04:16:55.550678    4352 command_runner.go:130] ! I0501 03:52:12.252170       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0501 04:16:55.550678    4352 command_runner.go:130] ! I0501 03:52:12.253709       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0501 04:16:55.550678    4352 command_runner.go:130] ! I0501 03:52:12.254160       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0501 04:16:55.550678    4352 command_runner.go:130] ! W0501 03:52:12.257352       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0501 04:16:55.550678    4352 command_runner.go:130] ! E0501 03:52:12.257411       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0501 04:16:55.550678    4352 command_runner.go:130] ! W0501 03:52:12.261549       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0501 04:16:55.550678    4352 command_runner.go:130] ! E0501 03:52:12.261670       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0501 04:16:55.550678    4352 command_runner.go:130] ! W0501 03:52:12.263856       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0501 04:16:55.550678    4352 command_runner.go:130] ! E0501 03:52:12.263906       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0501 04:16:55.550678    4352 command_runner.go:130] ! W0501 03:52:12.270038       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:55.550678    4352 command_runner.go:130] ! E0501 03:52:12.270597       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:55.550678    4352 command_runner.go:130] ! W0501 03:52:12.271080       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:55.550678    4352 command_runner.go:130] ! E0501 03:52:12.271309       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:55.550678    4352 command_runner.go:130] ! W0501 03:52:12.271808       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0501 04:16:55.551240    4352 command_runner.go:130] ! E0501 03:52:12.272098       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0501 04:16:55.551291    4352 command_runner.go:130] ! W0501 03:52:12.272396       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0501 04:16:55.551291    4352 command_runner.go:130] ! W0501 03:52:12.273177       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0501 04:16:55.551356    4352 command_runner.go:130] ! E0501 03:52:12.273396       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0501 04:16:55.551393    4352 command_runner.go:130] ! W0501 03:52:12.273765       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0501 04:16:55.551455    4352 command_runner.go:130] ! E0501 03:52:12.273964       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0501 04:16:55.551455    4352 command_runner.go:130] ! W0501 03:52:12.274273       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0501 04:16:55.551455    4352 command_runner.go:130] ! E0501 03:52:12.274741       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0501 04:16:55.551455    4352 command_runner.go:130] ! E0501 03:52:12.275083       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0501 04:16:55.551455    4352 command_runner.go:130] ! W0501 03:52:12.275448       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:55.551455    4352 command_runner.go:130] ! E0501 03:52:12.275752       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:55.551455    4352 command_runner.go:130] ! W0501 03:52:12.276841       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0501 04:16:55.551455    4352 command_runner.go:130] ! E0501 03:52:12.278071       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0501 04:16:55.551455    4352 command_runner.go:130] ! W0501 03:52:12.277438       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0501 04:16:55.551455    4352 command_runner.go:130] ! E0501 03:52:12.278555       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0501 04:16:55.551455    4352 command_runner.go:130] ! W0501 03:52:12.279824       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:55.551455    4352 command_runner.go:130] ! E0501 03:52:12.280326       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:55.551455    4352 command_runner.go:130] ! W0501 03:52:12.280272       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0501 04:16:55.551455    4352 command_runner.go:130] ! E0501 03:52:12.280893       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0501 04:16:55.551917    4352 command_runner.go:130] ! W0501 03:52:13.100723       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:55.551969    4352 command_runner.go:130] ! E0501 03:52:13.101238       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:55.551969    4352 command_runner.go:130] ! W0501 03:52:13.102451       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0501 04:16:55.551969    4352 command_runner.go:130] ! E0501 03:52:13.102804       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0501 04:16:55.552081    4352 command_runner.go:130] ! W0501 03:52:13.188414       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0501 04:16:55.552081    4352 command_runner.go:130] ! E0501 03:52:13.188662       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0501 04:16:55.552139    4352 command_runner.go:130] ! W0501 03:52:13.194299       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0501 04:16:55.552139    4352 command_runner.go:130] ! E0501 03:52:13.194526       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0501 04:16:55.552202    4352 command_runner.go:130] ! W0501 03:52:13.234721       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0501 04:16:55.552238    4352 command_runner.go:130] ! E0501 03:52:13.235310       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0501 04:16:55.552238    4352 command_runner.go:130] ! W0501 03:52:13.292208       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0501 04:16:55.552238    4352 command_runner.go:130] ! E0501 03:52:13.292830       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0501 04:16:55.552332    4352 command_runner.go:130] ! W0501 03:52:13.389881       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0501 04:16:55.552370    4352 command_runner.go:130] ! E0501 03:52:13.390057       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0501 04:16:55.552370    4352 command_runner.go:130] ! W0501 03:52:13.433548       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0501 04:16:55.552419    4352 command_runner.go:130] ! E0501 03:52:13.433622       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0501 04:16:55.552456    4352 command_runner.go:130] ! W0501 03:52:13.511617       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:55.552508    4352 command_runner.go:130] ! E0501 03:52:13.511761       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:55.552508    4352 command_runner.go:130] ! W0501 03:52:13.522760       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:55.552542    4352 command_runner.go:130] ! E0501 03:52:13.522812       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:55.552579    4352 command_runner.go:130] ! W0501 03:52:13.723200       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0501 04:16:55.552613    4352 command_runner.go:130] ! E0501 03:52:13.723365       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0501 04:16:55.552668    4352 command_runner.go:130] ! W0501 03:52:13.767195       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0501 04:16:55.552710    4352 command_runner.go:130] ! E0501 03:52:13.767262       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0501 04:16:55.552763    4352 command_runner.go:130] ! W0501 03:52:13.799936       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:55.552805    4352 command_runner.go:130] ! E0501 03:52:13.799967       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:55.552805    4352 command_runner.go:130] ! W0501 03:52:13.840187       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0501 04:16:55.552873    4352 command_runner.go:130] ! E0501 03:52:13.840304       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0501 04:16:55.552909    4352 command_runner.go:130] ! W0501 03:52:13.853401       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0501 04:16:55.552909    4352 command_runner.go:130] ! E0501 03:52:13.853454       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0501 04:16:55.552951    4352 command_runner.go:130] ! I0501 03:52:16.553388       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0501 04:16:55.552951    4352 command_runner.go:130] ! E0501 04:13:09.401188       1 run.go:74] "command failed" err="finished without leader elect"
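
Note on the block above: this is the previous kube-scheduler container (06f1f84bfde1). The burst of "forbidden" list/watch failures at 03:52:12-13 is the usual startup race before the scheduler's RBAC roles are served, and it clears when caches sync at 03:52:16. The significant event is the final line at 04:13:09: the process exited with "finished without leader elect", i.e. its leader-election lease could not be renewed, which is consistent with an apiserver outage or restart and explains why a replacement scheduler container (eaf69fce5ee3, above) exists. Components built on client-go leader election typically exit from the OnStoppedLeading callback; the following is a generic, self-contained sketch of that pattern, not the scheduler's actual code, and the lease name, namespace, and identity are made up for illustration.

    package main

    import (
    	"context"
    	"log"
    	"os"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    	"k8s.io/client-go/tools/leaderelection"
    	"k8s.io/client-go/tools/leaderelection/resourcelock"
    )

    func main() {
    	// Falls back to in-cluster config when no kubeconfig path is given.
    	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
    	if err != nil {
    		log.Fatal(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	id, _ := os.Hostname()
    	lock := &resourcelock.LeaseLock{
    		// Lease name and namespace are hypothetical.
    		LeaseMeta:  metav1.ObjectMeta{Name: "demo-leader-lock", Namespace: "kube-system"},
    		Client:     client.CoordinationV1(),
    		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
    	}

    	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
    		Lock:          lock,
    		LeaseDuration: 15 * time.Second, // how long an acquired lease is valid
    		RenewDeadline: 10 * time.Second, // stop leading if renewal takes longer
    		RetryPeriod:   2 * time.Second,
    		Callbacks: leaderelection.LeaderCallbacks{
    			OnStartedLeading: func(ctx context.Context) {
    				log.Println("acquired lease; running controller loop")
    				<-ctx.Done()
    			},
    			OnStoppedLeading: func() {
    				// Exiting here is what produces a terminal "command failed"
    				// style line, as in the log above, when the lease is lost.
    				log.Fatal("lost leadership; exiting")
    			},
    		},
    	})
    }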
	I0501 04:16:55.565171    4352 logs.go:123] Gathering logs for kube-proxy [3efcc92f817e] ...
	I0501 04:16:55.565171    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3efcc92f817e"
	I0501 04:16:55.596340    4352 command_runner.go:130] ! I0501 04:15:45.132138       1 server_linux.go:69] "Using iptables proxy"
	I0501 04:16:55.596430    4352 command_runner.go:130] ! I0501 04:15:45.231202       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.28.209.199"]
	I0501 04:16:55.596688    4352 command_runner.go:130] ! I0501 04:15:45.502838       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0501 04:16:55.596719    4352 command_runner.go:130] ! I0501 04:15:45.506945       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0501 04:16:55.596719    4352 command_runner.go:130] ! I0501 04:15:45.506980       1 server_linux.go:165] "Using iptables Proxier"
	I0501 04:16:55.596719    4352 command_runner.go:130] ! I0501 04:15:45.527138       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0501 04:16:55.596719    4352 command_runner.go:130] ! I0501 04:15:45.530735       1 server.go:872] "Version info" version="v1.30.0"
	I0501 04:16:55.596719    4352 command_runner.go:130] ! I0501 04:15:45.530796       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:16:55.596719    4352 command_runner.go:130] ! I0501 04:15:45.533247       1 config.go:192] "Starting service config controller"
	I0501 04:16:55.596719    4352 command_runner.go:130] ! I0501 04:15:45.547850       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0501 04:16:55.596719    4352 command_runner.go:130] ! I0501 04:15:45.533551       1 config.go:101] "Starting endpoint slice config controller"
	I0501 04:16:55.596719    4352 command_runner.go:130] ! I0501 04:15:45.549105       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0501 04:16:55.596719    4352 command_runner.go:130] ! I0501 04:15:45.550003       1 config.go:319] "Starting node config controller"
	I0501 04:16:55.596719    4352 command_runner.go:130] ! I0501 04:15:45.550016       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0501 04:16:55.596719    4352 command_runner.go:130] ! I0501 04:15:45.650245       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0501 04:16:55.596719    4352 command_runner.go:130] ! I0501 04:15:45.650488       1 shared_informer.go:320] Caches are synced for node config
	I0501 04:16:55.596719    4352 command_runner.go:130] ! I0501 04:15:45.650691       1 shared_informer.go:320] Caches are synced for service config
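
Note on the kube-proxy block above: the restarted proxy comes up cleanly in iptables mode, single-stack IPv4 (the guest reports no IPv6 iptables support), retrieves the node IP 172.28.209.199, and all three of its config controllers sync their caches. Each of these per-container dumps is produced the same way, visible in the ssh_runner lines: running "docker logs --tail 400 <container-id>" inside the VM. A local equivalent, as a sketch; the container ID is just the one from this block and must be replaced with a real ID on your host:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	// Same command the harness runs over SSH inside the minikube VM:
    	//   /bin/bash -c "docker logs --tail 400 <container-id>"
    	id := "3efcc92f817e" // kube-proxy container from the block above
    	if len(os.Args) > 1 {
    		id = os.Args[1]
    	}
    	out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    	if err != nil {
    		fmt.Fprintf(os.Stderr, "docker logs failed: %v\n", err)
    	}
    	os.Stdout.Write(out)
    }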
	I0501 04:16:55.599371    4352 logs.go:123] Gathering logs for kube-controller-manager [4b62556f40be] ...
	I0501 04:16:55.599450    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b62556f40be"
	I0501 04:16:55.632589    4352 command_runner.go:130] ! I0501 03:52:09.899238       1 serving.go:380] Generated self-signed cert in-memory
	I0501 04:16:55.632688    4352 command_runner.go:130] ! I0501 03:52:10.399398       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0501 04:16:55.632688    4352 command_runner.go:130] ! I0501 03:52:10.399463       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:16:55.632688    4352 command_runner.go:130] ! I0501 03:52:10.408364       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0501 04:16:55.632688    4352 command_runner.go:130] ! I0501 03:52:10.409326       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0501 04:16:55.632688    4352 command_runner.go:130] ! I0501 03:52:10.409600       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0501 04:16:55.632688    4352 command_runner.go:130] ! I0501 03:52:10.409803       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0501 04:16:55.632688    4352 command_runner.go:130] ! I0501 03:52:15.177592       1 controllermanager.go:759] "Started controller" controller="serviceaccount-token-controller"
	I0501 04:16:55.632688    4352 command_runner.go:130] ! I0501 03:52:15.177638       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0501 04:16:55.632688    4352 command_runner.go:130] ! I0501 03:52:15.223373       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0501 04:16:55.632688    4352 command_runner.go:130] ! I0501 03:52:15.223482       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0501 04:16:55.632688    4352 command_runner.go:130] ! I0501 03:52:15.224504       1 controllermanager.go:759] "Started controller" controller="endpointslice-controller"
	I0501 04:16:55.632688    4352 command_runner.go:130] ! I0501 03:52:15.255847       1 controllermanager.go:759] "Started controller" controller="endpointslice-mirroring-controller"
	I0501 04:16:55.632688    4352 command_runner.go:130] ! I0501 03:52:15.268264       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0501 04:16:55.633373    4352 command_runner.go:130] ! I0501 03:52:15.268388       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0501 04:16:55.633437    4352 command_runner.go:130] ! I0501 03:52:15.282022       1 shared_informer.go:320] Caches are synced for tokens
	I0501 04:16:55.633437    4352 command_runner.go:130] ! I0501 03:52:15.318646       1 controllermanager.go:759] "Started controller" controller="ttl-controller"
	I0501 04:16:55.633437    4352 command_runner.go:130] ! I0501 03:52:15.318861       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0501 04:16:55.633437    4352 command_runner.go:130] ! I0501 03:52:15.319086       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0501 04:16:55.633437    4352 command_runner.go:130] ! I0501 03:52:15.319104       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0501 04:16:55.633437    4352 command_runner.go:130] ! I0501 03:52:15.319092       1 controllermanager.go:737] "Warning: skipping controller" controller="node-route-controller"
	I0501 04:16:55.633437    4352 command_runner.go:130] ! I0501 03:52:15.340327       1 controllermanager.go:759] "Started controller" controller="ttl-after-finished-controller"
	I0501 04:16:55.633437    4352 command_runner.go:130] ! I0501 03:52:15.340404       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0501 04:16:55.633437    4352 command_runner.go:130] ! I0501 03:52:15.340939       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0501 04:16:55.633437    4352 command_runner.go:130] ! I0501 03:52:15.388809       1 controllermanager.go:759] "Started controller" controller="namespace-controller"
	I0501 04:16:55.633437    4352 command_runner.go:130] ! I0501 03:52:15.389274       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0501 04:16:55.633437    4352 command_runner.go:130] ! I0501 03:52:15.389544       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0501 04:16:55.633437    4352 command_runner.go:130] ! I0501 03:52:15.409254       1 controllermanager.go:759] "Started controller" controller="disruption-controller"
	I0501 04:16:55.633437    4352 command_runner.go:130] ! I0501 03:52:15.409799       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0501 04:16:55.633437    4352 command_runner.go:130] ! I0501 03:52:15.410052       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0501 04:16:55.633437    4352 command_runner.go:130] ! I0501 03:52:15.410231       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0501 04:16:55.633437    4352 command_runner.go:130] ! I0501 03:52:15.430420       1 controllermanager.go:759] "Started controller" controller="token-cleaner-controller"
	I0501 04:16:55.633437    4352 command_runner.go:130] ! I0501 03:52:15.432551       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0501 04:16:55.633437    4352 command_runner.go:130] ! I0501 03:52:15.432922       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0501 04:16:55.633437    4352 command_runner.go:130] ! I0501 03:52:15.433117       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0501 04:16:55.633437    4352 command_runner.go:130] ! E0501 03:52:15.460293       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0501 04:16:55.633437    4352 command_runner.go:130] ! I0501 03:52:15.460569       1 controllermanager.go:737] "Warning: skipping controller" controller="service-lb-controller"
	I0501 04:16:55.634091    4352 command_runner.go:130] ! I0501 03:52:15.483810       1 controllermanager.go:759] "Started controller" controller="persistentvolume-binder-controller"
	I0501 04:16:55.634285    4352 command_runner.go:130] ! I0501 03:52:15.484552       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0501 04:16:55.634368    4352 command_runner.go:130] ! I0501 03:52:15.487659       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0501 04:16:55.634477    4352 command_runner.go:130] ! I0501 03:52:15.507112       1 controllermanager.go:759] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0501 04:16:55.634681    4352 command_runner.go:130] ! I0501 03:52:15.507311       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0501 04:16:55.634812    4352 command_runner.go:130] ! I0501 03:52:15.507323       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0501 04:16:55.634901    4352 command_runner.go:130] ! I0501 03:52:15.547225       1 controllermanager.go:759] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0501 04:16:55.634969    4352 command_runner.go:130] ! I0501 03:52:15.547300       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0501 04:16:55.634969    4352 command_runner.go:130] ! I0501 03:52:15.547313       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0501 04:16:55.634969    4352 command_runner.go:130] ! I0501 03:52:15.547413       1 controllermanager.go:737] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0501 04:16:55.634969    4352 command_runner.go:130] ! I0501 03:52:15.652954       1 controllermanager.go:759] "Started controller" controller="pod-garbage-collector-controller"
	I0501 04:16:55.634969    4352 command_runner.go:130] ! I0501 03:52:15.653222       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0501 04:16:55.634969    4352 command_runner.go:130] ! I0501 03:52:15.653240       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0501 04:16:55.634969    4352 command_runner.go:130] ! I0501 03:52:15.940199       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0501 04:16:55.634969    4352 command_runner.go:130] ! I0501 03:52:15.940364       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0501 04:16:55.634969    4352 command_runner.go:130] ! I0501 03:52:15.940714       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0501 04:16:55.634969    4352 command_runner.go:130] ! I0501 03:52:15.940771       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0501 04:16:55.634969    4352 command_runner.go:130] ! I0501 03:52:15.940787       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0501 04:16:55.634969    4352 command_runner.go:130] ! I0501 03:52:15.941029       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0501 04:16:55.635497    4352 command_runner.go:130] ! I0501 03:52:15.941118       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0501 04:16:55.635617    4352 command_runner.go:130] ! I0501 03:52:15.941275       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0501 04:16:55.635781    4352 command_runner.go:130] ! I0501 03:52:15.941300       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0501 04:16:55.635781    4352 command_runner.go:130] ! I0501 03:52:15.941320       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0501 04:16:55.635781    4352 command_runner.go:130] ! I0501 03:52:15.941344       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0501 04:16:55.635781    4352 command_runner.go:130] ! I0501 03:52:15.941368       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0501 04:16:55.635781    4352 command_runner.go:130] ! I0501 03:52:15.941386       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0501 04:16:55.635781    4352 command_runner.go:130] ! I0501 03:52:15.941421       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0501 04:16:55.635781    4352 command_runner.go:130] ! I0501 03:52:15.941561       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0501 04:16:55.635781    4352 command_runner.go:130] ! I0501 03:52:15.941606       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0501 04:16:55.635781    4352 command_runner.go:130] ! I0501 03:52:15.941627       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0501 04:16:55.635781    4352 command_runner.go:130] ! I0501 03:52:15.941813       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0501 04:16:55.635781    4352 command_runner.go:130] ! I0501 03:52:15.942150       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0501 04:16:55.635781    4352 command_runner.go:130] ! I0501 03:52:15.942270       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0501 04:16:55.636433    4352 command_runner.go:130] ! I0501 03:52:15.942319       1 controllermanager.go:759] "Started controller" controller="resourcequota-controller"
	I0501 04:16:55.636549    4352 command_runner.go:130] ! I0501 03:52:15.942400       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0501 04:16:55.636695    4352 command_runner.go:130] ! I0501 03:52:15.942767       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0501 04:16:55.636695    4352 command_runner.go:130] ! I0501 03:52:15.942791       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0501 04:16:55.636695    4352 command_runner.go:130] ! I0501 03:52:16.183841       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0501 04:16:55.636695    4352 command_runner.go:130] ! I0501 03:52:16.184178       1 controllermanager.go:759] "Started controller" controller="garbage-collector-controller"
	I0501 04:16:55.636695    4352 command_runner.go:130] ! I0501 03:52:16.187151       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0501 04:16:55.636695    4352 command_runner.go:130] ! I0501 03:52:16.187185       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0501 04:16:55.636695    4352 command_runner.go:130] ! I0501 03:52:16.436175       1 controllermanager.go:759] "Started controller" controller="replicaset-controller"
	I0501 04:16:55.636695    4352 command_runner.go:130] ! I0501 03:52:16.436331       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0501 04:16:55.636695    4352 command_runner.go:130] ! I0501 03:52:16.436346       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0501 04:16:55.636695    4352 command_runner.go:130] ! I0501 03:52:16.586198       1 controllermanager.go:759] "Started controller" controller="statefulset-controller"
	I0501 04:16:55.636695    4352 command_runner.go:130] ! I0501 03:52:16.586602       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0501 04:16:55.636695    4352 command_runner.go:130] ! I0501 03:52:16.586642       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0501 04:16:55.636695    4352 command_runner.go:130] ! I0501 03:52:16.736534       1 controllermanager.go:759] "Started controller" controller="clusterrole-aggregation-controller"
	I0501 04:16:55.636695    4352 command_runner.go:130] ! I0501 03:52:16.736573       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0501 04:16:55.636695    4352 command_runner.go:130] ! I0501 03:52:16.736609       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0501 04:16:55.636695    4352 command_runner.go:130] ! I0501 03:52:16.736694       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0501 04:16:55.636695    4352 command_runner.go:130] ! I0501 03:52:16.736706       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0501 04:16:55.636695    4352 command_runner.go:130] ! I0501 03:52:16.891482       1 controllermanager.go:759] "Started controller" controller="serviceaccount-controller"
	I0501 04:16:55.636695    4352 command_runner.go:130] ! I0501 03:52:16.891648       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0501 04:16:55.636695    4352 command_runner.go:130] ! I0501 03:52:16.891663       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0501 04:16:55.636695    4352 command_runner.go:130] ! I0501 03:52:17.047956       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0501 04:16:55.636695    4352 command_runner.go:130] ! I0501 03:52:17.050852       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0501 04:16:55.636695    4352 command_runner.go:130] ! I0501 03:52:17.050877       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0501 04:16:55.636695    4352 command_runner.go:130] ! I0501 03:52:17.050942       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0501 04:16:55.636695    4352 command_runner.go:130] ! I0501 03:52:17.050952       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0501 04:16:55.637219    4352 command_runner.go:130] ! I0501 03:52:17.051046       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0501 04:16:55.637219    4352 command_runner.go:130] ! I0501 03:52:17.051073       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0501 04:16:55.637299    4352 command_runner.go:130] ! I0501 03:52:17.051107       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0501 04:16:55.637299    4352 command_runner.go:130] ! I0501 03:52:17.051130       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0501 04:16:55.637299    4352 command_runner.go:130] ! I0501 03:52:17.051145       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0501 04:16:55.637299    4352 command_runner.go:130] ! I0501 03:52:17.051309       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0501 04:16:55.637299    4352 command_runner.go:130] ! I0501 03:52:17.051548       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0501 04:16:55.637299    4352 command_runner.go:130] ! I0501 03:52:17.051654       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0501 04:16:55.637299    4352 command_runner.go:130] ! I0501 03:52:17.186932       1 controllermanager.go:759] "Started controller" controller="bootstrap-signer-controller"
	I0501 04:16:55.637299    4352 command_runner.go:130] ! I0501 03:52:17.187092       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0501 04:16:55.637299    4352 command_runner.go:130] ! I0501 03:52:27.350786       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0501 04:16:55.637299    4352 command_runner.go:130] ! I0501 03:52:27.351166       1 controllermanager.go:759] "Started controller" controller="node-ipam-controller"
	I0501 04:16:55.637299    4352 command_runner.go:130] ! I0501 03:52:27.352026       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0501 04:16:55.637299    4352 command_runner.go:130] ! I0501 03:52:27.353715       1 shared_informer.go:313] Waiting for caches to sync for node
	I0501 04:16:55.637299    4352 command_runner.go:130] ! I0501 03:52:27.368884       1 controllermanager.go:759] "Started controller" controller="ephemeral-volume-controller"
	I0501 04:16:55.637299    4352 command_runner.go:130] ! I0501 03:52:27.369241       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0501 04:16:55.637299    4352 command_runner.go:130] ! I0501 03:52:27.369602       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0501 04:16:55.637299    4352 command_runner.go:130] ! I0501 03:52:27.424182       1 controllermanager.go:759] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0501 04:16:55.637299    4352 command_runner.go:130] ! I0501 03:52:27.424472       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0501 04:16:55.637299    4352 command_runner.go:130] ! I0501 03:52:27.436663       1 controllermanager.go:759] "Started controller" controller="replicationcontroller-controller"
	I0501 04:16:55.637299    4352 command_runner.go:130] ! I0501 03:52:27.437080       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0501 04:16:55.637299    4352 command_runner.go:130] ! I0501 03:52:27.437177       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0501 04:16:55.637299    4352 command_runner.go:130] ! I0501 03:52:27.448635       1 controllermanager.go:759] "Started controller" controller="job-controller"
	I0501 04:16:55.637299    4352 command_runner.go:130] ! I0501 03:52:27.449170       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0501 04:16:55.637299    4352 command_runner.go:130] ! I0501 03:52:27.449409       1 shared_informer.go:313] Waiting for caches to sync for job
	I0501 04:16:55.637299    4352 command_runner.go:130] ! I0501 03:52:27.475565       1 controllermanager.go:759] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0501 04:16:55.637299    4352 command_runner.go:130] ! I0501 03:52:27.476051       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0501 04:16:55.637869    4352 command_runner.go:130] ! I0501 03:52:27.476166       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0501 04:16:55.637913    4352 command_runner.go:130] ! I0501 03:52:27.479486       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0501 04:16:55.637913    4352 command_runner.go:130] ! I0501 03:52:27.479596       1 controllermanager.go:759] "Started controller" controller="node-lifecycle-controller"
	I0501 04:16:55.637913    4352 command_runner.go:130] ! I0501 03:52:27.479975       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0501 04:16:55.637913    4352 command_runner.go:130] ! I0501 03:52:27.480750       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0501 04:16:55.637913    4352 command_runner.go:130] ! I0501 03:52:27.480823       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0501 04:16:55.637913    4352 command_runner.go:130] ! E0501 03:52:27.482546       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0501 04:16:55.637913    4352 command_runner.go:130] ! I0501 03:52:27.483210       1 controllermanager.go:737] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0501 04:16:55.637913    4352 command_runner.go:130] ! I0501 03:52:27.495640       1 controllermanager.go:759] "Started controller" controller="persistentvolume-expander-controller"
	I0501 04:16:55.637913    4352 command_runner.go:130] ! I0501 03:52:27.495973       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0501 04:16:55.638542    4352 command_runner.go:130] ! I0501 03:52:27.496212       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0501 04:16:55.638662    4352 command_runner.go:130] ! I0501 03:52:27.512223       1 controllermanager.go:759] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0501 04:16:55.638662    4352 command_runner.go:130] ! I0501 03:52:27.512895       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0501 04:16:55.638761    4352 command_runner.go:130] ! I0501 03:52:27.513075       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0501 04:16:55.638761    4352 command_runner.go:130] ! I0501 03:52:27.514982       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0501 04:16:55.638761    4352 command_runner.go:130] ! I0501 03:52:27.515311       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0501 04:16:55.638761    4352 command_runner.go:130] ! I0501 03:52:27.515499       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0501 04:16:55.638761    4352 command_runner.go:130] ! I0501 03:52:27.526940       1 controllermanager.go:759] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0501 04:16:55.638761    4352 command_runner.go:130] ! I0501 03:52:27.527318       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0501 04:16:55.638761    4352 command_runner.go:130] ! I0501 03:52:27.527351       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0501 04:16:55.638761    4352 command_runner.go:130] ! I0501 03:52:27.647646       1 controllermanager.go:759] "Started controller" controller="cronjob-controller"
	I0501 04:16:55.638761    4352 command_runner.go:130] ! I0501 03:52:27.647752       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0501 04:16:55.638761    4352 command_runner.go:130] ! I0501 03:52:27.647825       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0501 04:16:55.638761    4352 command_runner.go:130] ! I0501 03:52:27.647836       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0501 04:16:55.638761    4352 command_runner.go:130] ! I0501 03:52:27.692531       1 controllermanager.go:759] "Started controller" controller="taint-eviction-controller"
	I0501 04:16:55.638761    4352 command_runner.go:130] ! I0501 03:52:27.692762       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0501 04:16:55.638761    4352 command_runner.go:130] ! I0501 03:52:27.693221       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0501 04:16:55.638761    4352 command_runner.go:130] ! I0501 03:52:27.693310       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0501 04:16:55.638761    4352 command_runner.go:130] ! I0501 03:52:27.846904       1 controllermanager.go:759] "Started controller" controller="endpoints-controller"
	I0501 04:16:55.638761    4352 command_runner.go:130] ! I0501 03:52:27.847065       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0501 04:16:55.638761    4352 command_runner.go:130] ! I0501 03:52:27.847083       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0501 04:16:55.638761    4352 command_runner.go:130] ! I0501 03:52:27.996304       1 controllermanager.go:759] "Started controller" controller="daemonset-controller"
	I0501 04:16:55.638761    4352 command_runner.go:130] ! I0501 03:52:27.996661       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0501 04:16:55.638761    4352 command_runner.go:130] ! I0501 03:52:27.996720       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0501 04:16:55.638761    4352 command_runner.go:130] ! I0501 03:52:28.149439       1 controllermanager.go:759] "Started controller" controller="deployment-controller"
	I0501 04:16:55.638761    4352 command_runner.go:130] ! I0501 03:52:28.149690       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0501 04:16:55.638761    4352 command_runner.go:130] ! I0501 03:52:28.149796       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0501 04:16:55.638761    4352 command_runner.go:130] ! I0501 03:52:28.194448       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0501 04:16:55.638761    4352 command_runner.go:130] ! I0501 03:52:28.194582       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0501 04:16:55.638761    4352 command_runner.go:130] ! I0501 03:52:28.346263       1 controllermanager.go:759] "Started controller" controller="persistentvolume-protection-controller"
	I0501 04:16:55.638761    4352 command_runner.go:130] ! I0501 03:52:28.351074       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0501 04:16:55.639405    4352 command_runner.go:130] ! I0501 03:52:28.351267       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0501 04:16:55.639405    4352 command_runner.go:130] ! I0501 03:52:28.389327       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0501 04:16:55.639405    4352 command_runner.go:130] ! I0501 03:52:28.399508       1 shared_informer.go:320] Caches are synced for expand
	I0501 04:16:55.639405    4352 command_runner.go:130] ! I0501 03:52:28.401911       1 shared_informer.go:320] Caches are synced for namespace
	I0501 04:16:55.639599    4352 command_runner.go:130] ! I0501 03:52:28.402772       1 shared_informer.go:320] Caches are synced for service account
	I0501 04:16:55.639672    4352 command_runner.go:130] ! I0501 03:52:28.414043       1 shared_informer.go:320] Caches are synced for crt configmap
	I0501 04:16:55.639969    4352 command_runner.go:130] ! I0501 03:52:28.415874       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0501 04:16:55.640124    4352 command_runner.go:130] ! I0501 03:52:28.427291       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0501 04:16:55.640201    4352 command_runner.go:130] ! I0501 03:52:28.436570       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.437221       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.437315       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.440984       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.447483       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.447500       1 shared_informer.go:320] Caches are synced for endpoint
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.448218       1 shared_informer.go:320] Caches are synced for cronjob
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.451115       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.451167       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.451224       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.451346       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.451726       1 shared_informer.go:320] Caches are synced for deployment
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.451933       1 shared_informer.go:320] Caches are synced for job
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.451734       1 shared_informer.go:320] Caches are synced for PV protection
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.470928       1 shared_informer.go:320] Caches are synced for ephemeral
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.476835       1 shared_informer.go:320] Caches are synced for HPA
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.486851       1 shared_informer.go:320] Caches are synced for stateful set
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.487294       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.507418       1 shared_informer.go:320] Caches are synced for PVC protection
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.510921       1 shared_informer.go:320] Caches are synced for disruption
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.537591       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.575135       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.595083       1 shared_informer.go:320] Caches are synced for resource quota
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.609954       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-289800\" does not exist"
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.621070       1 shared_informer.go:320] Caches are synced for TTL
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.625042       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.628085       1 shared_informer.go:320] Caches are synced for attach detach
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.643871       1 shared_informer.go:320] Caches are synced for resource quota
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.653497       1 shared_informer.go:320] Caches are synced for GC
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.654871       1 shared_informer.go:320] Caches are synced for node
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.654996       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.655710       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.655972       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0501 04:16:55.640268    4352 command_runner.go:130] ! I0501 03:52:28.656192       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0501 04:16:55.640857    4352 command_runner.go:130] ! I0501 03:52:28.675109       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-289800" podCIDRs=["10.244.0.0/24"]
	I0501 04:16:55.640857    4352 command_runner.go:130] ! I0501 03:52:28.682120       1 shared_informer.go:320] Caches are synced for taint
	I0501 04:16:55.640857    4352 command_runner.go:130] ! I0501 03:52:28.682644       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0501 04:16:55.640950    4352 command_runner.go:130] ! I0501 03:52:28.682782       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-289800"
	I0501 04:16:55.640950    4352 command_runner.go:130] ! I0501 03:52:28.682855       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0501 04:16:55.640950    4352 command_runner.go:130] ! I0501 03:52:28.688787       1 shared_informer.go:320] Caches are synced for persistent volume
	I0501 04:16:55.640950    4352 command_runner.go:130] ! I0501 03:52:28.693874       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0501 04:16:55.640950    4352 command_runner.go:130] ! I0501 03:52:28.697526       1 shared_informer.go:320] Caches are synced for daemon sets
	I0501 04:16:55.640950    4352 command_runner.go:130] ! I0501 03:52:29.088696       1 shared_informer.go:320] Caches are synced for garbage collector
	I0501 04:16:55.640950    4352 command_runner.go:130] ! I0501 03:52:29.088746       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0501 04:16:55.640950    4352 command_runner.go:130] ! I0501 03:52:29.139257       1 shared_informer.go:320] Caches are synced for garbage collector
	I0501 04:16:55.640950    4352 command_runner.go:130] ! I0501 03:52:29.739066       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="528.452632ms"
	I0501 04:16:55.640950    4352 command_runner.go:130] ! I0501 03:52:29.796611       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="57.235573ms"
	I0501 04:16:55.640950    4352 command_runner.go:130] ! I0501 03:52:29.797135       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="429.196µs"
	I0501 04:16:55.640950    4352 command_runner.go:130] ! I0501 03:52:29.797745       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="61.4µs"
	I0501 04:16:55.640950    4352 command_runner.go:130] ! I0501 03:52:39.341653       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="93.1µs"
	I0501 04:16:55.640950    4352 command_runner.go:130] ! I0501 03:52:39.358462       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="77.3µs"
	I0501 04:16:55.640950    4352 command_runner.go:130] ! I0501 03:52:39.377150       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="79.9µs"
	I0501 04:16:55.640950    4352 command_runner.go:130] ! I0501 03:52:39.403208       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="64.2µs"
	I0501 04:16:55.640950    4352 command_runner.go:130] ! I0501 03:52:41.593793       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="63.7µs"
	I0501 04:16:55.640950    4352 command_runner.go:130] ! I0501 03:52:41.686793       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="54.969221ms"
	I0501 04:16:55.640950    4352 command_runner.go:130] ! I0501 03:52:41.713891       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="26.932914ms"
	I0501 04:16:55.640950    4352 command_runner.go:130] ! I0501 03:52:41.714840       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="58.4µs"
	I0501 04:16:55.640950    4352 command_runner.go:130] ! I0501 03:52:43.686562       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0501 04:16:55.640950    4352 command_runner.go:130] ! I0501 03:55:27.159233       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-289800-m02\" does not exist"
	I0501 04:16:55.640950    4352 command_runner.go:130] ! I0501 03:55:27.216693       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-289800-m02" podCIDRs=["10.244.1.0/24"]
	I0501 04:16:55.641555    4352 command_runner.go:130] ! I0501 03:55:28.718620       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-289800-m02"
	I0501 04:16:55.641555    4352 command_runner.go:130] ! I0501 03:55:50.611680       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:55.641666    4352 command_runner.go:130] ! I0501 03:56:17.356814       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="84.46504ms"
	I0501 04:16:55.641884    4352 command_runner.go:130] ! I0501 03:56:17.371366       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.143719ms"
	I0501 04:16:55.641911    4352 command_runner.go:130] ! I0501 03:56:17.372124       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="142.3µs"
	I0501 04:16:55.641911    4352 command_runner.go:130] ! I0501 03:56:17.379164       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.7µs"
	I0501 04:16:55.641911    4352 command_runner.go:130] ! I0501 03:56:19.725403       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.097702ms"
	I0501 04:16:55.641911    4352 command_runner.go:130] ! I0501 03:56:19.728196       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="2.611719ms"
	I0501 04:16:55.641911    4352 command_runner.go:130] ! I0501 03:56:19.839218       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.233167ms"
	I0501 04:16:55.641911    4352 command_runner.go:130] ! I0501 03:56:19.839355       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.1µs"
	I0501 04:16:55.641911    4352 command_runner.go:130] ! I0501 04:00:13.644614       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-289800-m03\" does not exist"
	I0501 04:16:55.641911    4352 command_runner.go:130] ! I0501 04:00:13.644755       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:55.641911    4352 command_runner.go:130] ! I0501 04:00:13.661934       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-289800-m03" podCIDRs=["10.244.2.0/24"]
	I0501 04:16:55.641911    4352 command_runner.go:130] ! I0501 04:00:13.802230       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-289800-m03"
	I0501 04:16:55.641911    4352 command_runner.go:130] ! I0501 04:00:36.640421       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:55.641911    4352 command_runner.go:130] ! I0501 04:08:13.948279       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:55.641911    4352 command_runner.go:130] ! I0501 04:10:57.898286       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:55.641911    4352 command_runner.go:130] ! I0501 04:11:04.117706       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:55.641911    4352 command_runner.go:130] ! I0501 04:11:04.120427       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-289800-m03\" does not exist"
	I0501 04:16:55.641911    4352 command_runner.go:130] ! I0501 04:11:04.128942       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-289800-m03" podCIDRs=["10.244.3.0/24"]
	I0501 04:16:55.641911    4352 command_runner.go:130] ! I0501 04:11:11.358226       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:55.641911    4352 command_runner.go:130] ! I0501 04:12:49.097072       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
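
	[note] The controller-manager output above was captured by the harness; for manual triage it can be re-pulled with the same pattern the harness uses for kindnet just below. A minimal sketch, assuming the Docker runtime inside the node and taking the kube-controller-manager container ID (66a1b89e6733f) from the `crictl ps -a` listing later in this section:

	    # sketch: re-pull the last 400 controller-manager log lines from inside the node
	    # (container ID per the container-status listing below; assumes docker runtime)
	    out/minikube-windows-amd64.exe -p multinode-289800 ssh -- "docker logs --tail 400 66a1b89e6733f"
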
	I0501 04:16:55.663350    4352 logs.go:123] Gathering logs for kindnet [b7cae3f6b88b] ...
	I0501 04:16:55.663350    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7cae3f6b88b"
	I0501 04:16:55.694512    4352 command_runner.go:130] ! I0501 04:15:45.341459       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0501 04:16:55.695273    4352 command_runner.go:130] ! I0501 04:15:45.342196       1 main.go:107] hostIP = 172.28.209.199
	I0501 04:16:55.695338    4352 command_runner.go:130] ! podIP = 172.28.209.199
	I0501 04:16:55.695338    4352 command_runner.go:130] ! I0501 04:15:45.343348       1 main.go:116] setting mtu 1500 for CNI 
	I0501 04:16:55.695338    4352 command_runner.go:130] ! I0501 04:15:45.343391       1 main.go:146] kindnetd IP family: "ipv4"
	I0501 04:16:55.695373    4352 command_runner.go:130] ! I0501 04:15:45.343412       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0501 04:16:55.695373    4352 command_runner.go:130] ! I0501 04:16:15.765193       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0501 04:16:55.695373    4352 command_runner.go:130] ! I0501 04:16:15.817499       1 main.go:223] Handling node with IPs: map[172.28.209.199:{}]
	I0501 04:16:55.695373    4352 command_runner.go:130] ! I0501 04:16:15.817549       1 main.go:227] handling current node
	I0501 04:16:55.695373    4352 command_runner.go:130] ! I0501 04:16:15.818026       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:55.695373    4352 command_runner.go:130] ! I0501 04:16:15.818042       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:55.695373    4352 command_runner.go:130] ! I0501 04:16:15.818289       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.28.219.162 Flags: [] Table: 0} 
	I0501 04:16:55.695373    4352 command_runner.go:130] ! I0501 04:16:15.818416       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:55.695373    4352 command_runner.go:130] ! I0501 04:16:15.818477       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:55.695373    4352 command_runner.go:130] ! I0501 04:16:15.818548       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.28.223.145 Flags: [] Table: 0} 
	I0501 04:16:55.695373    4352 command_runner.go:130] ! I0501 04:16:25.834949       1 main.go:223] Handling node with IPs: map[172.28.209.199:{}]
	I0501 04:16:55.695373    4352 command_runner.go:130] ! I0501 04:16:25.834995       1 main.go:227] handling current node
	I0501 04:16:55.695373    4352 command_runner.go:130] ! I0501 04:16:25.835008       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:55.695373    4352 command_runner.go:130] ! I0501 04:16:25.835016       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:55.695373    4352 command_runner.go:130] ! I0501 04:16:25.835192       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:55.695373    4352 command_runner.go:130] ! I0501 04:16:25.835220       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:55.695373    4352 command_runner.go:130] ! I0501 04:16:35.845752       1 main.go:223] Handling node with IPs: map[172.28.209.199:{}]
	I0501 04:16:55.695373    4352 command_runner.go:130] ! I0501 04:16:35.845835       1 main.go:227] handling current node
	I0501 04:16:55.695373    4352 command_runner.go:130] ! I0501 04:16:35.845848       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:55.695373    4352 command_runner.go:130] ! I0501 04:16:35.845856       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:55.695373    4352 command_runner.go:130] ! I0501 04:16:35.846322       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:55.695373    4352 command_runner.go:130] ! I0501 04:16:35.846423       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:55.695373    4352 command_runner.go:130] ! I0501 04:16:45.855212       1 main.go:223] Handling node with IPs: map[172.28.209.199:{}]
	I0501 04:16:55.695373    4352 command_runner.go:130] ! I0501 04:16:45.855323       1 main.go:227] handling current node
	I0501 04:16:55.695373    4352 command_runner.go:130] ! I0501 04:16:45.855339       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:55.695373    4352 command_runner.go:130] ! I0501 04:16:45.855347       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:55.695373    4352 command_runner.go:130] ! I0501 04:16:45.856266       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:55.695373    4352 command_runner.go:130] ! I0501 04:16:45.856305       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
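
	[note] The kindnet entries above show pod-CIDR routes being programmed toward the other nodes (10.244.1.0/24 via 172.28.219.162, 10.244.3.0/24 via 172.28.223.145). A quick sketch for confirming those routes landed in the node's routing table, assuming the same profile name:

	    # sketch: verify kindnet-installed pod-CIDR routes on the control-plane node
	    out/minikube-windows-amd64.exe -p multinode-289800 ssh -- "ip route show | grep 10.244"
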
	I0501 04:16:55.698228    4352 logs.go:123] Gathering logs for container status ...
	I0501 04:16:55.698490    4352 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0501 04:16:55.777760    4352 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0501 04:16:55.777760    4352 command_runner.go:130] > 1efd236274eb6       8c811b4aec35f                                                                                         7 seconds ago        Running             busybox                   1                   b85f507755ab5       busybox-fc5497c4f-cc6mk
	I0501 04:16:55.777760    4352 command_runner.go:130] > b8a9b405d76be       cbb01a7bd410d                                                                                         7 seconds ago        Running             coredns                   1                   2c1e1e1d13f30       coredns-7db6d8ff4d-8w9hq
	I0501 04:16:55.777926    4352 command_runner.go:130] > 8a0208aeafcfe       cbb01a7bd410d                                                                                         7 seconds ago        Running             coredns                   1                   ba9a40d190b00       coredns-7db6d8ff4d-x9zrw
	I0501 04:16:55.777987    4352 command_runner.go:130] > 239a5dfd3ae52       6e38f40d628db                                                                                         26 seconds ago       Running             storage-provisioner       2                   9055d30512df3       storage-provisioner
	I0501 04:16:55.777987    4352 command_runner.go:130] > b7cae3f6b88bc       4950bb10b3f87                                                                                         About a minute ago   Running             kindnet-cni               1                   f79e484da66a1       kindnet-vcxkr
	I0501 04:16:55.778146    4352 command_runner.go:130] > 01deddefba52a       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   9055d30512df3       storage-provisioner
	I0501 04:16:55.778146    4352 command_runner.go:130] > 3efcc92f817ee       a0bf559e280cf                                                                                         About a minute ago   Running             kube-proxy                1                   65bff4b6a8ae0       kube-proxy-bp9zx
	I0501 04:16:55.778253    4352 command_runner.go:130] > 34892fdb68983       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   6e076eed49263       etcd-multinode-289800
	I0501 04:16:55.778253    4352 command_runner.go:130] > 18cd30f3ad28f       c42f13656d0b2                                                                                         About a minute ago   Running             kube-apiserver            0                   51e331e75da77       kube-apiserver-multinode-289800
	I0501 04:16:55.778403    4352 command_runner.go:130] > 66a1b89e6733f       c7aad43836fa5                                                                                         About a minute ago   Running             kube-controller-manager   1                   3fd53aa8d8f5d       kube-controller-manager-multinode-289800
	I0501 04:16:55.778403    4352 command_runner.go:130] > eaf69fce5ee36       259c8277fcbbc                                                                                         About a minute ago   Running             kube-scheduler            1                   a8e27176eab83       kube-scheduler-multinode-289800
	I0501 04:16:55.778519    4352 command_runner.go:130] > 237d3dab2c4e1       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago       Exited              busybox                   0                   79bf9ebb58e36       busybox-fc5497c4f-cc6mk
	I0501 04:16:55.778519    4352 command_runner.go:130] > 15c4496e3a9f0       cbb01a7bd410d                                                                                         24 minutes ago       Exited              coredns                   0                   baf9e690eb533       coredns-7db6d8ff4d-x9zrw
	I0501 04:16:55.778519    4352 command_runner.go:130] > 3e8d5ff9a9e4a       cbb01a7bd410d                                                                                         24 minutes ago       Exited              coredns                   0                   9d509d032dc60       coredns-7db6d8ff4d-8w9hq
	I0501 04:16:55.778651    4352 command_runner.go:130] > 6d5f881ef3987       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              24 minutes ago       Exited              kindnet-cni               0                   4df6ba73bcf68       kindnet-vcxkr
	I0501 04:16:55.778651    4352 command_runner.go:130] > 502684407b0cf       a0bf559e280cf                                                                                         24 minutes ago       Exited              kube-proxy                0                   79bb6a06ed527       kube-proxy-bp9zx
	I0501 04:16:55.778766    4352 command_runner.go:130] > 4b62556f40bec       c7aad43836fa5                                                                                         24 minutes ago       Exited              kube-controller-manager   0                   f72a1c5b5cdd6       kube-controller-manager-multinode-289800
	I0501 04:16:55.778880    4352 command_runner.go:130] > 06f1f84bfde17       259c8277fcbbc                                                                                         24 minutes ago       Exited              kube-scheduler            0                   479b3ec741bef       kube-scheduler-multinode-289800
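
	[note] In the listing above, the ATTEMPT 1 entries are post-restart replacements for the 24-minute-old Exited containers from the original boot. To list only the exited ones directly on the node, a sketch (assuming the node's crictl version supports the --state filter):

	    # sketch: show only exited containers; falls back to docker if crictl is absent,
	    # mirroring the harness's own fallback pattern
	    out/minikube-windows-amd64.exe -p multinode-289800 ssh -- "sudo crictl ps -a --state Exited || sudo docker ps -a --filter status=exited"
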
	I0501 04:16:55.783727    4352 logs.go:123] Gathering logs for kubelet ...
	I0501 04:16:55.783967    4352 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 04:16:55.823599    4352 command_runner.go:130] > May 01 04:15:32 multinode-289800 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0501 04:16:55.823639    4352 command_runner.go:130] > May 01 04:15:32 multinode-289800 kubelet[1383]: I0501 04:15:32.875075    1383 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0501 04:16:55.823639    4352 command_runner.go:130] > May 01 04:15:32 multinode-289800 kubelet[1383]: I0501 04:15:32.875223    1383 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:16:55.823639    4352 command_runner.go:130] > May 01 04:15:32 multinode-289800 kubelet[1383]: I0501 04:15:32.876800    1383 server.go:927] "Client rotation is on, will bootstrap in background"
	I0501 04:16:55.823739    4352 command_runner.go:130] > May 01 04:15:32 multinode-289800 kubelet[1383]: E0501 04:15:32.877636    1383 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0501 04:16:55.823739    4352 command_runner.go:130] > May 01 04:15:32 multinode-289800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0501 04:16:55.823739    4352 command_runner.go:130] > May 01 04:15:32 multinode-289800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0501 04:16:55.823739    4352 command_runner.go:130] > May 01 04:15:33 multinode-289800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0501 04:16:55.823739    4352 command_runner.go:130] > May 01 04:15:33 multinode-289800 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0501 04:16:55.823818    4352 command_runner.go:130] > May 01 04:15:33 multinode-289800 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0501 04:16:55.823843    4352 command_runner.go:130] > May 01 04:15:33 multinode-289800 kubelet[1424]: I0501 04:15:33.593311    1424 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0501 04:16:55.823875    4352 command_runner.go:130] > May 01 04:15:33 multinode-289800 kubelet[1424]: I0501 04:15:33.595065    1424 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:16:55.823875    4352 command_runner.go:130] > May 01 04:15:33 multinode-289800 kubelet[1424]: I0501 04:15:33.597316    1424 server.go:927] "Client rotation is on, will bootstrap in background"
	I0501 04:16:55.823875    4352 command_runner.go:130] > May 01 04:15:33 multinode-289800 kubelet[1424]: E0501 04:15:33.597441    1424 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0501 04:16:55.823875    4352 command_runner.go:130] > May 01 04:15:33 multinode-289800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0501 04:16:55.823875    4352 command_runner.go:130] > May 01 04:15:33 multinode-289800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0501 04:16:55.823875    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
	I0501 04:16:55.823875    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0501 04:16:55.823875    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0501 04:16:55.823875    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 kubelet[1461]: I0501 04:15:34.327211    1461 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0501 04:16:55.823875    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 kubelet[1461]: I0501 04:15:34.327674    1461 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:16:55.823875    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 kubelet[1461]: I0501 04:15:34.328505    1461 server.go:927] "Client rotation is on, will bootstrap in background"
	I0501 04:16:55.823875    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 kubelet[1461]: E0501 04:15:34.328669    1461 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0501 04:16:55.823875    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0501 04:16:55.823875    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0501 04:16:55.823875    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0501 04:16:55.823875    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0501 04:16:55.823875    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.796836    1525 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0501 04:16:55.823875    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.797219    1525 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:16:55.823875    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.797640    1525 server.go:927] "Client rotation is on, will bootstrap in background"
	I0501 04:16:55.823875    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.799493    1525 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0501 04:16:55.823875    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.812278    1525 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0501 04:16:55.823875    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.846443    1525 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0501 04:16:55.823875    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.846668    1525 server.go:810] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0501 04:16:55.823875    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.847577    1525 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0501 04:16:55.823875    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.847671    1525 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-289800","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0501 04:16:55.823875    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.848600    1525 topology_manager.go:138] "Creating topology manager with none policy"
	I0501 04:16:55.824394    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.848674    1525 container_manager_linux.go:301] "Creating device plugin manager"
	I0501 04:16:55.824394    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.849347    1525 state_mem.go:36] "Initialized new in-memory state store"
	I0501 04:16:55.824445    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.851250    1525 kubelet.go:400] "Attempting to sync node with API server"
	I0501 04:16:55.824445    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.851388    1525 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0501 04:16:55.824498    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.851480    1525 kubelet.go:312] "Adding apiserver pod source"
	I0501 04:16:55.824524    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.852014    1525 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0501 04:16:55.824560    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.863109    1525 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="docker" version="26.0.2" apiVersion="v1"
	I0501 04:16:55.824560    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.868847    1525 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0501 04:16:55.824617    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: W0501 04:15:36.869729    1525 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0501 04:16:55.824686    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: W0501 04:15:36.870640    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-289800&limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:55.824716    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: E0501 04:15:36.871055    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-289800&limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:55.824716    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: W0501 04:15:36.869620    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:55.824716    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: E0501 04:15:36.872992    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:55.824716    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.872208    1525 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0501 04:16:55.824716    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.874268    1525 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0501 04:16:55.824716    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.872162    1525 server.go:1264] "Started kubelet"
	I0501 04:16:55.824716    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.876600    1525 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0501 04:16:55.824716    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.878390    1525 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
	I0501 04:16:55.824716    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.882899    1525 server.go:455] "Adding debug handlers to kubelet server"
	I0501 04:16:55.824716    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: E0501 04:15:36.888275    1525 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.28.209.199:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-289800.17cb4242948ce646  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-289800,UID:multinode-289800,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-289800,},FirstTimestamp:2024-05-01 04:15:36.872142406 +0000 UTC m=+0.158641226,LastTimestamp:2024-05-01 04:15:36.872142406 +0000 UTC m=+0.158641226,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-289800,}"
	I0501 04:16:55.824716    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.894478    1525 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0501 04:16:55.824716    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: E0501 04:15:36.899264    1525 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-289800?timeout=10s\": dial tcp 172.28.209.199:8443: connect: connection refused" interval="200ms"
	I0501 04:16:55.824716    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.900556    1525 factory.go:221] Registration of the systemd container factory successfully
	I0501 04:16:55.824716    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.900703    1525 factory.go:219] Registration of the crio container factory failed: Get "http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)crio%!F(MISSING)crio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0501 04:16:55.824716    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.900931    1525 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0501 04:16:55.824716    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.909390    1525 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0501 04:16:55.824716    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: W0501 04:15:36.922744    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:55.824716    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: E0501 04:15:36.923300    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:55.824716    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.961054    1525 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0501 04:16:55.824716    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.961177    1525 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0501 04:16:55.825257    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.961311    1525 state_mem.go:36] "Initialized new in-memory state store"
	I0501 04:16:55.825257    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.962539    1525 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0501 04:16:55.825257    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.962613    1525 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0501 04:16:55.825257    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.962649    1525 policy_none.go:49] "None policy: Start"
	I0501 04:16:55.825257    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.965264    1525 reconciler.go:26] "Reconciler: start to sync state"
	I0501 04:16:55.825257    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.981258    1525 memory_manager.go:170] "Starting memorymanager" policy="None"
	I0501 04:16:55.825257    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.991286    1525 state_mem.go:35] "Initializing new in-memory state store"
	I0501 04:16:55.825395    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.994410    1525 state_mem.go:75] "Updated machine memory state"
	I0501 04:16:55.825395    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.001037    1525 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0501 04:16:55.825438    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.005977    1525 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0501 04:16:55.825438    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.012301    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-289800"
	I0501 04:16:55.825513    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.018582    1525 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0501 04:16:55.825513    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.020477    1525 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0501 04:16:55.825578    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.020620    1525 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.28.209.199:8443: connect: connection refused" node="multinode-289800"
	I0501 04:16:55.825608    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.021548    1525 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-289800\" not found"
	I0501 04:16:55.825638    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.022495    1525 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0501 04:16:55.825672    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.022690    1525 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0501 04:16:55.825672    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.022715    1525 kubelet.go:2337] "Starting kubelet main sync loop"
	I0501 04:16:55.825733    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.022919    1525 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
	I0501 04:16:55.825775    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: W0501 04:15:37.028696    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:55.825825    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.028755    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:55.825870    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.045316    1525 iptables.go:577] "Could not set up iptables canary" err=<
	I0501 04:16:55.825870    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0501 04:16:55.825870    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0501 04:16:55.825870    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0501 04:16:55.825980    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0501 04:16:55.825980    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.102048    1525 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-289800?timeout=10s\": dial tcp 172.28.209.199:8443: connect: connection refused" interval="400ms"
	I0501 04:16:55.825980    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.124062    1525 topology_manager.go:215] "Topology Admit Handler" podUID="44d7830a7c97b8c7e460c0508d02be4e" podNamespace="kube-system" podName="kube-scheduler-multinode-289800"
	I0501 04:16:55.826076    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.125237    1525 topology_manager.go:215] "Topology Admit Handler" podUID="8b70cd8d31103a1cfca45e9856766786" podNamespace="kube-system" podName="kube-apiserver-multinode-289800"
	I0501 04:16:55.826076    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.126693    1525 topology_manager.go:215] "Topology Admit Handler" podUID="a17001fd2508d58fea9b1ae465b65254" podNamespace="kube-system" podName="kube-controller-manager-multinode-289800"
	I0501 04:16:55.826076    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.129279    1525 topology_manager.go:215] "Topology Admit Handler" podUID="b12e9024402f49cfac7440d6a2eaf42d" podNamespace="kube-system" podName="etcd-multinode-289800"
	I0501 04:16:55.826076    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.132159    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="479b3ec741befe4b1eddeb02949bcd198e18fa7dc4c196283e811e273e4edcbd"
	I0501 04:16:55.826180    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.132205    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d509d032dc607c6f771d62e39b125d9ec4ef121fdbac0798c929fe3f1662c88"
	I0501 04:16:55.826217    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.132217    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4df6ba73bcf683d21156e67827524b826f94059250b12cf08abd23da8345923a"
	I0501 04:16:55.826252    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.132236    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a338ea43bd9b03a0a56c5b614e36fd54cdd707fb4c2f5819a814e4ffd9bdcb65"
	I0501 04:16:55.826252    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.139102    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f72a1c5b5cdd65332e27f08445a684fc2d2f586ab1b8a2fb2c5c0dfc02b71165"
	I0501 04:16:55.826326    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.158602    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="976a9ff433ccbe7ec8d65d4f2797a7885430504d883ab65524674ad5ab989737"
	I0501 04:16:55.826357    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.174190    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79bb6a06ed527b42fe74673579e4a788915c66cd3717c52a344c73e0b7d12b34"
	I0501 04:16:55.826357    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.191042    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79bf9ebb58e36ddfba4654e8de212598f75bb256849f4fa384c80d54954f68f5"
	I0501 04:16:55.826408    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.208222    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="baf9e690eb533d1d1d65dee3905f907946c145ab490fd4e62c3d724a0ba12193"
	I0501 04:16:55.826450    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214646    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a17001fd2508d58fea9b1ae465b65254-ca-certs\") pod \"kube-controller-manager-multinode-289800\" (UID: \"a17001fd2508d58fea9b1ae465b65254\") " pod="kube-system/kube-controller-manager-multinode-289800"
	I0501 04:16:55.826507    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214710    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a17001fd2508d58fea9b1ae465b65254-k8s-certs\") pod \"kube-controller-manager-multinode-289800\" (UID: \"a17001fd2508d58fea9b1ae465b65254\") " pod="kube-system/kube-controller-manager-multinode-289800"
	I0501 04:16:55.826551    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214752    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a17001fd2508d58fea9b1ae465b65254-kubeconfig\") pod \"kube-controller-manager-multinode-289800\" (UID: \"a17001fd2508d58fea9b1ae465b65254\") " pod="kube-system/kube-controller-manager-multinode-289800"
	I0501 04:16:55.826604    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214812    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8b70cd8d31103a1cfca45e9856766786-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-289800\" (UID: \"8b70cd8d31103a1cfca45e9856766786\") " pod="kube-system/kube-apiserver-multinode-289800"
	I0501 04:16:55.826604    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214855    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/b12e9024402f49cfac7440d6a2eaf42d-etcd-data\") pod \"etcd-multinode-289800\" (UID: \"b12e9024402f49cfac7440d6a2eaf42d\") " pod="kube-system/etcd-multinode-289800"
	I0501 04:16:55.826649    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214875    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/44d7830a7c97b8c7e460c0508d02be4e-kubeconfig\") pod \"kube-scheduler-multinode-289800\" (UID: \"44d7830a7c97b8c7e460c0508d02be4e\") " pod="kube-system/kube-scheduler-multinode-289800"
	I0501 04:16:55.826693    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214899    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8b70cd8d31103a1cfca45e9856766786-ca-certs\") pod \"kube-apiserver-multinode-289800\" (UID: \"8b70cd8d31103a1cfca45e9856766786\") " pod="kube-system/kube-apiserver-multinode-289800"
	I0501 04:16:55.826729    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214925    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8b70cd8d31103a1cfca45e9856766786-k8s-certs\") pod \"kube-apiserver-multinode-289800\" (UID: \"8b70cd8d31103a1cfca45e9856766786\") " pod="kube-system/kube-apiserver-multinode-289800"
	I0501 04:16:55.826801    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214950    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a17001fd2508d58fea9b1ae465b65254-flexvolume-dir\") pod \"kube-controller-manager-multinode-289800\" (UID: \"a17001fd2508d58fea9b1ae465b65254\") " pod="kube-system/kube-controller-manager-multinode-289800"
	I0501 04:16:55.826848    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214973    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a17001fd2508d58fea9b1ae465b65254-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-289800\" (UID: \"a17001fd2508d58fea9b1ae465b65254\") " pod="kube-system/kube-controller-manager-multinode-289800"
	I0501 04:16:55.826917    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214994    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/b12e9024402f49cfac7440d6a2eaf42d-etcd-certs\") pod \"etcd-multinode-289800\" (UID: \"b12e9024402f49cfac7440d6a2eaf42d\") " pod="kube-system/etcd-multinode-289800"
	I0501 04:16:55.826917    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.222614    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-289800"
	I0501 04:16:55.826917    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.223837    1525 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.28.209.199:8443: connect: connection refused" node="multinode-289800"
	I0501 04:16:55.826980    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.227891    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9971ef577f2f8634ce17f0dd1b9640fcf2695833e8dc85607abd2a82571746b8"
	I0501 04:16:55.826980    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.504248    1525 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-289800?timeout=10s\": dial tcp 172.28.209.199:8443: connect: connection refused" interval="800ms"
	I0501 04:16:55.826980    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.625269    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-289800"
	I0501 04:16:55.827080    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.625998    1525 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.28.209.199:8443: connect: connection refused" node="multinode-289800"
	I0501 04:16:55.827124    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: W0501 04:15:37.852634    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:55.827158    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.852740    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:55.827211    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: W0501 04:15:38.063749    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:55.827254    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: E0501 04:15:38.063859    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:55.827352    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: I0501 04:15:38.260487    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e076eed49263cec5b0b06bbaa425cab2bf4a4b0a05e6dfa37993b20dff5ed93"
	I0501 04:16:55.827398    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: E0501 04:15:38.306204    1525 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-289800?timeout=10s\": dial tcp 172.28.209.199:8443: connect: connection refused" interval="1.6s"
	I0501 04:16:55.827398    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: W0501 04:15:38.357883    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-289800&limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:55.827481    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: E0501 04:15:38.357983    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-289800&limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:55.827522    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: W0501 04:15:38.424248    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:55.827559    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: E0501 04:15:38.424377    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:16:55.827559    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: I0501 04:15:38.428960    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-289800"
	I0501 04:16:55.827642    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: E0501 04:15:38.431040    1525 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.28.209.199:8443: connect: connection refused" node="multinode-289800"
	I0501 04:16:55.827642    4352 command_runner.go:130] > May 01 04:15:40 multinode-289800 kubelet[1525]: I0501 04:15:40.032371    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-289800"
	I0501 04:16:55.827642    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.639150    1525 kubelet_node_status.go:112] "Node was previously registered" node="multinode-289800"
	I0501 04:16:55.827642    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.640030    1525 kubelet_node_status.go:76] "Successfully registered node" node="multinode-289800"
	I0501 04:16:55.827642    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.642970    1525 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0501 04:16:55.827642    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.644297    1525 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0501 04:16:55.827642    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.646032    1525 setters.go:580] "Node became not ready" node="multinode-289800" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-05-01T04:15:42Z","lastTransitionTime":"2024-05-01T04:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0501 04:16:55.827642    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.869832    1525 apiserver.go:52] "Watching apiserver"
	I0501 04:16:55.827642    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.875356    1525 topology_manager.go:215] "Topology Admit Handler" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-8w9hq"
	I0501 04:16:55.827642    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.875613    1525 topology_manager.go:215] "Topology Admit Handler" podUID="aba82e50-b8f8-40b4-b08a-6d045314d6b6" podNamespace="kube-system" podName="kube-proxy-bp9zx"
	I0501 04:16:55.827642    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.875753    1525 topology_manager.go:215] "Topology Admit Handler" podUID="0b91b14d-bed3-4889-b193-db53daccd395" podNamespace="kube-system" podName="coredns-7db6d8ff4d-x9zrw"
	I0501 04:16:55.827642    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.875936    1525 topology_manager.go:215] "Topology Admit Handler" podUID="72ef61d4-4437-40da-86e7-4d7eb386b6de" podNamespace="kube-system" podName="kindnet-vcxkr"
	I0501 04:16:55.827642    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.876061    1525 topology_manager.go:215] "Topology Admit Handler" podUID="b8d2a827-d9a6-419a-a076-c7695a16a2b5" podNamespace="kube-system" podName="storage-provisioner"
	I0501 04:16:55.827642    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.876192    1525 topology_manager.go:215] "Topology Admit Handler" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f" podNamespace="default" podName="busybox-fc5497c4f-cc6mk"
	I0501 04:16:55.827642    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.876527    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:55.827642    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.877384    1525 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-289800" podUID="96a8cf0b-45bc-4636-9264-a0da579b5fa8"
	I0501 04:16:55.827642    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.878678    1525 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-289800" podUID="a1b99f2b-8aed-4037-956a-13bde4551a72"
	I0501 04:16:55.827642    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.879595    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:55.827642    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.884364    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:55.827642    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.910944    1525 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0501 04:16:55.828255    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.938877    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/72ef61d4-4437-40da-86e7-4d7eb386b6de-xtables-lock\") pod \"kindnet-vcxkr\" (UID: \"72ef61d4-4437-40da-86e7-4d7eb386b6de\") " pod="kube-system/kindnet-vcxkr"
	I0501 04:16:55.828255    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.939029    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b8d2a827-d9a6-419a-a076-c7695a16a2b5-tmp\") pod \"storage-provisioner\" (UID: \"b8d2a827-d9a6-419a-a076-c7695a16a2b5\") " pod="kube-system/storage-provisioner"
	I0501 04:16:55.828316    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.939149    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aba82e50-b8f8-40b4-b08a-6d045314d6b6-xtables-lock\") pod \"kube-proxy-bp9zx\" (UID: \"aba82e50-b8f8-40b4-b08a-6d045314d6b6\") " pod="kube-system/kube-proxy-bp9zx"
	I0501 04:16:55.828316    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.939242    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/72ef61d4-4437-40da-86e7-4d7eb386b6de-cni-cfg\") pod \"kindnet-vcxkr\" (UID: \"72ef61d4-4437-40da-86e7-4d7eb386b6de\") " pod="kube-system/kindnet-vcxkr"
	I0501 04:16:55.828316    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.939318    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/72ef61d4-4437-40da-86e7-4d7eb386b6de-lib-modules\") pod \"kindnet-vcxkr\" (UID: \"72ef61d4-4437-40da-86e7-4d7eb386b6de\") " pod="kube-system/kindnet-vcxkr"
	I0501 04:16:55.828316    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.939427    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aba82e50-b8f8-40b4-b08a-6d045314d6b6-lib-modules\") pod \"kube-proxy-bp9zx\" (UID: \"aba82e50-b8f8-40b4-b08a-6d045314d6b6\") " pod="kube-system/kube-proxy-bp9zx"
	I0501 04:16:55.828316    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.940207    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:16:55.828316    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.940401    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume podName:e3a349e9-97d8-4bba-8eac-deff1948600a nodeName:}" failed. No retries permitted until 2024-05-01 04:15:43.440364296 +0000 UTC m=+6.726863016 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume") pod "coredns-7db6d8ff4d-8w9hq" (UID: "e3a349e9-97d8-4bba-8eac-deff1948600a") : object "kube-system"/"coredns" not registered
	I0501 04:16:55.828316    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.940680    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:16:55.828316    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.940822    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume podName:0b91b14d-bed3-4889-b193-db53daccd395 nodeName:}" failed. No retries permitted until 2024-05-01 04:15:43.440808324 +0000 UTC m=+6.727307144 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume") pod "coredns-7db6d8ff4d-x9zrw" (UID: "0b91b14d-bed3-4889-b193-db53daccd395") : object "kube-system"/"coredns" not registered
	I0501 04:16:55.828316    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.948736    1525 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-289800"
	I0501 04:16:55.828316    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.958916    1525 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-289800"
	I0501 04:16:55.828316    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.975690    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:55.828316    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.975737    1525 projected.go:200] Error preparing data for projected volume kube-api-access-4r64v for pod default/busybox-fc5497c4f-cc6mk: object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:55.828316    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.975832    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v podName:7f61e6ee-cf9a-4903-ba51-2a3b6804717f nodeName:}" failed. No retries permitted until 2024-05-01 04:15:43.475811436 +0000 UTC m=+6.762310156 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-4r64v" (UniqueName: "kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v") pod "busybox-fc5497c4f-cc6mk" (UID: "7f61e6ee-cf9a-4903-ba51-2a3b6804717f") : object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:55.828316    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: I0501 04:15:43.052812    1525 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c17e9f88f256f5527a6565eb2da75f63" path="/var/lib/kubelet/pods/c17e9f88f256f5527a6565eb2da75f63/volumes"
	I0501 04:16:55.828316    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: I0501 04:15:43.054400    1525 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc7b6f2a7c826774b66af910f598e965" path="/var/lib/kubelet/pods/fc7b6f2a7c826774b66af910f598e965/volumes"
	I0501 04:16:55.828316    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: I0501 04:15:43.170146    1525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-289800" podStartSLOduration=1.170112215 podStartE2EDuration="1.170112215s" podCreationTimestamp="2024-05-01 04:15:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-01 04:15:43.140058816 +0000 UTC m=+6.426557536" watchObservedRunningTime="2024-05-01 04:15:43.170112215 +0000 UTC m=+6.456610935"
	I0501 04:16:55.828316    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: I0501 04:15:43.170304    1525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-289800" podStartSLOduration=1.170298327 podStartE2EDuration="1.170298327s" podCreationTimestamp="2024-05-01 04:15:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-01 04:15:43.16893474 +0000 UTC m=+6.455433460" watchObservedRunningTime="2024-05-01 04:15:43.170298327 +0000 UTC m=+6.456797147"
	I0501 04:16:55.828316    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: E0501 04:15:43.444132    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:16:55.828896    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: E0501 04:15:43.444229    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume podName:e3a349e9-97d8-4bba-8eac-deff1948600a nodeName:}" failed. No retries permitted until 2024-05-01 04:15:44.444209637 +0000 UTC m=+7.730708457 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume") pod "coredns-7db6d8ff4d-8w9hq" (UID: "e3a349e9-97d8-4bba-8eac-deff1948600a") : object "kube-system"/"coredns" not registered
	I0501 04:16:55.828896    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: E0501 04:15:43.444591    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:16:55.829044    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: E0501 04:15:43.444633    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume podName:0b91b14d-bed3-4889-b193-db53daccd395 nodeName:}" failed. No retries permitted until 2024-05-01 04:15:44.444622763 +0000 UTC m=+7.731121483 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume") pod "coredns-7db6d8ff4d-x9zrw" (UID: "0b91b14d-bed3-4889-b193-db53daccd395") : object "kube-system"/"coredns" not registered
	I0501 04:16:55.829088    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: E0501 04:15:43.544921    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:55.829088    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: E0501 04:15:43.545047    1525 projected.go:200] Error preparing data for projected volume kube-api-access-4r64v for pod default/busybox-fc5497c4f-cc6mk: object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:55.829146    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: E0501 04:15:43.545141    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v podName:7f61e6ee-cf9a-4903-ba51-2a3b6804717f nodeName:}" failed. No retries permitted until 2024-05-01 04:15:44.545110913 +0000 UTC m=+7.831609633 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-4r64v" (UniqueName: "kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v") pod "busybox-fc5497c4f-cc6mk" (UID: "7f61e6ee-cf9a-4903-ba51-2a3b6804717f") : object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:55.829146    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: I0501 04:15:44.039213    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9055d30512df38a5bce19ed5afcfdc450a7bd87a1eb169342c8bc7a42e81666f"
	I0501 04:16:55.829146    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: I0501 04:15:44.378804    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65bff4b6a8ae020fee0da9e1a818c4bac4d9a43a831eb7b5550b254c1f181ec7"
	I0501 04:16:55.829146    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: E0501 04:15:44.401946    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:55.829146    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: I0501 04:15:44.402229    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f79e484da66a15667f79326d8bae0a570ba551fd2e02926fd663a292f6b15752"
	I0501 04:16:55.829146    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: I0501 04:15:44.402476    1525 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-289800" podUID="96a8cf0b-45bc-4636-9264-a0da579b5fa8"
	I0501 04:16:55.829146    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: I0501 04:15:44.403391    1525 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-289800" podUID="a1b99f2b-8aed-4037-956a-13bde4551a72"
	I0501 04:16:55.829146    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: E0501 04:15:44.454688    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:16:55.829146    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: E0501 04:15:44.454983    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume podName:0b91b14d-bed3-4889-b193-db53daccd395 nodeName:}" failed. No retries permitted until 2024-05-01 04:15:46.454902809 +0000 UTC m=+9.741401629 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume") pod "coredns-7db6d8ff4d-x9zrw" (UID: "0b91b14d-bed3-4889-b193-db53daccd395") : object "kube-system"/"coredns" not registered
	I0501 04:16:55.829146    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: E0501 04:15:44.455515    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:16:55.829146    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: E0501 04:15:44.455560    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume podName:e3a349e9-97d8-4bba-8eac-deff1948600a nodeName:}" failed. No retries permitted until 2024-05-01 04:15:46.45554895 +0000 UTC m=+9.742047670 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume") pod "coredns-7db6d8ff4d-8w9hq" (UID: "e3a349e9-97d8-4bba-8eac-deff1948600a") : object "kube-system"/"coredns" not registered
	I0501 04:16:55.829719    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: E0501 04:15:44.555732    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:55.829719    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: E0501 04:15:44.555836    1525 projected.go:200] Error preparing data for projected volume kube-api-access-4r64v for pod default/busybox-fc5497c4f-cc6mk: object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:55.829985    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: E0501 04:15:44.555920    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v podName:7f61e6ee-cf9a-4903-ba51-2a3b6804717f nodeName:}" failed. No retries permitted until 2024-05-01 04:15:46.55587479 +0000 UTC m=+9.842373510 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-4r64v" (UniqueName: "kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v") pod "busybox-fc5497c4f-cc6mk" (UID: "7f61e6ee-cf9a-4903-ba51-2a3b6804717f") : object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:55.830109    4352 command_runner.go:130] > May 01 04:15:45 multinode-289800 kubelet[1525]: E0501 04:15:45.028227    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:55.830287    4352 command_runner.go:130] > May 01 04:15:45 multinode-289800 kubelet[1525]: E0501 04:15:45.028491    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:55.830355    4352 command_runner.go:130] > May 01 04:15:46 multinode-289800 kubelet[1525]: E0501 04:15:46.023829    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:55.830478    4352 command_runner.go:130] > May 01 04:15:46 multinode-289800 kubelet[1525]: E0501 04:15:46.486637    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:16:55.830566    4352 command_runner.go:130] > May 01 04:15:46 multinode-289800 kubelet[1525]: E0501 04:15:46.486963    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume podName:e3a349e9-97d8-4bba-8eac-deff1948600a nodeName:}" failed. No retries permitted until 2024-05-01 04:15:50.486942526 +0000 UTC m=+13.773441346 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume") pod "coredns-7db6d8ff4d-8w9hq" (UID: "e3a349e9-97d8-4bba-8eac-deff1948600a") : object "kube-system"/"coredns" not registered
	I0501 04:16:55.830566    4352 command_runner.go:130] > May 01 04:15:46 multinode-289800 kubelet[1525]: E0501 04:15:46.488686    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:16:55.830566    4352 command_runner.go:130] > May 01 04:15:46 multinode-289800 kubelet[1525]: E0501 04:15:46.489077    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume podName:0b91b14d-bed3-4889-b193-db53daccd395 nodeName:}" failed. No retries permitted until 2024-05-01 04:15:50.488847647 +0000 UTC m=+13.775346467 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume") pod "coredns-7db6d8ff4d-x9zrw" (UID: "0b91b14d-bed3-4889-b193-db53daccd395") : object "kube-system"/"coredns" not registered
	I0501 04:16:55.830566    4352 command_runner.go:130] > May 01 04:15:46 multinode-289800 kubelet[1525]: E0501 04:15:46.587833    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:55.830566    4352 command_runner.go:130] > May 01 04:15:46 multinode-289800 kubelet[1525]: E0501 04:15:46.587977    1525 projected.go:200] Error preparing data for projected volume kube-api-access-4r64v for pod default/busybox-fc5497c4f-cc6mk: object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:55.830566    4352 command_runner.go:130] > May 01 04:15:46 multinode-289800 kubelet[1525]: E0501 04:15:46.588185    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v podName:7f61e6ee-cf9a-4903-ba51-2a3b6804717f nodeName:}" failed. No retries permitted until 2024-05-01 04:15:50.588160623 +0000 UTC m=+13.874659443 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-4r64v" (UniqueName: "kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v") pod "busybox-fc5497c4f-cc6mk" (UID: "7f61e6ee-cf9a-4903-ba51-2a3b6804717f") : object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:55.830566    4352 command_runner.go:130] > May 01 04:15:47 multinode-289800 kubelet[1525]: E0501 04:15:47.027084    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:55.830566    4352 command_runner.go:130] > May 01 04:15:47 multinode-289800 kubelet[1525]: E0501 04:15:47.028397    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:55.830566    4352 command_runner.go:130] > May 01 04:15:48 multinode-289800 kubelet[1525]: E0501 04:15:48.022969    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:55.830566    4352 command_runner.go:130] > May 01 04:15:49 multinode-289800 kubelet[1525]: E0501 04:15:49.024347    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:55.830566    4352 command_runner.go:130] > May 01 04:15:49 multinode-289800 kubelet[1525]: E0501 04:15:49.025248    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:55.830566    4352 command_runner.go:130] > May 01 04:15:50 multinode-289800 kubelet[1525]: E0501 04:15:50.024175    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:55.830566    4352 command_runner.go:130] > May 01 04:15:50 multinode-289800 kubelet[1525]: E0501 04:15:50.523387    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:16:55.830566    4352 command_runner.go:130] > May 01 04:15:50 multinode-289800 kubelet[1525]: E0501 04:15:50.523508    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume podName:0b91b14d-bed3-4889-b193-db53daccd395 nodeName:}" failed. No retries permitted until 2024-05-01 04:15:58.523488538 +0000 UTC m=+21.809987358 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume") pod "coredns-7db6d8ff4d-x9zrw" (UID: "0b91b14d-bed3-4889-b193-db53daccd395") : object "kube-system"/"coredns" not registered
	I0501 04:16:55.830566    4352 command_runner.go:130] > May 01 04:15:50 multinode-289800 kubelet[1525]: E0501 04:15:50.524104    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:16:55.831118    4352 command_runner.go:130] > May 01 04:15:50 multinode-289800 kubelet[1525]: E0501 04:15:50.524150    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume podName:e3a349e9-97d8-4bba-8eac-deff1948600a nodeName:}" failed. No retries permitted until 2024-05-01 04:15:58.524137716 +0000 UTC m=+21.810636436 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume") pod "coredns-7db6d8ff4d-8w9hq" (UID: "e3a349e9-97d8-4bba-8eac-deff1948600a") : object "kube-system"/"coredns" not registered
	I0501 04:16:55.831240    4352 command_runner.go:130] > May 01 04:15:50 multinode-289800 kubelet[1525]: E0501 04:15:50.624897    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:55.831329    4352 command_runner.go:130] > May 01 04:15:50 multinode-289800 kubelet[1525]: E0501 04:15:50.625357    1525 projected.go:200] Error preparing data for projected volume kube-api-access-4r64v for pod default/busybox-fc5497c4f-cc6mk: object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:55.831440    4352 command_runner.go:130] > May 01 04:15:50 multinode-289800 kubelet[1525]: E0501 04:15:50.625742    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v podName:7f61e6ee-cf9a-4903-ba51-2a3b6804717f nodeName:}" failed. No retries permitted until 2024-05-01 04:15:58.625719971 +0000 UTC m=+21.912218691 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-4r64v" (UniqueName: "kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v") pod "busybox-fc5497c4f-cc6mk" (UID: "7f61e6ee-cf9a-4903-ba51-2a3b6804717f") : object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:55.831506    4352 command_runner.go:130] > May 01 04:15:51 multinode-289800 kubelet[1525]: E0501 04:15:51.024464    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:55.831614    4352 command_runner.go:130] > May 01 04:15:51 multinode-289800 kubelet[1525]: E0501 04:15:51.024959    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:55.831682    4352 command_runner.go:130] > May 01 04:15:52 multinode-289800 kubelet[1525]: E0501 04:15:52.024016    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:55.831788    4352 command_runner.go:130] > May 01 04:15:53 multinode-289800 kubelet[1525]: E0501 04:15:53.023669    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:55.831917    4352 command_runner.go:130] > May 01 04:15:53 multinode-289800 kubelet[1525]: E0501 04:15:53.024381    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:55.831939    4352 command_runner.go:130] > May 01 04:15:54 multinode-289800 kubelet[1525]: E0501 04:15:54.023529    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:55.832105    4352 command_runner.go:130] > May 01 04:15:55 multinode-289800 kubelet[1525]: E0501 04:15:55.023399    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:55.832105    4352 command_runner.go:130] > May 01 04:15:55 multinode-289800 kubelet[1525]: E0501 04:15:55.024039    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:55.832105    4352 command_runner.go:130] > May 01 04:15:56 multinode-289800 kubelet[1525]: E0501 04:15:56.023961    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:55.832105    4352 command_runner.go:130] > May 01 04:15:57 multinode-289800 kubelet[1525]: E0501 04:15:57.024583    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:55.832105    4352 command_runner.go:130] > May 01 04:15:57 multinode-289800 kubelet[1525]: E0501 04:15:57.025562    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:55.832105    4352 command_runner.go:130] > May 01 04:15:58 multinode-289800 kubelet[1525]: E0501 04:15:58.024494    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:55.832105    4352 command_runner.go:130] > May 01 04:15:58 multinode-289800 kubelet[1525]: E0501 04:15:58.606520    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:16:55.832105    4352 command_runner.go:130] > May 01 04:15:58 multinode-289800 kubelet[1525]: E0501 04:15:58.606584    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume podName:e3a349e9-97d8-4bba-8eac-deff1948600a nodeName:}" failed. No retries permitted until 2024-05-01 04:16:14.606569125 +0000 UTC m=+37.893067945 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume") pod "coredns-7db6d8ff4d-8w9hq" (UID: "e3a349e9-97d8-4bba-8eac-deff1948600a") : object "kube-system"/"coredns" not registered
	I0501 04:16:55.832105    4352 command_runner.go:130] > May 01 04:15:58 multinode-289800 kubelet[1525]: E0501 04:15:58.607052    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:16:55.832105    4352 command_runner.go:130] > May 01 04:15:58 multinode-289800 kubelet[1525]: E0501 04:15:58.607095    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume podName:0b91b14d-bed3-4889-b193-db53daccd395 nodeName:}" failed. No retries permitted until 2024-05-01 04:16:14.607084827 +0000 UTC m=+37.893583547 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume") pod "coredns-7db6d8ff4d-x9zrw" (UID: "0b91b14d-bed3-4889-b193-db53daccd395") : object "kube-system"/"coredns" not registered
	I0501 04:16:55.832105    4352 command_runner.go:130] > May 01 04:15:58 multinode-289800 kubelet[1525]: E0501 04:15:58.707959    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:55.832105    4352 command_runner.go:130] > May 01 04:15:58 multinode-289800 kubelet[1525]: E0501 04:15:58.708171    1525 projected.go:200] Error preparing data for projected volume kube-api-access-4r64v for pod default/busybox-fc5497c4f-cc6mk: object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:55.832705    4352 command_runner.go:130] > May 01 04:15:58 multinode-289800 kubelet[1525]: E0501 04:15:58.708240    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v podName:7f61e6ee-cf9a-4903-ba51-2a3b6804717f nodeName:}" failed. No retries permitted until 2024-05-01 04:16:14.708221599 +0000 UTC m=+37.994720419 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-4r64v" (UniqueName: "kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v") pod "busybox-fc5497c4f-cc6mk" (UID: "7f61e6ee-cf9a-4903-ba51-2a3b6804717f") : object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:55.832801    4352 command_runner.go:130] > May 01 04:15:59 multinode-289800 kubelet[1525]: E0501 04:15:59.024158    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:55.832924    4352 command_runner.go:130] > May 01 04:15:59 multinode-289800 kubelet[1525]: E0501 04:15:59.025055    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:55.832996    4352 command_runner.go:130] > May 01 04:16:00 multinode-289800 kubelet[1525]: E0501 04:16:00.023216    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:55.832996    4352 command_runner.go:130] > May 01 04:16:01 multinode-289800 kubelet[1525]: E0501 04:16:01.024905    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:55.832996    4352 command_runner.go:130] > May 01 04:16:01 multinode-289800 kubelet[1525]: E0501 04:16:01.025585    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:55.832996    4352 command_runner.go:130] > May 01 04:16:02 multinode-289800 kubelet[1525]: E0501 04:16:02.024143    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:55.832996    4352 command_runner.go:130] > May 01 04:16:03 multinode-289800 kubelet[1525]: E0501 04:16:03.023409    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:55.832996    4352 command_runner.go:130] > May 01 04:16:03 multinode-289800 kubelet[1525]: E0501 04:16:03.024062    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:55.832996    4352 command_runner.go:130] > May 01 04:16:04 multinode-289800 kubelet[1525]: E0501 04:16:04.023182    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:55.832996    4352 command_runner.go:130] > May 01 04:16:05 multinode-289800 kubelet[1525]: E0501 04:16:05.028055    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:55.832996    4352 command_runner.go:130] > May 01 04:16:05 multinode-289800 kubelet[1525]: E0501 04:16:05.029254    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:55.832996    4352 command_runner.go:130] > May 01 04:16:06 multinode-289800 kubelet[1525]: E0501 04:16:06.024522    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:55.832996    4352 command_runner.go:130] > May 01 04:16:07 multinode-289800 kubelet[1525]: E0501 04:16:07.024384    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:55.832996    4352 command_runner.go:130] > May 01 04:16:07 multinode-289800 kubelet[1525]: E0501 04:16:07.025431    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:55.833527    4352 command_runner.go:130] > May 01 04:16:08 multinode-289800 kubelet[1525]: E0501 04:16:08.024168    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:55.833605    4352 command_runner.go:130] > May 01 04:16:09 multinode-289800 kubelet[1525]: E0501 04:16:09.024117    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:55.833678    4352 command_runner.go:130] > May 01 04:16:09 multinode-289800 kubelet[1525]: E0501 04:16:09.025560    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:55.833755    4352 command_runner.go:130] > May 01 04:16:10 multinode-289800 kubelet[1525]: E0501 04:16:10.023881    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:55.833755    4352 command_runner.go:130] > May 01 04:16:11 multinode-289800 kubelet[1525]: E0501 04:16:11.023619    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:55.833755    4352 command_runner.go:130] > May 01 04:16:11 multinode-289800 kubelet[1525]: E0501 04:16:11.024277    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:55.833755    4352 command_runner.go:130] > May 01 04:16:12 multinode-289800 kubelet[1525]: E0501 04:16:12.024236    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:55.833755    4352 command_runner.go:130] > May 01 04:16:13 multinode-289800 kubelet[1525]: E0501 04:16:13.023153    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:55.833755    4352 command_runner.go:130] > May 01 04:16:13 multinode-289800 kubelet[1525]: E0501 04:16:13.023926    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:55.833755    4352 command_runner.go:130] > May 01 04:16:14 multinode-289800 kubelet[1525]: E0501 04:16:14.023335    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:55.833755    4352 command_runner.go:130] > May 01 04:16:14 multinode-289800 kubelet[1525]: E0501 04:16:14.657138    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:16:55.833755    4352 command_runner.go:130] > May 01 04:16:14 multinode-289800 kubelet[1525]: E0501 04:16:14.657461    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume podName:e3a349e9-97d8-4bba-8eac-deff1948600a nodeName:}" failed. No retries permitted until 2024-05-01 04:16:46.657440103 +0000 UTC m=+69.943938823 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume") pod "coredns-7db6d8ff4d-8w9hq" (UID: "e3a349e9-97d8-4bba-8eac-deff1948600a") : object "kube-system"/"coredns" not registered
	I0501 04:16:55.833755    4352 command_runner.go:130] > May 01 04:16:14 multinode-289800 kubelet[1525]: E0501 04:16:14.657218    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:16:55.833755    4352 command_runner.go:130] > May 01 04:16:14 multinode-289800 kubelet[1525]: E0501 04:16:14.657858    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume podName:0b91b14d-bed3-4889-b193-db53daccd395 nodeName:}" failed. No retries permitted until 2024-05-01 04:16:46.65783162 +0000 UTC m=+69.944330440 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume") pod "coredns-7db6d8ff4d-x9zrw" (UID: "0b91b14d-bed3-4889-b193-db53daccd395") : object "kube-system"/"coredns" not registered
	I0501 04:16:55.833755    4352 command_runner.go:130] > May 01 04:16:14 multinode-289800 kubelet[1525]: E0501 04:16:14.758303    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:55.833755    4352 command_runner.go:130] > May 01 04:16:14 multinode-289800 kubelet[1525]: E0501 04:16:14.758421    1525 projected.go:200] Error preparing data for projected volume kube-api-access-4r64v for pod default/busybox-fc5497c4f-cc6mk: object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:55.833755    4352 command_runner.go:130] > May 01 04:16:14 multinode-289800 kubelet[1525]: E0501 04:16:14.758487    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v podName:7f61e6ee-cf9a-4903-ba51-2a3b6804717f nodeName:}" failed. No retries permitted until 2024-05-01 04:16:46.758469083 +0000 UTC m=+70.044967903 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-4r64v" (UniqueName: "kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v") pod "busybox-fc5497c4f-cc6mk" (UID: "7f61e6ee-cf9a-4903-ba51-2a3b6804717f") : object "default"/"kube-root-ca.crt" not registered
	I0501 04:16:55.834286    4352 command_runner.go:130] > May 01 04:16:15 multinode-289800 kubelet[1525]: E0501 04:16:15.023369    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:16:55.834521    4352 command_runner.go:130] > May 01 04:16:15 multinode-289800 kubelet[1525]: E0501 04:16:15.024797    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:16:55.834598    4352 command_runner.go:130] > May 01 04:16:15 multinode-289800 kubelet[1525]: I0501 04:16:15.886834    1525 scope.go:117] "RemoveContainer" containerID="ee2238f98e350e8d80528b60fc5b614ce6048d8b34af2034a9947e26d8e6beab"
	I0501 04:16:55.834598    4352 command_runner.go:130] > May 01 04:16:15 multinode-289800 kubelet[1525]: I0501 04:16:15.887225    1525 scope.go:117] "RemoveContainer" containerID="01deddefba52af094ad6ca083f5c2bee336ace05959dec5d34b15a9ba6cc2539"
	I0501 04:16:55.834664    4352 command_runner.go:130] > May 01 04:16:15 multinode-289800 kubelet[1525]: E0501 04:16:15.887510    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b8d2a827-d9a6-419a-a076-c7695a16a2b5)\"" pod="kube-system/storage-provisioner" podUID="b8d2a827-d9a6-419a-a076-c7695a16a2b5"
	I0501 04:16:55.834714    4352 command_runner.go:130] > May 01 04:16:16 multinode-289800 kubelet[1525]: E0501 04:16:16.024360    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:16:55.834714    4352 command_runner.go:130] > May 01 04:16:16 multinode-289800 kubelet[1525]: I0501 04:16:16.618138    1525 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
	I0501 04:16:55.834714    4352 command_runner.go:130] > May 01 04:16:29 multinode-289800 kubelet[1525]: I0501 04:16:29.024408    1525 scope.go:117] "RemoveContainer" containerID="01deddefba52af094ad6ca083f5c2bee336ace05959dec5d34b15a9ba6cc2539"
	I0501 04:16:55.834714    4352 command_runner.go:130] > May 01 04:16:37 multinode-289800 kubelet[1525]: I0501 04:16:37.040204    1525 scope.go:117] "RemoveContainer" containerID="3244d1ee5ab428faf09a962609f2c940c36a998727a01b873d382eb5ee600ca3"
	I0501 04:16:55.834714    4352 command_runner.go:130] > May 01 04:16:37 multinode-289800 kubelet[1525]: E0501 04:16:37.057362    1525 iptables.go:577] "Could not set up iptables canary" err=<
	I0501 04:16:55.834714    4352 command_runner.go:130] > May 01 04:16:37 multinode-289800 kubelet[1525]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0501 04:16:55.834714    4352 command_runner.go:130] > May 01 04:16:37 multinode-289800 kubelet[1525]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0501 04:16:55.834714    4352 command_runner.go:130] > May 01 04:16:37 multinode-289800 kubelet[1525]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0501 04:16:55.834714    4352 command_runner.go:130] > May 01 04:16:37 multinode-289800 kubelet[1525]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0501 04:16:55.834714    4352 command_runner.go:130] > May 01 04:16:37 multinode-289800 kubelet[1525]: I0501 04:16:37.089866    1525 scope.go:117] "RemoveContainer" containerID="bbbe9bf276852c1e75b7b472a87e95dcf9a0871f6273a4c312d445eb91dfe06d"
	I0501 04:16:55.834714    4352 command_runner.go:130] > May 01 04:16:37 multinode-289800 kubelet[1525]: E0501 04:16:37.204127    1525 kuberuntime_manager.go:1450] "PodSandboxStatus of sandbox for pod" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 976a9ff433ccbe7ec8d65d4f2797a7885430504d883ab65524674ad5ab989737" podSandboxID="976a9ff433ccbe7ec8d65d4f2797a7885430504d883ab65524674ad5ab989737" pod="kube-system/kube-apiserver-multinode-289800"
	I0501 04:16:55.834714    4352 command_runner.go:130] > May 01 04:16:37 multinode-289800 kubelet[1525]: E0501 04:16:37.204257    1525 generic.go:453] "PLEG: Write status" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 976a9ff433ccbe7ec8d65d4f2797a7885430504d883ab65524674ad5ab989737" pod="kube-system/kube-apiserver-multinode-289800"
	I0501 04:16:55.834714    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 kubelet[1525]: I0501 04:16:47.967198    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c1e1e1d13f303dcd2ce93f0a883ff4415e684c864a3974a393b2aaba3328348"
	I0501 04:16:55.834714    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 kubelet[1525]: I0501 04:16:48.001452    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba9a40d190b009b916e22db66996ed829a6cc973db25f55dae89d747629a546b"
	I0501 04:16:55.892462    4352 logs.go:123] Gathering logs for kube-apiserver [18cd30f3ad28] ...
	I0501 04:16:55.892462    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cd30f3ad28"
	I0501 04:16:55.927845    4352 command_runner.go:130] ! I0501 04:15:39.445795       1 options.go:221] external host was not specified, using 172.28.209.199
	I0501 04:16:55.928388    4352 command_runner.go:130] ! I0501 04:15:39.453956       1 server.go:148] Version: v1.30.0
	I0501 04:16:55.928388    4352 command_runner.go:130] ! I0501 04:15:39.454357       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:16:55.928388    4352 command_runner.go:130] ! I0501 04:15:40.258184       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0501 04:16:55.928388    4352 command_runner.go:130] ! I0501 04:15:40.258591       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0501 04:16:55.928388    4352 command_runner.go:130] ! I0501 04:15:40.260085       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0501 04:16:55.928802    4352 command_runner.go:130] ! I0501 04:15:40.260405       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0501 04:16:55.928853    4352 command_runner.go:130] ! I0501 04:15:40.261810       1 instance.go:299] Using reconciler: lease
	I0501 04:16:55.928853    4352 command_runner.go:130] ! I0501 04:15:40.801281       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0501 04:16:55.928853    4352 command_runner.go:130] ! W0501 04:15:40.801386       1 genericapiserver.go:733] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:55.928853    4352 command_runner.go:130] ! I0501 04:15:41.090803       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0501 04:16:55.928853    4352 command_runner.go:130] ! I0501 04:15:41.091252       1 instance.go:696] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0501 04:16:55.929012    4352 command_runner.go:130] ! I0501 04:15:41.359171       1 instance.go:696] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0501 04:16:55.929113    4352 command_runner.go:130] ! I0501 04:15:41.532740       1 instance.go:696] API group "resource.k8s.io" is not enabled, skipping.
	I0501 04:16:55.929153    4352 command_runner.go:130] ! I0501 04:15:41.570911       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0501 04:16:55.929198    4352 command_runner.go:130] ! W0501 04:15:41.571018       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:55.929198    4352 command_runner.go:130] ! W0501 04:15:41.571046       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0501 04:16:55.929360    4352 command_runner.go:130] ! I0501 04:15:41.571875       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0501 04:16:55.929481    4352 command_runner.go:130] ! W0501 04:15:41.572053       1 genericapiserver.go:733] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:55.929536    4352 command_runner.go:130] ! I0501 04:15:41.573317       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0501 04:16:55.929536    4352 command_runner.go:130] ! I0501 04:15:41.574692       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0501 04:16:55.929597    4352 command_runner.go:130] ! W0501 04:15:41.574726       1 genericapiserver.go:733] Skipping API autoscaling/v2beta1 because it has no resources.
	I0501 04:16:55.929597    4352 command_runner.go:130] ! W0501 04:15:41.574734       1 genericapiserver.go:733] Skipping API autoscaling/v2beta2 because it has no resources.
	I0501 04:16:55.929597    4352 command_runner.go:130] ! I0501 04:15:41.576633       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0501 04:16:55.929597    4352 command_runner.go:130] ! W0501 04:15:41.576726       1 genericapiserver.go:733] Skipping API batch/v1beta1 because it has no resources.
	I0501 04:16:55.929597    4352 command_runner.go:130] ! I0501 04:15:41.577645       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0501 04:16:55.929597    4352 command_runner.go:130] ! W0501 04:15:41.577739       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:55.929597    4352 command_runner.go:130] ! W0501 04:15:41.577748       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0501 04:16:55.929597    4352 command_runner.go:130] ! I0501 04:15:41.578543       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0501 04:16:55.929597    4352 command_runner.go:130] ! W0501 04:15:41.578618       1 genericapiserver.go:733] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:55.929597    4352 command_runner.go:130] ! W0501 04:15:41.578731       1 genericapiserver.go:733] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:55.929597    4352 command_runner.go:130] ! I0501 04:15:41.579623       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0501 04:16:55.929597    4352 command_runner.go:130] ! I0501 04:15:41.582482       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0501 04:16:55.929597    4352 command_runner.go:130] ! W0501 04:15:41.582572       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:55.929597    4352 command_runner.go:130] ! W0501 04:15:41.582581       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0501 04:16:55.929597    4352 command_runner.go:130] ! I0501 04:15:41.583284       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0501 04:16:55.929597    4352 command_runner.go:130] ! W0501 04:15:41.583417       1 genericapiserver.go:733] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:55.929597    4352 command_runner.go:130] ! W0501 04:15:41.583428       1 genericapiserver.go:733] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0501 04:16:55.929597    4352 command_runner.go:130] ! I0501 04:15:41.585084       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0501 04:16:55.929597    4352 command_runner.go:130] ! W0501 04:15:41.585203       1 genericapiserver.go:733] Skipping API policy/v1beta1 because it has no resources.
	I0501 04:16:55.929597    4352 command_runner.go:130] ! I0501 04:15:41.588956       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0501 04:16:55.929597    4352 command_runner.go:130] ! W0501 04:15:41.589055       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:55.929597    4352 command_runner.go:130] ! W0501 04:15:41.589067       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0501 04:16:55.929597    4352 command_runner.go:130] ! I0501 04:15:41.589951       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0501 04:16:55.929597    4352 command_runner.go:130] ! W0501 04:15:41.590056       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:55.929597    4352 command_runner.go:130] ! W0501 04:15:41.590066       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0501 04:16:55.930143    4352 command_runner.go:130] ! I0501 04:15:41.593577       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0501 04:16:55.930143    4352 command_runner.go:130] ! W0501 04:15:41.593674       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:55.930312    4352 command_runner.go:130] ! W0501 04:15:41.593684       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0501 04:16:55.930389    4352 command_runner.go:130] ! I0501 04:15:41.595694       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0501 04:16:55.930389    4352 command_runner.go:130] ! I0501 04:15:41.597680       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0501 04:16:55.930509    4352 command_runner.go:130] ! W0501 04:15:41.597864       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0501 04:16:55.930570    4352 command_runner.go:130] ! W0501 04:15:41.597875       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:55.930570    4352 command_runner.go:130] ! I0501 04:15:41.603955       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0501 04:16:55.930644    4352 command_runner.go:130] ! W0501 04:15:41.604059       1 genericapiserver.go:733] Skipping API apps/v1beta2 because it has no resources.
	I0501 04:16:55.930644    4352 command_runner.go:130] ! W0501 04:15:41.604069       1 genericapiserver.go:733] Skipping API apps/v1beta1 because it has no resources.
	I0501 04:16:55.930709    4352 command_runner.go:130] ! I0501 04:15:41.607445       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0501 04:16:55.930709    4352 command_runner.go:130] ! W0501 04:15:41.607533       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:55.930786    4352 command_runner.go:130] ! W0501 04:15:41.607543       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0501 04:16:55.930786    4352 command_runner.go:130] ! I0501 04:15:41.608797       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0501 04:16:55.930851    4352 command_runner.go:130] ! W0501 04:15:41.608817       1 genericapiserver.go:733] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:55.930911    4352 command_runner.go:130] ! I0501 04:15:41.625599       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0501 04:16:55.930911    4352 command_runner.go:130] ! W0501 04:15:41.625618       1 genericapiserver.go:733] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0501 04:16:55.930991    4352 command_runner.go:130] ! I0501 04:15:42.332139       1 secure_serving.go:213] Serving securely on [::]:8443
	I0501 04:16:55.930991    4352 command_runner.go:130] ! I0501 04:15:42.332337       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0501 04:16:55.931053    4352 command_runner.go:130] ! I0501 04:15:42.332595       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0501 04:16:55.931241    4352 command_runner.go:130] ! I0501 04:15:42.333006       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0501 04:16:55.931293    4352 command_runner.go:130] ! I0501 04:15:42.333577       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0501 04:16:55.931361    4352 command_runner.go:130] ! I0501 04:15:42.333909       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0501 04:16:55.931361    4352 command_runner.go:130] ! I0501 04:15:42.334990       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0501 04:16:55.931361    4352 command_runner.go:130] ! I0501 04:15:42.335027       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0501 04:16:55.931429    4352 command_runner.go:130] ! I0501 04:15:42.335107       1 aggregator.go:163] waiting for initial CRD sync...
	I0501 04:16:55.931429    4352 command_runner.go:130] ! I0501 04:15:42.335378       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0501 04:16:55.931513    4352 command_runner.go:130] ! I0501 04:15:42.335424       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0501 04:16:55.931513    4352 command_runner.go:130] ! I0501 04:15:42.335517       1 available_controller.go:423] Starting AvailableConditionController
	I0501 04:16:55.931576    4352 command_runner.go:130] ! I0501 04:15:42.335533       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0501 04:16:55.931576    4352 command_runner.go:130] ! I0501 04:15:42.335556       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0501 04:16:55.931640    4352 command_runner.go:130] ! I0501 04:15:42.337835       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0501 04:16:55.931640    4352 command_runner.go:130] ! I0501 04:15:42.338196       1 controller.go:116] Starting legacy_token_tracking_controller
	I0501 04:16:55.931702    4352 command_runner.go:130] ! I0501 04:15:42.338360       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0501 04:16:55.931757    4352 command_runner.go:130] ! I0501 04:15:42.338519       1 controller.go:78] Starting OpenAPI AggregationController
	I0501 04:16:55.931757    4352 command_runner.go:130] ! I0501 04:15:42.339167       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0501 04:16:55.931819    4352 command_runner.go:130] ! I0501 04:15:42.339360       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0501 04:16:55.931819    4352 command_runner.go:130] ! I0501 04:15:42.339853       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0501 04:16:55.931875    4352 command_runner.go:130] ! I0501 04:15:42.361139       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0501 04:16:55.931938    4352 command_runner.go:130] ! I0501 04:15:42.361155       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0501 04:16:55.931938    4352 command_runner.go:130] ! I0501 04:15:42.361192       1 controller.go:139] Starting OpenAPI controller
	I0501 04:16:55.931994    4352 command_runner.go:130] ! I0501 04:15:42.361219       1 controller.go:87] Starting OpenAPI V3 controller
	I0501 04:16:55.931994    4352 command_runner.go:130] ! I0501 04:15:42.361233       1 naming_controller.go:291] Starting NamingConditionController
	I0501 04:16:55.931994    4352 command_runner.go:130] ! I0501 04:15:42.361253       1 establishing_controller.go:76] Starting EstablishingController
	I0501 04:16:55.932081    4352 command_runner.go:130] ! I0501 04:15:42.361274       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0501 04:16:55.932139    4352 command_runner.go:130] ! I0501 04:15:42.361288       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0501 04:16:55.932139    4352 command_runner.go:130] ! I0501 04:15:42.361301       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0501 04:16:55.932203    4352 command_runner.go:130] ! I0501 04:15:42.395816       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0501 04:16:55.932203    4352 command_runner.go:130] ! I0501 04:15:42.396242       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0501 04:16:55.932203    4352 command_runner.go:130] ! I0501 04:15:42.496145       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0501 04:16:55.932270    4352 command_runner.go:130] ! I0501 04:15:42.510644       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0501 04:16:55.932270    4352 command_runner.go:130] ! I0501 04:15:42.510702       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0501 04:16:55.932335    4352 command_runner.go:130] ! I0501 04:15:42.510859       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0501 04:16:55.932392    4352 command_runner.go:130] ! I0501 04:15:42.518082       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0501 04:16:55.932392    4352 command_runner.go:130] ! I0501 04:15:42.518718       1 aggregator.go:165] initial CRD sync complete...
	I0501 04:16:55.932392    4352 command_runner.go:130] ! I0501 04:15:42.518822       1 autoregister_controller.go:141] Starting autoregister controller
	I0501 04:16:55.932455    4352 command_runner.go:130] ! I0501 04:15:42.518833       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0501 04:16:55.932512    4352 command_runner.go:130] ! I0501 04:15:42.518839       1 cache.go:39] Caches are synced for autoregister controller
	I0501 04:16:55.932512    4352 command_runner.go:130] ! I0501 04:15:42.535654       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0501 04:16:55.932512    4352 command_runner.go:130] ! I0501 04:15:42.538744       1 shared_informer.go:320] Caches are synced for configmaps
	I0501 04:16:55.932576    4352 command_runner.go:130] ! I0501 04:15:42.553249       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0501 04:16:55.932576    4352 command_runner.go:130] ! I0501 04:15:42.558886       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0501 04:16:55.932640    4352 command_runner.go:130] ! I0501 04:15:42.560982       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0501 04:16:55.932701    4352 command_runner.go:130] ! I0501 04:15:42.561020       1 policy_source.go:224] refreshing policies
	I0501 04:16:55.932701    4352 command_runner.go:130] ! I0501 04:15:42.641630       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0501 04:16:55.932772    4352 command_runner.go:130] ! I0501 04:15:43.354880       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0501 04:16:55.932772    4352 command_runner.go:130] ! W0501 04:15:43.981051       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.28.209.199]
	I0501 04:16:55.932837    4352 command_runner.go:130] ! I0501 04:15:43.982709       1 controller.go:615] quota admission added evaluator for: endpoints
	I0501 04:16:55.932837    4352 command_runner.go:130] ! I0501 04:15:44.022518       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0501 04:16:55.932893    4352 command_runner.go:130] ! I0501 04:15:45.344677       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0501 04:16:55.932969    4352 command_runner.go:130] ! I0501 04:15:45.642753       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0501 04:16:55.932969    4352 command_runner.go:130] ! I0501 04:15:45.672938       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0501 04:16:55.933024    4352 command_runner.go:130] ! I0501 04:15:45.801984       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0501 04:16:55.933024    4352 command_runner.go:130] ! I0501 04:15:45.823813       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0501 04:16:55.942402    4352 logs.go:123] Gathering logs for etcd [34892fdb6898] ...
	I0501 04:16:55.942402    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34892fdb6898"
	I0501 04:16:55.972277    4352 command_runner.go:130] ! {"level":"warn","ts":"2024-05-01T04:15:38.997417Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0501 04:16:55.972776    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:38.998475Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.28.209.199:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.28.209.199:2380","--initial-cluster=multinode-289800=https://172.28.209.199:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.28.209.199:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.28.209.199:2380","--name=multinode-289800","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0501 04:16:55.973134    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:38.998558Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0501 04:16:55.973181    4352 command_runner.go:130] ! {"level":"warn","ts":"2024-05-01T04:15:38.998588Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0501 04:16:55.973181    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:38.998599Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.28.209.199:2380"]}
	I0501 04:16:55.973181    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:38.998626Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0501 04:16:55.973181    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.006405Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.28.209.199:2379"]}
	I0501 04:16:55.973181    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.007658Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-289800","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.28.209.199:2380"],"listen-peer-urls":["https://172.28.209.199:2380"],"advertise-client-urls":["https://172.28.209.199:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.28.209.199:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0501 04:16:55.973181    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.030589Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"21.951987ms"}
	I0501 04:16:55.973181    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.081537Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0501 04:16:55.973181    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.104039Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"d720844a1e03b483","local-member-id":"fe483b81e7b7d166","commit-index":2020}
	I0501 04:16:55.973181    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.104878Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 switched to configuration voters=()"}
	I0501 04:16:55.973181    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.105251Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 became follower at term 2"}
	I0501 04:16:55.973181    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.105519Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft fe483b81e7b7d166 [peers: [], term: 2, commit: 2020, applied: 0, lastindex: 2020, lastterm: 2]"}
	I0501 04:16:55.973181    4352 command_runner.go:130] ! {"level":"warn","ts":"2024-05-01T04:15:39.121672Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0501 04:16:55.973181    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.127575Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1352}
	I0501 04:16:55.973181    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.132217Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1744}
	I0501 04:16:55.973777    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.144206Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0501 04:16:55.973777    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.15993Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"fe483b81e7b7d166","timeout":"7s"}
	I0501 04:16:55.973841    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.160468Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"fe483b81e7b7d166"}
	I0501 04:16:55.973841    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.160545Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"fe483b81e7b7d166","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	I0501 04:16:55.973841    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.16402Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	I0501 04:16:55.973841    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.165851Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0501 04:16:55.973956    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.166004Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0501 04:16:55.973998    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.166021Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0501 04:16:55.973998    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.169808Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 switched to configuration voters=(18322960513081266534)"}
	I0501 04:16:55.974052    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.1699Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d720844a1e03b483","local-member-id":"fe483b81e7b7d166","added-peer-id":"fe483b81e7b7d166","added-peer-peer-urls":["https://172.28.209.152:2380"]}
	I0501 04:16:55.974094    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.172064Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d720844a1e03b483","local-member-id":"fe483b81e7b7d166","cluster-version":"3.5"}
	I0501 04:16:55.974094    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.172365Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0501 04:16:55.974139    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.184058Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0501 04:16:55.974238    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.184564Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"fe483b81e7b7d166","initial-advertise-peer-urls":["https://172.28.209.199:2380"],"listen-peer-urls":["https://172.28.209.199:2380"],"advertise-client-urls":["https://172.28.209.199:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.28.209.199:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0501 04:16:55.974238    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.184741Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0501 04:16:55.974291    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.185843Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.28.209.199:2380"}
	I0501 04:16:55.974291    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.185973Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.28.209.199:2380"}
	I0501 04:16:55.974332    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.708419Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 is starting a new election at term 2"}
	I0501 04:16:55.974332    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.70848Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 became pre-candidate at term 2"}
	I0501 04:16:55.974369    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.708514Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 received MsgPreVoteResp from fe483b81e7b7d166 at term 2"}
	I0501 04:16:55.974419    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.70853Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 became candidate at term 3"}
	I0501 04:16:55.974419    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.708552Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 received MsgVoteResp from fe483b81e7b7d166 at term 3"}
	I0501 04:16:55.974456    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.708562Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 became leader at term 3"}
	I0501 04:16:55.974456    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.708576Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fe483b81e7b7d166 elected leader fe483b81e7b7d166 at term 3"}
	I0501 04:16:55.974505    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.716912Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"fe483b81e7b7d166","local-member-attributes":"{Name:multinode-289800 ClientURLs:[https://172.28.209.199:2379]}","request-path":"/0/members/fe483b81e7b7d166/attributes","cluster-id":"d720844a1e03b483","publish-timeout":"7s"}
	I0501 04:16:55.974543    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.717064Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0501 04:16:55.974543    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.724343Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0501 04:16:55.974543    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.729592Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.28.209.199:2379"}
	I0501 04:16:55.974584    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.730744Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0501 04:16:55.974584    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.731057Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0501 04:16:55.974622    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.732147Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0501 04:16:55.982199    4352 logs.go:123] Gathering logs for kindnet [6d5f881ef398] ...
	I0501 04:16:55.982199    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d5f881ef398"
	I0501 04:16:56.024418    4352 command_runner.go:130] ! I0501 04:01:59.122485       1 main.go:227] handling current node
	I0501 04:16:56.025455    4352 command_runner.go:130] ! I0501 04:01:59.122501       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.025455    4352 command_runner.go:130] ! I0501 04:01:59.122510       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.025455    4352 command_runner.go:130] ! I0501 04:01:59.122690       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.025455    4352 command_runner.go:130] ! I0501 04:01:59.122722       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.025455    4352 command_runner.go:130] ! I0501 04:02:09.153658       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.025455    4352 command_runner.go:130] ! I0501 04:02:09.153775       1 main.go:227] handling current node
	I0501 04:16:56.025455    4352 command_runner.go:130] ! I0501 04:02:09.153793       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.025455    4352 command_runner.go:130] ! I0501 04:02:09.153803       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.025455    4352 command_runner.go:130] ! I0501 04:02:09.153946       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.025613    4352 command_runner.go:130] ! I0501 04:02:09.153980       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.025613    4352 command_runner.go:130] ! I0501 04:02:19.161031       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.025613    4352 command_runner.go:130] ! I0501 04:02:19.161061       1 main.go:227] handling current node
	I0501 04:16:56.025613    4352 command_runner.go:130] ! I0501 04:02:19.161073       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.025613    4352 command_runner.go:130] ! I0501 04:02:19.161079       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.025613    4352 command_runner.go:130] ! I0501 04:02:19.161177       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.025765    4352 command_runner.go:130] ! I0501 04:02:19.161185       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.025765    4352 command_runner.go:130] ! I0501 04:02:29.181653       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.025765    4352 command_runner.go:130] ! I0501 04:02:29.181721       1 main.go:227] handling current node
	I0501 04:16:56.025765    4352 command_runner.go:130] ! I0501 04:02:29.181735       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.025765    4352 command_runner.go:130] ! I0501 04:02:29.181742       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.025849    4352 command_runner.go:130] ! I0501 04:02:29.182277       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.025849    4352 command_runner.go:130] ! I0501 04:02:29.182369       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.025849    4352 command_runner.go:130] ! I0501 04:02:39.195902       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.025849    4352 command_runner.go:130] ! I0501 04:02:39.196079       1 main.go:227] handling current node
	I0501 04:16:56.025849    4352 command_runner.go:130] ! I0501 04:02:39.196095       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.025849    4352 command_runner.go:130] ! I0501 04:02:39.196105       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.025849    4352 command_runner.go:130] ! I0501 04:02:39.196558       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.025849    4352 command_runner.go:130] ! I0501 04:02:39.196649       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.025849    4352 command_runner.go:130] ! I0501 04:02:49.209858       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.025849    4352 command_runner.go:130] ! I0501 04:02:49.209973       1 main.go:227] handling current node
	I0501 04:16:56.026422    4352 command_runner.go:130] ! I0501 04:02:49.210027       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.026422    4352 command_runner.go:130] ! I0501 04:02:49.210041       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.026512    4352 command_runner.go:130] ! I0501 04:02:49.210461       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.026512    4352 command_runner.go:130] ! I0501 04:02:49.210617       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.026512    4352 command_runner.go:130] ! I0501 04:02:59.219550       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.026553    4352 command_runner.go:130] ! I0501 04:02:59.219615       1 main.go:227] handling current node
	I0501 04:16:56.026553    4352 command_runner.go:130] ! I0501 04:02:59.219631       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.026553    4352 command_runner.go:130] ! I0501 04:02:59.219638       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.026635    4352 command_runner.go:130] ! I0501 04:02:59.220333       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.026635    4352 command_runner.go:130] ! I0501 04:02:59.220436       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.026635    4352 command_runner.go:130] ! I0501 04:03:09.231302       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.026635    4352 command_runner.go:130] ! I0501 04:03:09.232437       1 main.go:227] handling current node
	I0501 04:16:56.026635    4352 command_runner.go:130] ! I0501 04:03:09.232648       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.026635    4352 command_runner.go:130] ! I0501 04:03:09.232851       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.026635    4352 command_runner.go:130] ! I0501 04:03:09.233578       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.026635    4352 command_runner.go:130] ! I0501 04:03:09.233631       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.026635    4352 command_runner.go:130] ! I0501 04:03:19.245975       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.026635    4352 command_runner.go:130] ! I0501 04:03:19.246060       1 main.go:227] handling current node
	I0501 04:16:56.026635    4352 command_runner.go:130] ! I0501 04:03:19.246073       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.026635    4352 command_runner.go:130] ! I0501 04:03:19.246081       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.028216    4352 command_runner.go:130] ! I0501 04:03:19.246386       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.028216    4352 command_runner.go:130] ! I0501 04:03:19.246423       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.028216    4352 command_runner.go:130] ! I0501 04:03:29.258941       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.028216    4352 command_runner.go:130] ! I0501 04:03:29.259020       1 main.go:227] handling current node
	I0501 04:16:56.028216    4352 command_runner.go:130] ! I0501 04:03:29.259036       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.028216    4352 command_runner.go:130] ! I0501 04:03:29.259044       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.028216    4352 command_runner.go:130] ! I0501 04:03:29.259485       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.028216    4352 command_runner.go:130] ! I0501 04:03:29.259520       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.028216    4352 command_runner.go:130] ! I0501 04:03:39.269941       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.028216    4352 command_runner.go:130] ! I0501 04:03:39.270129       1 main.go:227] handling current node
	I0501 04:16:56.028216    4352 command_runner.go:130] ! I0501 04:03:39.270152       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.028216    4352 command_runner.go:130] ! I0501 04:03:39.270161       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.028216    4352 command_runner.go:130] ! I0501 04:03:39.270403       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.028216    4352 command_runner.go:130] ! I0501 04:03:39.270438       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.028216    4352 command_runner.go:130] ! I0501 04:03:49.282880       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.028216    4352 command_runner.go:130] ! I0501 04:03:49.283025       1 main.go:227] handling current node
	I0501 04:16:56.028216    4352 command_runner.go:130] ! I0501 04:03:49.283045       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.028216    4352 command_runner.go:130] ! I0501 04:03:49.283054       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.028216    4352 command_runner.go:130] ! I0501 04:03:49.283773       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.028216    4352 command_runner.go:130] ! I0501 04:03:49.283792       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.028216    4352 command_runner.go:130] ! I0501 04:03:59.297110       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.028762    4352 command_runner.go:130] ! I0501 04:03:59.297155       1 main.go:227] handling current node
	I0501 04:16:56.028762    4352 command_runner.go:130] ! I0501 04:03:59.297169       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.028762    4352 command_runner.go:130] ! I0501 04:03:59.297177       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.028762    4352 command_runner.go:130] ! I0501 04:03:59.297656       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.028762    4352 command_runner.go:130] ! I0501 04:03:59.297688       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.028762    4352 command_runner.go:130] ! I0501 04:04:09.310638       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.028762    4352 command_runner.go:130] ! I0501 04:04:09.311476       1 main.go:227] handling current node
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:09.311969       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:09.312340       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:09.313291       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:09.313332       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:19.324939       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:19.325084       1 main.go:227] handling current node
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:19.325480       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:19.325493       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:19.325923       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:19.326083       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:29.332468       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:29.332576       1 main.go:227] handling current node
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:29.332619       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:29.332645       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:29.332818       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:29.332831       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:39.342867       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:39.342901       1 main.go:227] handling current node
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:39.342914       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:39.342921       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:39.343433       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:39.343593       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:49.364771       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:49.364905       1 main.go:227] handling current node
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:49.364921       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:49.364930       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:49.365166       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:49.365205       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:59.379243       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:59.379352       1 main.go:227] handling current node
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:59.379369       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:59.379377       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:59.379531       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.028882    4352 command_runner.go:130] ! I0501 04:04:59.379564       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.029410    4352 command_runner.go:130] ! I0501 04:05:09.389743       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.029410    4352 command_runner.go:130] ! I0501 04:05:09.390518       1 main.go:227] handling current node
	I0501 04:16:56.029410    4352 command_runner.go:130] ! I0501 04:05:09.390622       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.029410    4352 command_runner.go:130] ! I0501 04:05:09.390636       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.029410    4352 command_runner.go:130] ! I0501 04:05:09.390894       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.029410    4352 command_runner.go:130] ! I0501 04:05:09.391049       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.029410    4352 command_runner.go:130] ! I0501 04:05:19.400837       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.029410    4352 command_runner.go:130] ! I0501 04:05:19.401285       1 main.go:227] handling current node
	I0501 04:16:56.029569    4352 command_runner.go:130] ! I0501 04:05:19.401439       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.029594    4352 command_runner.go:130] ! I0501 04:05:19.401572       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.029594    4352 command_runner.go:130] ! I0501 04:05:19.401956       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.029667    4352 command_runner.go:130] ! I0501 04:05:19.402136       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.029667    4352 command_runner.go:130] ! I0501 04:05:29.422040       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.029667    4352 command_runner.go:130] ! I0501 04:05:29.422249       1 main.go:227] handling current node
	I0501 04:16:56.029667    4352 command_runner.go:130] ! I0501 04:05:29.422285       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.029667    4352 command_runner.go:130] ! I0501 04:05:29.422311       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.029667    4352 command_runner.go:130] ! I0501 04:05:29.422521       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.029827    4352 command_runner.go:130] ! I0501 04:05:29.422723       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.029849    4352 command_runner.go:130] ! I0501 04:05:39.429807       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.029941    4352 command_runner.go:130] ! I0501 04:05:39.429856       1 main.go:227] handling current node
	I0501 04:16:56.029996    4352 command_runner.go:130] ! I0501 04:05:39.429874       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.029996    4352 command_runner.go:130] ! I0501 04:05:39.429881       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.030215    4352 command_runner.go:130] ! I0501 04:05:39.430903       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.030215    4352 command_runner.go:130] ! I0501 04:05:39.431340       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.030318    4352 command_runner.go:130] ! I0501 04:05:49.445455       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.030318    4352 command_runner.go:130] ! I0501 04:05:49.445594       1 main.go:227] handling current node
	I0501 04:16:56.030365    4352 command_runner.go:130] ! I0501 04:05:49.445610       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.030404    4352 command_runner.go:130] ! I0501 04:05:49.445619       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.030404    4352 command_runner.go:130] ! I0501 04:05:49.445751       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.030404    4352 command_runner.go:130] ! I0501 04:05:49.445765       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.030404    4352 command_runner.go:130] ! I0501 04:05:59.461135       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.030404    4352 command_runner.go:130] ! I0501 04:05:59.461248       1 main.go:227] handling current node
	I0501 04:16:56.030544    4352 command_runner.go:130] ! I0501 04:05:59.461264       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.030544    4352 command_runner.go:130] ! I0501 04:05:59.461273       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.030544    4352 command_runner.go:130] ! I0501 04:05:59.461947       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.030614    4352 command_runner.go:130] ! I0501 04:05:59.462094       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.030614    4352 command_runner.go:130] ! I0501 04:06:09.469509       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.030640    4352 command_runner.go:130] ! I0501 04:06:09.469615       1 main.go:227] handling current node
	I0501 04:16:56.030682    4352 command_runner.go:130] ! I0501 04:06:09.469636       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.030682    4352 command_runner.go:130] ! I0501 04:06:09.469646       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.030682    4352 command_runner.go:130] ! I0501 04:06:09.470218       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.030734    4352 command_runner.go:130] ! I0501 04:06:09.470387       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.030734    4352 command_runner.go:130] ! I0501 04:06:19.486501       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.030734    4352 command_runner.go:130] ! I0501 04:06:19.486605       1 main.go:227] handling current node
	I0501 04:16:56.030734    4352 command_runner.go:130] ! I0501 04:06:19.486621       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.030734    4352 command_runner.go:130] ! I0501 04:06:19.486629       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.030795    4352 command_runner.go:130] ! I0501 04:06:19.486864       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.030795    4352 command_runner.go:130] ! I0501 04:06:19.486946       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.030795    4352 command_runner.go:130] ! I0501 04:06:29.503311       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.030795    4352 command_runner.go:130] ! I0501 04:06:29.503476       1 main.go:227] handling current node
	I0501 04:16:56.030795    4352 command_runner.go:130] ! I0501 04:06:29.503492       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.030864    4352 command_runner.go:130] ! I0501 04:06:29.503503       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.030864    4352 command_runner.go:130] ! I0501 04:06:29.503633       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.030864    4352 command_runner.go:130] ! I0501 04:06:29.503843       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.030864    4352 command_runner.go:130] ! I0501 04:06:39.528749       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.030864    4352 command_runner.go:130] ! I0501 04:06:39.528837       1 main.go:227] handling current node
	I0501 04:16:56.030864    4352 command_runner.go:130] ! I0501 04:06:39.528853       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.030951    4352 command_runner.go:130] ! I0501 04:06:39.528861       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.030951    4352 command_runner.go:130] ! I0501 04:06:39.529235       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.030951    4352 command_runner.go:130] ! I0501 04:06:39.529373       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.030951    4352 command_runner.go:130] ! I0501 04:06:49.535984       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.030951    4352 command_runner.go:130] ! I0501 04:06:49.536067       1 main.go:227] handling current node
	I0501 04:16:56.031029    4352 command_runner.go:130] ! I0501 04:06:49.536082       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.031029    4352 command_runner.go:130] ! I0501 04:06:49.536092       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.031029    4352 command_runner.go:130] ! I0501 04:06:49.536689       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.031029    4352 command_runner.go:130] ! I0501 04:06:49.536802       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.031029    4352 command_runner.go:130] ! I0501 04:06:59.550480       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.031029    4352 command_runner.go:130] ! I0501 04:06:59.551072       1 main.go:227] handling current node
	I0501 04:16:56.031101    4352 command_runner.go:130] ! I0501 04:06:59.551257       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.031101    4352 command_runner.go:130] ! I0501 04:06:59.551358       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.031101    4352 command_runner.go:130] ! I0501 04:06:59.551696       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.031101    4352 command_runner.go:130] ! I0501 04:06:59.551781       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.031101    4352 command_runner.go:130] ! I0501 04:07:09.569460       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.031187    4352 command_runner.go:130] ! I0501 04:07:09.569627       1 main.go:227] handling current node
	I0501 04:16:56.031248    4352 command_runner.go:130] ! I0501 04:07:09.569642       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.031248    4352 command_runner.go:130] ! I0501 04:07:09.569651       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.031248    4352 command_runner.go:130] ! I0501 04:07:09.570296       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.031248    4352 command_runner.go:130] ! I0501 04:07:09.570434       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.031248    4352 command_runner.go:130] ! I0501 04:07:19.577507       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.031330    4352 command_runner.go:130] ! I0501 04:07:19.577599       1 main.go:227] handling current node
	I0501 04:16:56.031330    4352 command_runner.go:130] ! I0501 04:07:19.577615       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.031330    4352 command_runner.go:130] ! I0501 04:07:19.577730       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.031373    4352 command_runner.go:130] ! I0501 04:07:19.578102       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.031373    4352 command_runner.go:130] ! I0501 04:07:19.578208       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.031422    4352 command_runner.go:130] ! I0501 04:07:29.592703       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.031422    4352 command_runner.go:130] ! I0501 04:07:29.592845       1 main.go:227] handling current node
	I0501 04:16:56.031422    4352 command_runner.go:130] ! I0501 04:07:29.592861       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.031422    4352 command_runner.go:130] ! I0501 04:07:29.592869       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.031422    4352 command_runner.go:130] ! I0501 04:07:29.593139       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.031422    4352 command_runner.go:130] ! I0501 04:07:29.593174       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.031640    4352 command_runner.go:130] ! I0501 04:07:39.602034       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.031800    4352 command_runner.go:130] ! I0501 04:07:39.602064       1 main.go:227] handling current node
	I0501 04:16:56.031877    4352 command_runner.go:130] ! I0501 04:07:39.602077       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.031877    4352 command_runner.go:130] ! I0501 04:07:39.602084       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.031877    4352 command_runner.go:130] ! I0501 04:07:39.602283       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.031877    4352 command_runner.go:130] ! I0501 04:07:39.602300       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.031877    4352 command_runner.go:130] ! I0501 04:07:49.837563       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.032093    4352 command_runner.go:130] ! I0501 04:07:49.837638       1 main.go:227] handling current node
	I0501 04:16:56.032179    4352 command_runner.go:130] ! I0501 04:07:49.837652       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.032179    4352 command_runner.go:130] ! I0501 04:07:49.837660       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.032332    4352 command_runner.go:130] ! I0501 04:07:49.837875       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.032332    4352 command_runner.go:130] ! I0501 04:07:49.837955       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.032332    4352 command_runner.go:130] ! I0501 04:07:59.851818       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.032332    4352 command_runner.go:130] ! I0501 04:07:59.852109       1 main.go:227] handling current node
	I0501 04:16:56.032332    4352 command_runner.go:130] ! I0501 04:07:59.852127       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.032408    4352 command_runner.go:130] ! I0501 04:07:59.852753       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.032408    4352 command_runner.go:130] ! I0501 04:07:59.853129       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.032408    4352 command_runner.go:130] ! I0501 04:07:59.853164       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.032408    4352 command_runner.go:130] ! I0501 04:08:09.860338       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.032408    4352 command_runner.go:130] ! I0501 04:08:09.860453       1 main.go:227] handling current node
	I0501 04:16:56.032475    4352 command_runner.go:130] ! I0501 04:08:09.860472       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.032475    4352 command_runner.go:130] ! I0501 04:08:09.860482       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.032475    4352 command_runner.go:130] ! I0501 04:08:09.860626       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.032475    4352 command_runner.go:130] ! I0501 04:08:09.861316       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.032475    4352 command_runner.go:130] ! I0501 04:08:19.877403       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.032542    4352 command_runner.go:130] ! I0501 04:08:19.877515       1 main.go:227] handling current node
	I0501 04:16:56.032542    4352 command_runner.go:130] ! I0501 04:08:19.877530       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.032542    4352 command_runner.go:130] ! I0501 04:08:19.877538       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.032542    4352 command_runner.go:130] ! I0501 04:08:19.877838       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.032542    4352 command_runner.go:130] ! I0501 04:08:19.877874       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.032610    4352 command_runner.go:130] ! I0501 04:08:29.892899       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.032610    4352 command_runner.go:130] ! I0501 04:08:29.892926       1 main.go:227] handling current node
	I0501 04:16:56.032610    4352 command_runner.go:130] ! I0501 04:08:29.892937       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.032610    4352 command_runner.go:130] ! I0501 04:08:29.892944       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.032610    4352 command_runner.go:130] ! I0501 04:08:29.893106       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.032695    4352 command_runner.go:130] ! I0501 04:08:29.893180       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.032695    4352 command_runner.go:130] ! I0501 04:08:39.901877       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.032695    4352 command_runner.go:130] ! I0501 04:08:39.901929       1 main.go:227] handling current node
	I0501 04:16:56.032695    4352 command_runner.go:130] ! I0501 04:08:39.901943       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.032763    4352 command_runner.go:130] ! I0501 04:08:39.901951       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.032763    4352 command_runner.go:130] ! I0501 04:08:39.902578       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.032763    4352 command_runner.go:130] ! I0501 04:08:39.902678       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.032763    4352 command_runner.go:130] ! I0501 04:08:49.918941       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.032763    4352 command_runner.go:130] ! I0501 04:08:49.919115       1 main.go:227] handling current node
	I0501 04:16:56.032829    4352 command_runner.go:130] ! I0501 04:08:49.919130       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.032829    4352 command_runner.go:130] ! I0501 04:08:49.919139       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.032829    4352 command_runner.go:130] ! I0501 04:08:49.919950       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.032829    4352 command_runner.go:130] ! I0501 04:08:49.919968       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.032906    4352 command_runner.go:130] ! I0501 04:08:59.933101       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.032906    4352 command_runner.go:130] ! I0501 04:08:59.933154       1 main.go:227] handling current node
	I0501 04:16:56.032906    4352 command_runner.go:130] ! I0501 04:08:59.933648       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.032906    4352 command_runner.go:130] ! I0501 04:08:59.933667       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.032973    4352 command_runner.go:130] ! I0501 04:08:59.934094       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.032973    4352 command_runner.go:130] ! I0501 04:08:59.934127       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.032973    4352 command_runner.go:130] ! I0501 04:09:09.948569       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.032973    4352 command_runner.go:130] ! I0501 04:09:09.948615       1 main.go:227] handling current node
	I0501 04:16:56.033034    4352 command_runner.go:130] ! I0501 04:09:09.948629       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.033034    4352 command_runner.go:130] ! I0501 04:09:09.948637       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.033034    4352 command_runner.go:130] ! I0501 04:09:09.949057       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.033034    4352 command_runner.go:130] ! I0501 04:09:09.949076       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.033034    4352 command_runner.go:130] ! I0501 04:09:19.958099       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.033096    4352 command_runner.go:130] ! I0501 04:09:19.958261       1 main.go:227] handling current node
	I0501 04:16:56.033096    4352 command_runner.go:130] ! I0501 04:09:19.958282       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.033096    4352 command_runner.go:130] ! I0501 04:09:19.958294       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.033146    4352 command_runner.go:130] ! I0501 04:09:19.958880       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.033146    4352 command_runner.go:130] ! I0501 04:09:19.959055       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.033146    4352 command_runner.go:130] ! I0501 04:09:29.975626       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.033146    4352 command_runner.go:130] ! I0501 04:09:29.975765       1 main.go:227] handling current node
	I0501 04:16:56.033201    4352 command_runner.go:130] ! I0501 04:09:29.975790       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.033201    4352 command_runner.go:130] ! I0501 04:09:29.975803       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.033201    4352 command_runner.go:130] ! I0501 04:09:29.976360       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.033201    4352 command_runner.go:130] ! I0501 04:09:29.976488       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.033201    4352 command_runner.go:130] ! I0501 04:09:39.985296       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.033267    4352 command_runner.go:130] ! I0501 04:09:39.985455       1 main.go:227] handling current node
	I0501 04:16:56.033267    4352 command_runner.go:130] ! I0501 04:09:39.985488       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.033267    4352 command_runner.go:130] ! I0501 04:09:39.985497       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.033267    4352 command_runner.go:130] ! I0501 04:09:39.986552       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.033267    4352 command_runner.go:130] ! I0501 04:09:39.986590       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.033337    4352 command_runner.go:130] ! I0501 04:09:49.995944       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.033337    4352 command_runner.go:130] ! I0501 04:09:49.996021       1 main.go:227] handling current node
	I0501 04:16:56.033337    4352 command_runner.go:130] ! I0501 04:09:49.996036       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.033337    4352 command_runner.go:130] ! I0501 04:09:49.996044       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.033337    4352 command_runner.go:130] ! I0501 04:09:49.996649       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.033337    4352 command_runner.go:130] ! I0501 04:09:49.996720       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.033337    4352 command_runner.go:130] ! I0501 04:10:00.003190       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.033337    4352 command_runner.go:130] ! I0501 04:10:00.003239       1 main.go:227] handling current node
	I0501 04:16:56.033337    4352 command_runner.go:130] ! I0501 04:10:00.003253       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.033337    4352 command_runner.go:130] ! I0501 04:10:00.003261       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.033337    4352 command_runner.go:130] ! I0501 04:10:00.003479       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.033483    4352 command_runner.go:130] ! I0501 04:10:00.003516       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.033483    4352 command_runner.go:130] ! I0501 04:10:10.023328       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.033483    4352 command_runner.go:130] ! I0501 04:10:10.023430       1 main.go:227] handling current node
	I0501 04:16:56.033483    4352 command_runner.go:130] ! I0501 04:10:10.023445       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.033483    4352 command_runner.go:130] ! I0501 04:10:10.023460       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.033548    4352 command_runner.go:130] ! I0501 04:10:10.023613       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.033548    4352 command_runner.go:130] ! I0501 04:10:10.023647       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.033548    4352 command_runner.go:130] ! I0501 04:10:20.030526       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.033548    4352 command_runner.go:130] ! I0501 04:10:20.030616       1 main.go:227] handling current node
	I0501 04:16:56.033548    4352 command_runner.go:130] ! I0501 04:10:20.030632       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.033548    4352 command_runner.go:130] ! I0501 04:10:20.030641       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.033548    4352 command_runner.go:130] ! I0501 04:10:20.030856       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.033548    4352 command_runner.go:130] ! I0501 04:10:20.030980       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.033548    4352 command_runner.go:130] ! I0501 04:10:30.038164       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.033548    4352 command_runner.go:130] ! I0501 04:10:30.038263       1 main.go:227] handling current node
	I0501 04:16:56.033548    4352 command_runner.go:130] ! I0501 04:10:30.038278       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.033548    4352 command_runner.go:130] ! I0501 04:10:30.038287       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.033548    4352 command_runner.go:130] ! I0501 04:10:30.038931       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.033684    4352 command_runner.go:130] ! I0501 04:10:30.039072       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.033684    4352 command_runner.go:130] ! I0501 04:10:40.053866       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.033684    4352 command_runner.go:130] ! I0501 04:10:40.053915       1 main.go:227] handling current node
	I0501 04:16:56.033684    4352 command_runner.go:130] ! I0501 04:10:40.053929       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.033757    4352 command_runner.go:130] ! I0501 04:10:40.053936       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.033757    4352 command_runner.go:130] ! I0501 04:10:40.054259       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.033757    4352 command_runner.go:130] ! I0501 04:10:40.054295       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.033757    4352 command_runner.go:130] ! I0501 04:10:50.066490       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.033757    4352 command_runner.go:130] ! I0501 04:10:50.066542       1 main.go:227] handling current node
	I0501 04:16:56.033757    4352 command_runner.go:130] ! I0501 04:10:50.066560       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.033757    4352 command_runner.go:130] ! I0501 04:10:50.066567       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.033757    4352 command_runner.go:130] ! I0501 04:10:50.067066       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:16:56.033757    4352 command_runner.go:130] ! I0501 04:10:50.067210       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:16:56.033757    4352 command_runner.go:130] ! I0501 04:11:00.075901       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.033757    4352 command_runner.go:130] ! I0501 04:11:00.076052       1 main.go:227] handling current node
	I0501 04:16:56.033914    4352 command_runner.go:130] ! I0501 04:11:00.076069       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.033914    4352 command_runner.go:130] ! I0501 04:11:00.076078       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.033956    4352 command_runner.go:130] ! I0501 04:11:10.087907       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.033956    4352 command_runner.go:130] ! I0501 04:11:10.088124       1 main.go:227] handling current node
	I0501 04:16:56.033956    4352 command_runner.go:130] ! I0501 04:11:10.088140       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.033997    4352 command_runner.go:130] ! I0501 04:11:10.088148       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.033997    4352 command_runner.go:130] ! I0501 04:11:10.088875       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:56.033997    4352 command_runner.go:130] ! I0501 04:11:10.088954       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:56.034047    4352 command_runner.go:130] ! I0501 04:11:10.089178       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.28.223.145 Flags: [] Table: 0} 
	I0501 04:16:56.034047    4352 command_runner.go:130] ! I0501 04:11:20.103399       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.034047    4352 command_runner.go:130] ! I0501 04:11:20.103511       1 main.go:227] handling current node
	I0501 04:16:56.034047    4352 command_runner.go:130] ! I0501 04:11:20.103528       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.034047    4352 command_runner.go:130] ! I0501 04:11:20.103538       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.034047    4352 command_runner.go:130] ! I0501 04:11:20.103879       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:56.034047    4352 command_runner.go:130] ! I0501 04:11:20.103916       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:56.034047    4352 command_runner.go:130] ! I0501 04:11:30.114473       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.034047    4352 command_runner.go:130] ! I0501 04:11:30.115083       1 main.go:227] handling current node
	I0501 04:16:56.034174    4352 command_runner.go:130] ! I0501 04:11:30.115256       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.034174    4352 command_runner.go:130] ! I0501 04:11:30.115463       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.034174    4352 command_runner.go:130] ! I0501 04:11:30.116474       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:56.034174    4352 command_runner.go:130] ! I0501 04:11:30.116611       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:56.034174    4352 command_runner.go:130] ! I0501 04:11:40.124324       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.034250    4352 command_runner.go:130] ! I0501 04:11:40.124371       1 main.go:227] handling current node
	I0501 04:16:56.034250    4352 command_runner.go:130] ! I0501 04:11:40.124384       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.034250    4352 command_runner.go:130] ! I0501 04:11:40.124392       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.034250    4352 command_runner.go:130] ! I0501 04:11:40.124558       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:56.034250    4352 command_runner.go:130] ! I0501 04:11:40.124570       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:56.034317    4352 command_runner.go:130] ! I0501 04:11:50.138059       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.034317    4352 command_runner.go:130] ! I0501 04:11:50.138102       1 main.go:227] handling current node
	I0501 04:16:56.034317    4352 command_runner.go:130] ! I0501 04:11:50.138116       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.034384    4352 command_runner.go:130] ! I0501 04:11:50.138123       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.034384    4352 command_runner.go:130] ! I0501 04:11:50.138826       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:56.034384    4352 command_runner.go:130] ! I0501 04:11:50.138936       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:56.034384    4352 command_runner.go:130] ! I0501 04:12:00.155704       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.034445    4352 command_runner.go:130] ! I0501 04:12:00.155799       1 main.go:227] handling current node
	I0501 04:16:56.034445    4352 command_runner.go:130] ! I0501 04:12:00.155823       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.034445    4352 command_runner.go:130] ! I0501 04:12:00.155832       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.034496    4352 command_runner.go:130] ! I0501 04:12:00.156502       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:56.034496    4352 command_runner.go:130] ! I0501 04:12:00.156549       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:56.034496    4352 command_runner.go:130] ! I0501 04:12:10.164706       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.034496    4352 command_runner.go:130] ! I0501 04:12:10.164754       1 main.go:227] handling current node
	I0501 04:16:56.034496    4352 command_runner.go:130] ! I0501 04:12:10.164767       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.034496    4352 command_runner.go:130] ! I0501 04:12:10.164774       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.034564    4352 command_runner.go:130] ! I0501 04:12:10.164887       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:56.034564    4352 command_runner.go:130] ! I0501 04:12:10.165094       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:56.034564    4352 command_runner.go:130] ! I0501 04:12:20.178957       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.034625    4352 command_runner.go:130] ! I0501 04:12:20.179142       1 main.go:227] handling current node
	I0501 04:16:56.034625    4352 command_runner.go:130] ! I0501 04:12:20.179159       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.034625    4352 command_runner.go:130] ! I0501 04:12:20.179178       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.034714    4352 command_runner.go:130] ! I0501 04:12:20.179694       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:56.034762    4352 command_runner.go:130] ! I0501 04:12:20.179871       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:56.034762    4352 command_runner.go:130] ! I0501 04:12:30.195829       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.034762    4352 command_runner.go:130] ! I0501 04:12:30.196251       1 main.go:227] handling current node
	I0501 04:16:56.034804    4352 command_runner.go:130] ! I0501 04:12:30.196390       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.034804    4352 command_runner.go:130] ! I0501 04:12:30.196494       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.034804    4352 command_runner.go:130] ! I0501 04:12:30.197097       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:56.034858    4352 command_runner.go:130] ! I0501 04:12:30.197115       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:56.034858    4352 command_runner.go:130] ! I0501 04:12:40.209828       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.034858    4352 command_runner.go:130] ! I0501 04:12:40.210095       1 main.go:227] handling current node
	I0501 04:16:56.034900    4352 command_runner.go:130] ! I0501 04:12:40.210203       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.034900    4352 command_runner.go:130] ! I0501 04:12:40.210235       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.034900    4352 command_runner.go:130] ! I0501 04:12:40.210464       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:56.034900    4352 command_runner.go:130] ! I0501 04:12:40.210571       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:56.034954    4352 command_runner.go:130] ! I0501 04:12:50.223457       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.034954    4352 command_runner.go:130] ! I0501 04:12:50.224132       1 main.go:227] handling current node
	I0501 04:16:56.034954    4352 command_runner.go:130] ! I0501 04:12:50.224156       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.034954    4352 command_runner.go:130] ! I0501 04:12:50.224167       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.034954    4352 command_runner.go:130] ! I0501 04:12:50.224602       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:56.035005    4352 command_runner.go:130] ! I0501 04:12:50.224704       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:56.035005    4352 command_runner.go:130] ! I0501 04:13:00.241709       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:16:56.035005    4352 command_runner.go:130] ! I0501 04:13:00.241841       1 main.go:227] handling current node
	I0501 04:16:56.035040    4352 command_runner.go:130] ! I0501 04:13:00.242114       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:56.035040    4352 command_runner.go:130] ! I0501 04:13:00.242393       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:56.035040    4352 command_runner.go:130] ! I0501 04:13:00.242840       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:56.035040    4352 command_runner.go:130] ! I0501 04:13:00.242886       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:56.057018    4352 logs.go:123] Gathering logs for describe nodes ...
	I0501 04:16:56.057018    4352 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0501 04:16:56.308494    4352 command_runner.go:130] > Name:               multinode-289800
	I0501 04:16:56.308494    4352 command_runner.go:130] > Roles:              control-plane
	I0501 04:16:56.308494    4352 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0501 04:16:56.308494    4352 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0501 04:16:56.308494    4352 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0501 04:16:56.308494    4352 command_runner.go:130] >                     kubernetes.io/hostname=multinode-289800
	I0501 04:16:56.308494    4352 command_runner.go:130] >                     kubernetes.io/os=linux
	I0501 04:16:56.308494    4352 command_runner.go:130] >                     minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	I0501 04:16:56.308494    4352 command_runner.go:130] >                     minikube.k8s.io/name=multinode-289800
	I0501 04:16:56.308494    4352 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0501 04:16:56.308494    4352 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_05_01T03_52_17_0700
	I0501 04:16:56.308494    4352 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0
	I0501 04:16:56.308494    4352 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0501 04:16:56.308494    4352 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0501 04:16:56.308494    4352 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0501 04:16:56.308494    4352 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0501 04:16:56.308494    4352 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0501 04:16:56.308494    4352 command_runner.go:130] > CreationTimestamp:  Wed, 01 May 2024 03:52:12 +0000
	I0501 04:16:56.308494    4352 command_runner.go:130] > Taints:             <none>
	I0501 04:16:56.308494    4352 command_runner.go:130] > Unschedulable:      false
	I0501 04:16:56.308494    4352 command_runner.go:130] > Lease:
	I0501 04:16:56.308494    4352 command_runner.go:130] >   HolderIdentity:  multinode-289800
	I0501 04:16:56.308494    4352 command_runner.go:130] >   AcquireTime:     <unset>
	I0501 04:16:56.308494    4352 command_runner.go:130] >   RenewTime:       Wed, 01 May 2024 04:16:53 +0000
	I0501 04:16:56.308494    4352 command_runner.go:130] > Conditions:
	I0501 04:16:56.308494    4352 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0501 04:16:56.308494    4352 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0501 04:16:56.308494    4352 command_runner.go:130] >   MemoryPressure   False   Wed, 01 May 2024 04:16:16 +0000   Wed, 01 May 2024 03:52:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0501 04:16:56.308494    4352 command_runner.go:130] >   DiskPressure     False   Wed, 01 May 2024 04:16:16 +0000   Wed, 01 May 2024 03:52:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0501 04:16:56.308494    4352 command_runner.go:130] >   PIDPressure      False   Wed, 01 May 2024 04:16:16 +0000   Wed, 01 May 2024 03:52:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0501 04:16:56.308494    4352 command_runner.go:130] >   Ready            True    Wed, 01 May 2024 04:16:16 +0000   Wed, 01 May 2024 04:16:16 +0000   KubeletReady                 kubelet is posting ready status
	I0501 04:16:56.308494    4352 command_runner.go:130] > Addresses:
	I0501 04:16:56.308494    4352 command_runner.go:130] >   InternalIP:  172.28.209.199
	I0501 04:16:56.308494    4352 command_runner.go:130] >   Hostname:    multinode-289800
	I0501 04:16:56.308494    4352 command_runner.go:130] > Capacity:
	I0501 04:16:56.309052    4352 command_runner.go:130] >   cpu:                2
	I0501 04:16:56.309052    4352 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0501 04:16:56.309052    4352 command_runner.go:130] >   hugepages-2Mi:      0
	I0501 04:16:56.309100    4352 command_runner.go:130] >   memory:             2164264Ki
	I0501 04:16:56.309100    4352 command_runner.go:130] >   pods:               110
	I0501 04:16:56.309100    4352 command_runner.go:130] > Allocatable:
	I0501 04:16:56.309100    4352 command_runner.go:130] >   cpu:                2
	I0501 04:16:56.309100    4352 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0501 04:16:56.309158    4352 command_runner.go:130] >   hugepages-2Mi:      0
	I0501 04:16:56.309158    4352 command_runner.go:130] >   memory:             2164264Ki
	I0501 04:16:56.309158    4352 command_runner.go:130] >   pods:               110
	I0501 04:16:56.309158    4352 command_runner.go:130] > System Info:
	I0501 04:16:56.309200    4352 command_runner.go:130] >   Machine ID:                 f135d6c1a75448b6b1c169fdf59297ca
	I0501 04:16:56.309230    4352 command_runner.go:130] >   System UUID:                3951d3b5-ddd4-174a-8cfe-7f86ac2b780b
	I0501 04:16:56.309245    4352 command_runner.go:130] >   Boot ID:                    e7d6b770-0c88-4d74-8b75-d55dec0d45be
	I0501 04:16:56.309245    4352 command_runner.go:130] >   Kernel Version:             5.10.207
	I0501 04:16:56.309271    4352 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0501 04:16:56.309271    4352 command_runner.go:130] >   Operating System:           linux
	I0501 04:16:56.309300    4352 command_runner.go:130] >   Architecture:               amd64
	I0501 04:16:56.309300    4352 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0501 04:16:56.309300    4352 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0501 04:16:56.309300    4352 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0501 04:16:56.309347    4352 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0501 04:16:56.309347    4352 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0501 04:16:56.309347    4352 command_runner.go:130] > Non-terminated Pods:          (10 in total)
	I0501 04:16:56.309403    4352 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0501 04:16:56.309403    4352 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0501 04:16:56.309445    4352 command_runner.go:130] >   default                     busybox-fc5497c4f-cc6mk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0501 04:16:56.309445    4352 command_runner.go:130] >   kube-system                 coredns-7db6d8ff4d-8w9hq                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	I0501 04:16:56.309484    4352 command_runner.go:130] >   kube-system                 coredns-7db6d8ff4d-x9zrw                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	I0501 04:16:56.309525    4352 command_runner.go:130] >   kube-system                 etcd-multinode-289800                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         74s
	I0501 04:16:56.309525    4352 command_runner.go:130] >   kube-system                 kindnet-vcxkr                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	I0501 04:16:56.309564    4352 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-289800             250m (12%)    0 (0%)      0 (0%)           0 (0%)         74s
	I0501 04:16:56.309605    4352 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-289800    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I0501 04:16:56.309605    4352 command_runner.go:130] >   kube-system                 kube-proxy-bp9zx                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I0501 04:16:56.309643    4352 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-289800             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I0501 04:16:56.309643    4352 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I0501 04:16:56.309643    4352 command_runner.go:130] > Allocated resources:
	I0501 04:16:56.309686    4352 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0501 04:16:56.309686    4352 command_runner.go:130] >   Resource           Requests     Limits
	I0501 04:16:56.309686    4352 command_runner.go:130] >   --------           --------     ------
	I0501 04:16:56.309725    4352 command_runner.go:130] >   cpu                950m (47%)   100m (5%)
	I0501 04:16:56.309725    4352 command_runner.go:130] >   memory             290Mi (13%)  390Mi (18%)
	I0501 04:16:56.309725    4352 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0501 04:16:56.309725    4352 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0501 04:16:56.309725    4352 command_runner.go:130] > Events:
	I0501 04:16:56.309775    4352 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0501 04:16:56.309775    4352 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0501 04:16:56.309815    4352 command_runner.go:130] >   Normal  Starting                 24m                kube-proxy       
	I0501 04:16:56.309815    4352 command_runner.go:130] >   Normal  Starting                 70s                kube-proxy       
	I0501 04:16:56.309815    4352 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m                kubelet          Node multinode-289800 status is now: NodeHasSufficientMemory
	I0501 04:16:56.309856    4352 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m                kubelet          Node multinode-289800 status is now: NodeHasSufficientMemory
	I0501 04:16:56.309856    4352 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m                kubelet          Node multinode-289800 status is now: NodeHasNoDiskPressure
	I0501 04:16:56.309895    4352 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m                kubelet          Node multinode-289800 status is now: NodeHasSufficientPID
	I0501 04:16:56.309895    4352 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0501 04:16:56.309936    4352 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I0501 04:16:56.309936    4352 command_runner.go:130] >   Normal  RegisteredNode           24m                node-controller  Node multinode-289800 event: Registered Node multinode-289800 in Controller
	I0501 04:16:56.309974    4352 command_runner.go:130] >   Normal  NodeReady                24m                kubelet          Node multinode-289800 status is now: NodeReady
	I0501 04:16:56.309974    4352 command_runner.go:130] >   Normal  Starting                 80s                kubelet          Starting kubelet.
	I0501 04:16:56.309974    4352 command_runner.go:130] >   Normal  NodeHasSufficientMemory  79s (x8 over 80s)  kubelet          Node multinode-289800 status is now: NodeHasSufficientMemory
	I0501 04:16:56.310024    4352 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    79s (x8 over 80s)  kubelet          Node multinode-289800 status is now: NodeHasNoDiskPressure
	I0501 04:16:56.310083    4352 command_runner.go:130] >   Normal  NodeHasSufficientPID     79s (x7 over 80s)  kubelet          Node multinode-289800 status is now: NodeHasSufficientPID
	I0501 04:16:56.310083    4352 command_runner.go:130] >   Normal  NodeAllocatableEnforced  79s                kubelet          Updated Node Allocatable limit across pods
	I0501 04:16:56.310083    4352 command_runner.go:130] >   Normal  RegisteredNode           61s                node-controller  Node multinode-289800 event: Registered Node multinode-289800 in Controller
	I0501 04:16:56.310127    4352 command_runner.go:130] > Name:               multinode-289800-m02
	I0501 04:16:56.310127    4352 command_runner.go:130] > Roles:              <none>
	I0501 04:16:56.310127    4352 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0501 04:16:56.310127    4352 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0501 04:16:56.310168    4352 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0501 04:16:56.310168    4352 command_runner.go:130] >                     kubernetes.io/hostname=multinode-289800-m02
	I0501 04:16:56.310212    4352 command_runner.go:130] >                     kubernetes.io/os=linux
	I0501 04:16:56.310212    4352 command_runner.go:130] >                     minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	I0501 04:16:56.310253    4352 command_runner.go:130] >                     minikube.k8s.io/name=multinode-289800
	I0501 04:16:56.310253    4352 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0501 04:16:56.310253    4352 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_05_01T03_55_27_0700
	I0501 04:16:56.310297    4352 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0
	I0501 04:16:56.310297    4352 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0501 04:16:56.310339    4352 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0501 04:16:56.310339    4352 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0501 04:16:56.310339    4352 command_runner.go:130] > CreationTimestamp:  Wed, 01 May 2024 03:55:27 +0000
	I0501 04:16:56.310396    4352 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0501 04:16:56.310396    4352 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0501 04:16:56.310437    4352 command_runner.go:130] > Unschedulable:      false
	I0501 04:16:56.310437    4352 command_runner.go:130] > Lease:
	I0501 04:16:56.310437    4352 command_runner.go:130] >   HolderIdentity:  multinode-289800-m02
	I0501 04:16:56.310480    4352 command_runner.go:130] >   AcquireTime:     <unset>
	I0501 04:16:56.310480    4352 command_runner.go:130] >   RenewTime:       Wed, 01 May 2024 04:12:29 +0000
	I0501 04:16:56.310480    4352 command_runner.go:130] > Conditions:
	I0501 04:16:56.310480    4352 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0501 04:16:56.310520    4352 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0501 04:16:56.310520    4352 command_runner.go:130] >   MemoryPressure   Unknown   Wed, 01 May 2024 04:11:48 +0000   Wed, 01 May 2024 04:16:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0501 04:16:56.310571    4352 command_runner.go:130] >   DiskPressure     Unknown   Wed, 01 May 2024 04:11:48 +0000   Wed, 01 May 2024 04:16:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0501 04:16:56.310604    4352 command_runner.go:130] >   PIDPressure      Unknown   Wed, 01 May 2024 04:11:48 +0000   Wed, 01 May 2024 04:16:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0501 04:16:56.310604    4352 command_runner.go:130] >   Ready            Unknown   Wed, 01 May 2024 04:11:48 +0000   Wed, 01 May 2024 04:16:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0501 04:16:56.310648    4352 command_runner.go:130] > Addresses:
	I0501 04:16:56.310648    4352 command_runner.go:130] >   InternalIP:  172.28.219.162
	I0501 04:16:56.310648    4352 command_runner.go:130] >   Hostname:    multinode-289800-m02
	I0501 04:16:56.310648    4352 command_runner.go:130] > Capacity:
	I0501 04:16:56.310688    4352 command_runner.go:130] >   cpu:                2
	I0501 04:16:56.310688    4352 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0501 04:16:56.310688    4352 command_runner.go:130] >   hugepages-2Mi:      0
	I0501 04:16:56.310688    4352 command_runner.go:130] >   memory:             2164264Ki
	I0501 04:16:56.310688    4352 command_runner.go:130] >   pods:               110
	I0501 04:16:56.310740    4352 command_runner.go:130] > Allocatable:
	I0501 04:16:56.310740    4352 command_runner.go:130] >   cpu:                2
	I0501 04:16:56.310740    4352 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0501 04:16:56.310788    4352 command_runner.go:130] >   hugepages-2Mi:      0
	I0501 04:16:56.310788    4352 command_runner.go:130] >   memory:             2164264Ki
	I0501 04:16:56.310816    4352 command_runner.go:130] >   pods:               110
	I0501 04:16:56.310816    4352 command_runner.go:130] > System Info:
	I0501 04:16:56.310816    4352 command_runner.go:130] >   Machine ID:                 076f7b95819747b9b94c7306ec3a1144
	I0501 04:16:56.310816    4352 command_runner.go:130] >   System UUID:                a38b9d92-b32b-ca41-91ed-de4d374d0e70
	I0501 04:16:56.310816    4352 command_runner.go:130] >   Boot ID:                    c2ea27f4-2800-46b2-ab1f-c82bf0989c34
	I0501 04:16:56.310816    4352 command_runner.go:130] >   Kernel Version:             5.10.207
	I0501 04:16:56.310816    4352 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0501 04:16:56.310816    4352 command_runner.go:130] >   Operating System:           linux
	I0501 04:16:56.310816    4352 command_runner.go:130] >   Architecture:               amd64
	I0501 04:16:56.310816    4352 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0501 04:16:56.310816    4352 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0501 04:16:56.310816    4352 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0501 04:16:56.310816    4352 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0501 04:16:56.310816    4352 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0501 04:16:56.310816    4352 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0501 04:16:56.310816    4352 command_runner.go:130] >   Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0501 04:16:56.310816    4352 command_runner.go:130] >   ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	I0501 04:16:56.310816    4352 command_runner.go:130] >   default                     busybox-fc5497c4f-tbxxx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0501 04:16:56.310816    4352 command_runner.go:130] >   kube-system                 kindnet-gzz7p              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	I0501 04:16:56.310816    4352 command_runner.go:130] >   kube-system                 kube-proxy-rlzp8           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0501 04:16:56.310816    4352 command_runner.go:130] > Allocated resources:
	I0501 04:16:56.310816    4352 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0501 04:16:56.310816    4352 command_runner.go:130] >   Resource           Requests   Limits
	I0501 04:16:56.310816    4352 command_runner.go:130] >   --------           --------   ------
	I0501 04:16:56.310816    4352 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0501 04:16:56.310816    4352 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0501 04:16:56.310816    4352 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0501 04:16:56.310816    4352 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0501 04:16:56.310816    4352 command_runner.go:130] > Events:
	I0501 04:16:56.310816    4352 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0501 04:16:56.310816    4352 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0501 04:16:56.310816    4352 command_runner.go:130] >   Normal  Starting                 21m                kube-proxy       
	I0501 04:16:56.310816    4352 command_runner.go:130] >   Normal  NodeHasSufficientMemory  21m (x2 over 21m)  kubelet          Node multinode-289800-m02 status is now: NodeHasSufficientMemory
	I0501 04:16:56.310816    4352 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    21m (x2 over 21m)  kubelet          Node multinode-289800-m02 status is now: NodeHasNoDiskPressure
	I0501 04:16:56.310816    4352 command_runner.go:130] >   Normal  NodeHasSufficientPID     21m (x2 over 21m)  kubelet          Node multinode-289800-m02 status is now: NodeHasSufficientPID
	I0501 04:16:56.310816    4352 command_runner.go:130] >   Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	I0501 04:16:56.310816    4352 command_runner.go:130] >   Normal  RegisteredNode           21m                node-controller  Node multinode-289800-m02 event: Registered Node multinode-289800-m02 in Controller
	I0501 04:16:56.310816    4352 command_runner.go:130] >   Normal  NodeReady                21m                kubelet          Node multinode-289800-m02 status is now: NodeReady
	I0501 04:16:56.310816    4352 command_runner.go:130] >   Normal  RegisteredNode           61s                node-controller  Node multinode-289800-m02 event: Registered Node multinode-289800-m02 in Controller
	I0501 04:16:56.310816    4352 command_runner.go:130] >   Normal  NodeNotReady             21s                node-controller  Node multinode-289800-m02 status is now: NodeNotReady
	I0501 04:16:56.310816    4352 command_runner.go:130] > Name:               multinode-289800-m03
	I0501 04:16:56.310816    4352 command_runner.go:130] > Roles:              <none>
	I0501 04:16:56.310816    4352 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0501 04:16:56.310816    4352 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0501 04:16:56.310816    4352 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0501 04:16:56.310816    4352 command_runner.go:130] >                     kubernetes.io/hostname=multinode-289800-m03
	I0501 04:16:56.310816    4352 command_runner.go:130] >                     kubernetes.io/os=linux
	I0501 04:16:56.310816    4352 command_runner.go:130] >                     minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	I0501 04:16:56.310816    4352 command_runner.go:130] >                     minikube.k8s.io/name=multinode-289800
	I0501 04:16:56.311393    4352 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0501 04:16:56.311393    4352 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_05_01T04_11_04_0700
	I0501 04:16:56.311393    4352 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0
	I0501 04:16:56.311443    4352 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0501 04:16:56.311443    4352 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0501 04:16:56.311443    4352 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0501 04:16:56.311482    4352 command_runner.go:130] > CreationTimestamp:  Wed, 01 May 2024 04:11:04 +0000
	I0501 04:16:56.311482    4352 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0501 04:16:56.311482    4352 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0501 04:16:56.311515    4352 command_runner.go:130] > Unschedulable:      false
	I0501 04:16:56.311515    4352 command_runner.go:130] > Lease:
	I0501 04:16:56.311515    4352 command_runner.go:130] >   HolderIdentity:  multinode-289800-m03
	I0501 04:16:56.311515    4352 command_runner.go:130] >   AcquireTime:     <unset>
	I0501 04:16:56.311568    4352 command_runner.go:130] >   RenewTime:       Wed, 01 May 2024 04:12:05 +0000
	I0501 04:16:56.311568    4352 command_runner.go:130] > Conditions:
	I0501 04:16:56.311568    4352 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0501 04:16:56.311610    4352 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0501 04:16:56.311610    4352 command_runner.go:130] >   MemoryPressure   Unknown   Wed, 01 May 2024 04:11:11 +0000   Wed, 01 May 2024 04:12:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0501 04:16:56.311610    4352 command_runner.go:130] >   DiskPressure     Unknown   Wed, 01 May 2024 04:11:11 +0000   Wed, 01 May 2024 04:12:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0501 04:16:56.311610    4352 command_runner.go:130] >   PIDPressure      Unknown   Wed, 01 May 2024 04:11:11 +0000   Wed, 01 May 2024 04:12:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0501 04:16:56.311610    4352 command_runner.go:130] >   Ready            Unknown   Wed, 01 May 2024 04:11:11 +0000   Wed, 01 May 2024 04:12:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0501 04:16:56.311610    4352 command_runner.go:130] > Addresses:
	I0501 04:16:56.311610    4352 command_runner.go:130] >   InternalIP:  172.28.223.145
	I0501 04:16:56.311610    4352 command_runner.go:130] >   Hostname:    multinode-289800-m03
	I0501 04:16:56.311610    4352 command_runner.go:130] > Capacity:
	I0501 04:16:56.311610    4352 command_runner.go:130] >   cpu:                2
	I0501 04:16:56.311610    4352 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0501 04:16:56.311610    4352 command_runner.go:130] >   hugepages-2Mi:      0
	I0501 04:16:56.311610    4352 command_runner.go:130] >   memory:             2164264Ki
	I0501 04:16:56.311610    4352 command_runner.go:130] >   pods:               110
	I0501 04:16:56.311610    4352 command_runner.go:130] > Allocatable:
	I0501 04:16:56.311610    4352 command_runner.go:130] >   cpu:                2
	I0501 04:16:56.311610    4352 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0501 04:16:56.311610    4352 command_runner.go:130] >   hugepages-2Mi:      0
	I0501 04:16:56.311610    4352 command_runner.go:130] >   memory:             2164264Ki
	I0501 04:16:56.311610    4352 command_runner.go:130] >   pods:               110
	I0501 04:16:56.311610    4352 command_runner.go:130] > System Info:
	I0501 04:16:56.311610    4352 command_runner.go:130] >   Machine ID:                 7516764892cf41608a001e00e0cc7bb8
	I0501 04:16:56.311610    4352 command_runner.go:130] >   System UUID:                dc77ee49-027d-ec48-b8b1-154ba9e0a06a
	I0501 04:16:56.311610    4352 command_runner.go:130] >   Boot ID:                    bc9f9fd7-7b85-42f6-abac-952a5e1b37b8
	I0501 04:16:56.311610    4352 command_runner.go:130] >   Kernel Version:             5.10.207
	I0501 04:16:56.311610    4352 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0501 04:16:56.311610    4352 command_runner.go:130] >   Operating System:           linux
	I0501 04:16:56.311610    4352 command_runner.go:130] >   Architecture:               amd64
	I0501 04:16:56.311610    4352 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0501 04:16:56.311610    4352 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0501 04:16:56.311610    4352 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0501 04:16:56.311610    4352 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0501 04:16:56.311610    4352 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0501 04:16:56.311610    4352 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0501 04:16:56.311610    4352 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0501 04:16:56.311610    4352 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0501 04:16:56.312200    4352 command_runner.go:130] >   kube-system                 kindnet-4m5vg       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I0501 04:16:56.312200    4352 command_runner.go:130] >   kube-system                 kube-proxy-g8mbm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I0501 04:16:56.312251    4352 command_runner.go:130] > Allocated resources:
	I0501 04:16:56.312251    4352 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0501 04:16:56.312251    4352 command_runner.go:130] >   Resource           Requests   Limits
	I0501 04:16:56.312251    4352 command_runner.go:130] >   --------           --------   ------
	I0501 04:16:56.312251    4352 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0501 04:16:56.312251    4352 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0501 04:16:56.312251    4352 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0501 04:16:56.312329    4352 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0501 04:16:56.312329    4352 command_runner.go:130] > Events:
	I0501 04:16:56.312366    4352 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0501 04:16:56.312389    4352 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0501 04:16:56.312389    4352 command_runner.go:130] >   Normal  Starting                 16m                    kube-proxy       
	I0501 04:16:56.312389    4352 command_runner.go:130] >   Normal  Starting                 5m48s                  kube-proxy       
	I0501 04:16:56.312389    4352 command_runner.go:130] >   Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	I0501 04:16:56.312389    4352 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x2 over 16m)      kubelet          Node multinode-289800-m03 status is now: NodeHasSufficientMemory
	I0501 04:16:56.312389    4352 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x2 over 16m)      kubelet          Node multinode-289800-m03 status is now: NodeHasNoDiskPressure
	I0501 04:16:56.312389    4352 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x2 over 16m)      kubelet          Node multinode-289800-m03 status is now: NodeHasSufficientPID
	I0501 04:16:56.312389    4352 command_runner.go:130] >   Normal  NodeReady                16m                    kubelet          Node multinode-289800-m03 status is now: NodeReady
	I0501 04:16:56.312389    4352 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m52s (x2 over 5m52s)  kubelet          Node multinode-289800-m03 status is now: NodeHasSufficientMemory
	I0501 04:16:56.312389    4352 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m52s (x2 over 5m52s)  kubelet          Node multinode-289800-m03 status is now: NodeHasNoDiskPressure
	I0501 04:16:56.312389    4352 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m52s (x2 over 5m52s)  kubelet          Node multinode-289800-m03 status is now: NodeHasSufficientPID
	I0501 04:16:56.312389    4352 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m52s                  kubelet          Updated Node Allocatable limit across pods
	I0501 04:16:56.312389    4352 command_runner.go:130] >   Normal  RegisteredNode           5m47s                  node-controller  Node multinode-289800-m03 event: Registered Node multinode-289800-m03 in Controller
	I0501 04:16:56.312389    4352 command_runner.go:130] >   Normal  NodeReady                5m45s                  kubelet          Node multinode-289800-m03 status is now: NodeReady
	I0501 04:16:56.312389    4352 command_runner.go:130] >   Normal  NodeNotReady             4m7s                   node-controller  Node multinode-289800-m03 status is now: NodeNotReady
	I0501 04:16:56.312389    4352 command_runner.go:130] >   Normal  RegisteredNode           61s                    node-controller  Node multinode-289800-m03 event: Registered Node multinode-289800-m03 in Controller
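
[Editor's note] Every "Gathering logs for ... / ssh_runner.go:195] Run:" sequence in this report follows one pattern: minikube executes a shell command on the guest and replays its stdout line by line through command_runner.go. A minimal standalone sketch of that pattern in Go (an illustration only, not minikube's actual ssh_runner; it runs the same command strings locally through /bin/bash):

    // gather.go: a hedged sketch of the log-gathering loop recorded above.
    // This is NOT minikube's implementation; it simply executes the same
    // shell commands that appear after "ssh_runner.go:195] Run:" and
    // prefixes each output line with "> ", as command_runner.go does.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func gather(name, cmd string) {
    	fmt.Printf("Gathering logs for %s ...\n", name)
    	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    	if err != nil {
    		fmt.Printf("gather %s: %v\n", name, err)
    	}
    	for _, line := range strings.Split(strings.TrimRight(string(out), "\n"), "\n") {
    		fmt.Println("> " + line)
    	}
    }

    func main() {
    	// Command strings copied verbatim from this report; the container ID
    	// is the one this particular run happened to assign to coredns.
    	gather("describe nodes", "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
    	gather("coredns [b8a9b405d76b]", "docker logs --tail 400 b8a9b405d76b")
    }
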
	I0501 04:16:56.322942    4352 logs.go:123] Gathering logs for coredns [b8a9b405d76b] ...
	I0501 04:16:56.322942    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8a9b405d76b"
	I0501 04:16:56.376703    4352 command_runner.go:130] > .:53
	I0501 04:16:56.376766    4352 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 93f4fe825695a41d5751760d66496ca9593f0bf8b9ec4239a6fbc33c30b6f070cfe5a066c5977052ab0ea80abbafa6083758e385108cb741ae48dc4b780ff0f9
	I0501 04:16:56.376766    4352 command_runner.go:130] > CoreDNS-1.11.1
	I0501 04:16:56.376766    4352 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0501 04:16:56.376828    4352 command_runner.go:130] > [INFO] 127.0.0.1:40469 - 32708 "HINFO IN 1085250392681766432.1461243850492468212. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.135567722s
	I0501 04:16:56.377071    4352 logs.go:123] Gathering logs for coredns [8a0208aeafcf] ...
	I0501 04:16:56.377165    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a0208aeafcf"
	I0501 04:16:56.416710    4352 command_runner.go:130] > .:53
	I0501 04:16:56.416754    4352 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 93f4fe825695a41d5751760d66496ca9593f0bf8b9ec4239a6fbc33c30b6f070cfe5a066c5977052ab0ea80abbafa6083758e385108cb741ae48dc4b780ff0f9
	I0501 04:16:56.416754    4352 command_runner.go:130] > CoreDNS-1.11.1
	I0501 04:16:56.416754    4352 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0501 04:16:56.416754    4352 command_runner.go:130] > [INFO] 127.0.0.1:52159 - 35492 "HINFO IN 5750380281790413371.3552283498234348593. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.042351696s
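
[Editor's note] Two details in the CoreDNS dumps above are worth decoding. The identical "plugin/reload: Running configuration SHA512 = ..." hashes show that both replicas loaded the same Corefile, and the "HINFO IN <random>." query answered with NXDOMAIN is CoreDNS's loop-detection self-probe reporting a loop-free upstream. A small sketch (not part of minikube; names and structure are hypothetical) that scans a log like this one and flags replicas running different Corefiles:

    // checksha.go: reads log text on stdin and checks that every CoreDNS
    // replica reports the same Corefile hash, using the
    // "plugin/reload: Running configuration SHA512 = ..." lines above.
    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"regexp"
    )

    func main() {
    	re := regexp.MustCompile(`Running configuration SHA512 = ([0-9a-f]+)`)
    	hashes := map[string]int{}
    	sc := bufio.NewScanner(os.Stdin)
    	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // log lines can be long
    	for sc.Scan() {
    		if m := re.FindStringSubmatch(sc.Text()); m != nil {
    			hashes[m[1]]++
    		}
    	}
    	if len(hashes) <= 1 {
    		fmt.Println("OK: all CoreDNS replicas report the same configuration hash")
    		return
    	}
    	fmt.Println("MISMATCH: replicas are running different Corefiles:")
    	for h, n := range hashes {
    		fmt.Printf("  %s... seen %d time(s)\n", h[:16], n)
    	}
    }
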
	I0501 04:16:56.417168    4352 logs.go:123] Gathering logs for kube-controller-manager [66a1b89e6733] ...
	I0501 04:16:56.417351    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a1b89e6733"
	I0501 04:16:56.455218    4352 command_runner.go:130] ! I0501 04:15:39.740014       1 serving.go:380] Generated self-signed cert in-memory
	I0501 04:16:56.455874    4352 command_runner.go:130] ! I0501 04:15:40.254324       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0501 04:16:56.455874    4352 command_runner.go:130] ! I0501 04:15:40.254368       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:16:56.456011    4352 command_runner.go:130] ! I0501 04:15:40.263842       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0501 04:16:56.456011    4352 command_runner.go:130] ! I0501 04:15:40.264273       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0501 04:16:56.456011    4352 command_runner.go:130] ! I0501 04:15:40.265102       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0501 04:16:56.456011    4352 command_runner.go:130] ! I0501 04:15:40.265435       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0501 04:16:56.456134    4352 command_runner.go:130] ! I0501 04:15:44.420436       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0501 04:16:56.456134    4352 command_runner.go:130] ! I0501 04:15:44.421597       1 controllermanager.go:759] "Started controller" controller="serviceaccount-token-controller"
	I0501 04:16:56.456196    4352 command_runner.go:130] ! I0501 04:15:44.430683       1 controllermanager.go:759] "Started controller" controller="persistentvolume-protection-controller"
	I0501 04:16:56.456196    4352 command_runner.go:130] ! I0501 04:15:44.430949       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0501 04:16:56.456301    4352 command_runner.go:130] ! I0501 04:15:44.431056       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0501 04:16:56.456301    4352 command_runner.go:130] ! I0501 04:15:44.437281       1 controllermanager.go:759] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0501 04:16:56.456301    4352 command_runner.go:130] ! I0501 04:15:44.440408       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0501 04:16:56.456301    4352 command_runner.go:130] ! I0501 04:15:44.437711       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0501 04:16:56.456486    4352 command_runner.go:130] ! I0501 04:15:44.440933       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0501 04:16:56.456547    4352 command_runner.go:130] ! I0501 04:15:44.450877       1 controllermanager.go:759] "Started controller" controller="replicationcontroller-controller"
	I0501 04:16:56.456547    4352 command_runner.go:130] ! I0501 04:15:44.452935       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0501 04:16:56.456642    4352 command_runner.go:130] ! I0501 04:15:44.452958       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0501 04:16:56.456739    4352 command_runner.go:130] ! I0501 04:15:44.458231       1 controllermanager.go:759] "Started controller" controller="serviceaccount-controller"
	I0501 04:16:56.456739    4352 command_runner.go:130] ! I0501 04:15:44.458525       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0501 04:16:56.456739    4352 command_runner.go:130] ! I0501 04:15:44.458548       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0501 04:16:56.456739    4352 command_runner.go:130] ! I0501 04:15:44.467611       1 controllermanager.go:759] "Started controller" controller="token-cleaner-controller"
	I0501 04:16:56.456739    4352 command_runner.go:130] ! I0501 04:15:44.468036       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0501 04:16:56.456876    4352 command_runner.go:130] ! I0501 04:15:44.468093       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0501 04:16:56.456876    4352 command_runner.go:130] ! I0501 04:15:44.468107       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0501 04:16:56.456876    4352 command_runner.go:130] ! I0501 04:15:44.484825       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0501 04:16:56.456876    4352 command_runner.go:130] ! I0501 04:15:44.484856       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0501 04:16:56.457012    4352 command_runner.go:130] ! I0501 04:15:44.484892       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0501 04:16:56.457012    4352 command_runner.go:130] ! I0501 04:15:44.485128       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0501 04:16:56.457012    4352 command_runner.go:130] ! I0501 04:15:44.485186       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0501 04:16:56.457134    4352 command_runner.go:130] ! I0501 04:15:44.485221       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0501 04:16:56.457134    4352 command_runner.go:130] ! I0501 04:15:44.485229       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0501 04:16:56.457250    4352 command_runner.go:130] ! I0501 04:15:44.485246       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0501 04:16:56.457250    4352 command_runner.go:130] ! I0501 04:15:44.485322       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0501 04:16:56.457250    4352 command_runner.go:130] ! I0501 04:15:44.488601       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0501 04:16:56.457369    4352 command_runner.go:130] ! I0501 04:15:44.488943       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0501 04:16:56.457439    4352 command_runner.go:130] ! I0501 04:15:44.488958       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0501 04:16:56.457477    4352 command_runner.go:130] ! I0501 04:15:44.488985       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0501 04:16:56.457571    4352 command_runner.go:130] ! I0501 04:15:44.523143       1 shared_informer.go:320] Caches are synced for tokens
	I0501 04:16:56.457611    4352 command_runner.go:130] ! I0501 04:15:44.644894       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0501 04:16:56.457753    4352 command_runner.go:130] ! I0501 04:15:44.645016       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0501 04:16:56.457753    4352 command_runner.go:130] ! I0501 04:15:44.645088       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0501 04:16:56.457854    4352 command_runner.go:130] ! I0501 04:15:44.645112       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0501 04:16:56.457854    4352 command_runner.go:130] ! I0501 04:15:44.646888       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0501 04:16:56.457915    4352 command_runner.go:130] ! W0501 04:15:44.646984       1 shared_informer.go:597] resyncPeriod 15h44m19.234758052s is smaller than resyncCheckPeriod 17h55m23.133739358s and the informer has already started. Changing it to 17h55m23.133739358s
	I0501 04:16:56.458000    4352 command_runner.go:130] ! W0501 04:15:44.647035       1 shared_informer.go:597] resyncPeriod 17h52m42.538614251s is smaller than resyncCheckPeriod 17h55m23.133739358s and the informer has already started. Changing it to 17h55m23.133739358s
	I0501 04:16:56.458059    4352 command_runner.go:130] ! I0501 04:15:44.647224       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0501 04:16:56.458059    4352 command_runner.go:130] ! I0501 04:15:44.647325       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0501 04:16:56.458132    4352 command_runner.go:130] ! I0501 04:15:44.647389       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0501 04:16:56.458211    4352 command_runner.go:130] ! I0501 04:15:44.647418       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0501 04:16:56.458211    4352 command_runner.go:130] ! I0501 04:15:44.647559       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0501 04:16:56.458312    4352 command_runner.go:130] ! I0501 04:15:44.647580       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0501 04:16:56.458312    4352 command_runner.go:130] ! I0501 04:15:44.648269       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0501 04:16:56.458449    4352 command_runner.go:130] ! I0501 04:15:44.648364       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0501 04:16:56.458449    4352 command_runner.go:130] ! I0501 04:15:44.648387       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0501 04:16:56.458584    4352 command_runner.go:130] ! I0501 04:15:44.648418       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0501 04:16:56.458674    4352 command_runner.go:130] ! I0501 04:15:44.648519       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0501 04:16:56.458712    4352 command_runner.go:130] ! I0501 04:15:44.648561       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0501 04:16:56.458712    4352 command_runner.go:130] ! I0501 04:15:44.648582       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0501 04:16:56.458712    4352 command_runner.go:130] ! I0501 04:15:44.648601       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0501 04:16:56.458823    4352 command_runner.go:130] ! I0501 04:15:44.648633       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0501 04:16:56.458823    4352 command_runner.go:130] ! I0501 04:15:44.648662       1 controllermanager.go:759] "Started controller" controller="resourcequota-controller"
	I0501 04:16:56.458823    4352 command_runner.go:130] ! I0501 04:15:44.649971       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0501 04:16:56.458823    4352 command_runner.go:130] ! I0501 04:15:44.649999       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0501 04:16:56.458957    4352 command_runner.go:130] ! I0501 04:15:44.650094       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0501 04:16:56.458957    4352 command_runner.go:130] ! I0501 04:15:44.658545       1 controllermanager.go:759] "Started controller" controller="daemonset-controller"
	I0501 04:16:56.458957    4352 command_runner.go:130] ! I0501 04:15:44.664070       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0501 04:16:56.458957    4352 command_runner.go:130] ! I0501 04:15:44.664109       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0501 04:16:56.459072    4352 command_runner.go:130] ! I0501 04:15:44.672333       1 controllermanager.go:759] "Started controller" controller="replicaset-controller"
	I0501 04:16:56.459072    4352 command_runner.go:130] ! I0501 04:15:44.672648       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0501 04:16:56.459072    4352 command_runner.go:130] ! I0501 04:15:44.673224       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0501 04:16:56.459072    4352 command_runner.go:130] ! E0501 04:15:44.680086       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0501 04:16:56.459232    4352 command_runner.go:130] ! I0501 04:15:44.680207       1 controllermanager.go:737] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0501 04:16:56.459232    4352 command_runner.go:130] ! I0501 04:15:44.686271       1 controllermanager.go:759] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0501 04:16:56.459232    4352 command_runner.go:130] ! I0501 04:15:44.687804       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0501 04:16:56.459380    4352 command_runner.go:130] ! I0501 04:15:44.688087       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0501 04:16:56.459380    4352 command_runner.go:130] ! I0501 04:15:44.691064       1 controllermanager.go:759] "Started controller" controller="ephemeral-volume-controller"
	I0501 04:16:56.459380    4352 command_runner.go:130] ! I0501 04:15:44.694139       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0501 04:16:56.459380    4352 command_runner.go:130] ! I0501 04:15:44.694154       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0501 04:16:56.459496    4352 command_runner.go:130] ! I0501 04:15:44.697309       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0501 04:16:56.459496    4352 command_runner.go:130] ! I0501 04:15:44.697808       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0501 04:16:56.459496    4352 command_runner.go:130] ! I0501 04:15:44.698725       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0501 04:16:56.459609    4352 command_runner.go:130] ! I0501 04:15:44.709020       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0501 04:16:56.459609    4352 command_runner.go:130] ! I0501 04:15:44.709557       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0501 04:16:56.459609    4352 command_runner.go:130] ! I0501 04:15:44.718572       1 controllermanager.go:759] "Started controller" controller="bootstrap-signer-controller"
	I0501 04:16:56.459724    4352 command_runner.go:130] ! I0501 04:15:44.718866       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0501 04:16:56.459724    4352 command_runner.go:130] ! I0501 04:15:44.731386       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0501 04:16:56.459724    4352 command_runner.go:130] ! I0501 04:15:44.731502       1 controllermanager.go:759] "Started controller" controller="node-lifecycle-controller"
	I0501 04:16:56.459830    4352 command_runner.go:130] ! I0501 04:15:44.731520       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0501 04:16:56.459830    4352 command_runner.go:130] ! I0501 04:15:44.731794       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0501 04:16:56.459889    4352 command_runner.go:130] ! I0501 04:15:44.732008       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0501 04:16:56.459889    4352 command_runner.go:130] ! I0501 04:15:44.732024       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0501 04:16:56.459889    4352 command_runner.go:130] ! I0501 04:15:44.732060       1 controllermanager.go:737] "Warning: skipping controller" controller="node-route-controller"
	I0501 04:16:56.459889    4352 command_runner.go:130] ! I0501 04:15:44.739601       1 controllermanager.go:759] "Started controller" controller="clusterrole-aggregation-controller"
	I0501 04:16:56.459889    4352 command_runner.go:130] ! I0501 04:15:44.741937       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0501 04:16:56.460043    4352 command_runner.go:130] ! I0501 04:15:44.742091       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0501 04:16:56.460043    4352 command_runner.go:130] ! I0501 04:15:44.751335       1 controllermanager.go:759] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0501 04:16:56.460043    4352 command_runner.go:130] ! I0501 04:15:44.758177       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0501 04:16:56.460161    4352 command_runner.go:130] ! I0501 04:15:44.767021       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0501 04:16:56.460161    4352 command_runner.go:130] ! I0501 04:15:44.776399       1 controllermanager.go:759] "Started controller" controller="ttl-after-finished-controller"
	I0501 04:16:56.460161    4352 command_runner.go:130] ! I0501 04:15:44.777830       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0501 04:16:56.460161    4352 command_runner.go:130] ! I0501 04:15:44.780031       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0501 04:16:56.460285    4352 command_runner.go:130] ! I0501 04:15:44.783346       1 controllermanager.go:759] "Started controller" controller="endpointslice-controller"
	I0501 04:16:56.460285    4352 command_runner.go:130] ! I0501 04:15:44.784386       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0501 04:16:56.460285    4352 command_runner.go:130] ! I0501 04:15:44.784668       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0501 04:16:56.460410    4352 command_runner.go:130] ! I0501 04:15:44.790586       1 controllermanager.go:759] "Started controller" controller="job-controller"
	I0501 04:16:56.460410    4352 command_runner.go:130] ! I0501 04:15:44.791028       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0501 04:16:56.460410    4352 command_runner.go:130] ! I0501 04:15:44.791148       1 shared_informer.go:313] Waiting for caches to sync for job
	I0501 04:16:56.460410    4352 command_runner.go:130] ! I0501 04:15:44.795072       1 controllermanager.go:759] "Started controller" controller="deployment-controller"
	I0501 04:16:56.460523    4352 command_runner.go:130] ! I0501 04:15:44.795486       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0501 04:16:56.460523    4352 command_runner.go:130] ! I0501 04:15:44.796321       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0501 04:16:56.460523    4352 command_runner.go:130] ! I0501 04:15:44.806964       1 controllermanager.go:759] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0501 04:16:56.460631    4352 command_runner.go:130] ! I0501 04:15:44.807399       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0501 04:16:56.460631    4352 command_runner.go:130] ! I0501 04:15:44.808302       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0501 04:16:56.460631    4352 command_runner.go:130] ! I0501 04:15:44.810677       1 controllermanager.go:759] "Started controller" controller="cronjob-controller"
	I0501 04:16:56.460742    4352 command_runner.go:130] ! I0501 04:15:44.811276       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0501 04:16:56.460742    4352 command_runner.go:130] ! I0501 04:15:44.812128       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0501 04:16:56.460742    4352 command_runner.go:130] ! I0501 04:15:44.814338       1 controllermanager.go:759] "Started controller" controller="persistentvolume-expander-controller"
	I0501 04:16:56.460856    4352 command_runner.go:130] ! I0501 04:15:44.814699       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0501 04:16:56.460856    4352 command_runner.go:130] ! I0501 04:15:44.815465       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0501 04:16:56.460856    4352 command_runner.go:130] ! I0501 04:15:44.818437       1 controllermanager.go:759] "Started controller" controller="taint-eviction-controller"
	I0501 04:16:56.460969    4352 command_runner.go:130] ! I0501 04:15:44.819004       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0501 04:16:56.460969    4352 command_runner.go:130] ! I0501 04:15:44.818976       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0501 04:16:56.460969    4352 command_runner.go:130] ! I0501 04:15:44.820305       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0501 04:16:56.461073    4352 command_runner.go:130] ! I0501 04:15:44.820518       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0501 04:16:56.461073    4352 command_runner.go:130] ! I0501 04:15:44.822359       1 controllermanager.go:759] "Started controller" controller="pod-garbage-collector-controller"
	I0501 04:16:56.461073    4352 command_runner.go:130] ! I0501 04:15:44.824878       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0501 04:16:56.461184    4352 command_runner.go:130] ! I0501 04:15:44.825167       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0501 04:16:56.461184    4352 command_runner.go:130] ! I0501 04:15:44.835687       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0501 04:16:56.461184    4352 command_runner.go:130] ! I0501 04:15:44.835705       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0501 04:16:56.461184    4352 command_runner.go:130] ! I0501 04:15:44.835739       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0501 04:16:56.461300    4352 command_runner.go:130] ! I0501 04:15:44.836623       1 controllermanager.go:759] "Started controller" controller="garbage-collector-controller"
	I0501 04:16:56.461300    4352 command_runner.go:130] ! E0501 04:15:44.845522       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0501 04:16:56.461300    4352 command_runner.go:130] ! I0501 04:15:44.845590       1 controllermanager.go:737] "Warning: skipping controller" controller="service-lb-controller"
	I0501 04:16:56.461420    4352 command_runner.go:130] ! I0501 04:15:44.975590       1 controllermanager.go:759] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0501 04:16:56.461420    4352 command_runner.go:130] ! I0501 04:15:44.975737       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0501 04:16:56.461420    4352 command_runner.go:130] ! I0501 04:15:45.026863       1 controllermanager.go:759] "Started controller" controller="endpointslice-mirroring-controller"
	I0501 04:16:56.461524    4352 command_runner.go:130] ! I0501 04:15:45.026966       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0501 04:16:56.461524    4352 command_runner.go:130] ! I0501 04:15:45.026980       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0501 04:16:56.461524    4352 command_runner.go:130] ! I0501 04:15:45.188029       1 controllermanager.go:759] "Started controller" controller="namespace-controller"
	I0501 04:16:56.461632    4352 command_runner.go:130] ! I0501 04:15:45.191154       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0501 04:16:56.461632    4352 command_runner.go:130] ! I0501 04:15:45.191606       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0501 04:16:56.461632    4352 command_runner.go:130] ! I0501 04:15:45.234916       1 controllermanager.go:759] "Started controller" controller="ttl-controller"
	I0501 04:16:56.461632    4352 command_runner.go:130] ! I0501 04:15:45.235592       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0501 04:16:56.461632    4352 command_runner.go:130] ! I0501 04:15:45.235855       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0501 04:16:56.461632    4352 command_runner.go:130] ! I0501 04:15:45.275946       1 controllermanager.go:759] "Started controller" controller="persistentvolume-binder-controller"
	I0501 04:16:56.462566    4352 command_runner.go:130] ! I0501 04:15:45.276219       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0501 04:16:56.462641    4352 command_runner.go:130] ! I0501 04:15:45.277151       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0501 04:16:56.462672    4352 command_runner.go:130] ! I0501 04:15:45.277668       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0501 04:16:56.462723    4352 command_runner.go:130] ! I0501 04:15:55.347039       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0501 04:16:56.462798    4352 command_runner.go:130] ! I0501 04:15:55.347226       1 controllermanager.go:759] "Started controller" controller="node-ipam-controller"
	I0501 04:16:56.462798    4352 command_runner.go:130] ! I0501 04:15:55.347657       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0501 04:16:56.462838    4352 command_runner.go:130] ! I0501 04:15:55.347697       1 shared_informer.go:313] Waiting for caches to sync for node
	I0501 04:16:56.462838    4352 command_runner.go:130] ! I0501 04:15:55.351170       1 controllermanager.go:759] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0501 04:16:56.462934    4352 command_runner.go:130] ! I0501 04:15:55.351453       1 controllermanager.go:737] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0501 04:16:56.463169    4352 command_runner.go:130] ! I0501 04:15:55.351701       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0501 04:16:56.463230    4352 command_runner.go:130] ! I0501 04:15:55.352658       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0501 04:16:56.463230    4352 command_runner.go:130] ! I0501 04:15:55.355868       1 controllermanager.go:759] "Started controller" controller="endpoints-controller"
	I0501 04:16:56.463230    4352 command_runner.go:130] ! I0501 04:15:55.356195       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.356581       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.373530       1 controllermanager.go:759] "Started controller" controller="disruption-controller"
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.375966       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.376087       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.376099       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.381581       1 controllermanager.go:759] "Started controller" controller="statefulset-controller"
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.387752       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.398512       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.398855       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.433745       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.433841       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.434861       1 shared_informer.go:320] Caches are synced for PV protection
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.437855       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-289800\" does not exist"
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.438225       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-289800-m02\" does not exist"
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.438314       1 shared_informer.go:320] Caches are synced for TTL
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.438445       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-289800-m03\" does not exist"
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.438531       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.441880       1 shared_informer.go:320] Caches are synced for crt configmap
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.442281       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.448289       1 shared_informer.go:320] Caches are synced for node
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.448378       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.448532       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.448564       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.448615       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.452662       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.453060       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.453136       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.459094       1 shared_informer.go:320] Caches are synced for service account
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.465378       1 shared_informer.go:320] Caches are synced for daemon sets
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.468998       1 shared_informer.go:320] Caches are synced for PVC protection
	I0501 04:16:56.463340    4352 command_runner.go:130] ! I0501 04:15:55.476103       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0501 04:16:56.463888    4352 command_runner.go:130] ! I0501 04:15:55.479405       1 shared_informer.go:320] Caches are synced for persistent volume
	I0501 04:16:56.463888    4352 command_runner.go:130] ! I0501 04:15:55.480400       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0501 04:16:56.463888    4352 command_runner.go:130] ! I0501 04:15:55.485347       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0501 04:16:56.463888    4352 command_runner.go:130] ! I0501 04:15:55.485423       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0501 04:16:56.463888    4352 command_runner.go:130] ! I0501 04:15:55.485459       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0501 04:16:56.463888    4352 command_runner.go:130] ! I0501 04:15:55.488987       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0501 04:16:56.464399    4352 command_runner.go:130] ! I0501 04:15:55.489270       1 shared_informer.go:320] Caches are synced for attach detach
	I0501 04:16:56.464399    4352 command_runner.go:130] ! I0501 04:15:55.492066       1 shared_informer.go:320] Caches are synced for namespace
	I0501 04:16:56.464399    4352 command_runner.go:130] ! I0501 04:15:55.492447       1 shared_informer.go:320] Caches are synced for job
	I0501 04:16:56.464399    4352 command_runner.go:130] ! I0501 04:15:55.494972       1 shared_informer.go:320] Caches are synced for ephemeral
	I0501 04:16:56.464399    4352 command_runner.go:130] ! I0501 04:15:55.497059       1 shared_informer.go:320] Caches are synced for deployment
	I0501 04:16:56.464399    4352 command_runner.go:130] ! I0501 04:15:55.499153       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0501 04:16:56.464399    4352 command_runner.go:130] ! I0501 04:15:55.499594       1 shared_informer.go:320] Caches are synced for stateful set
	I0501 04:16:56.464553    4352 command_runner.go:130] ! I0501 04:15:55.509506       1 shared_informer.go:320] Caches are synced for HPA
	I0501 04:16:56.464608    4352 command_runner.go:130] ! I0501 04:15:55.513444       1 shared_informer.go:320] Caches are synced for cronjob
	I0501 04:16:56.464608    4352 command_runner.go:130] ! I0501 04:15:55.517356       1 shared_informer.go:320] Caches are synced for expand
	I0501 04:16:56.464608    4352 command_runner.go:130] ! I0501 04:15:55.519269       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0501 04:16:56.464667    4352 command_runner.go:130] ! I0501 04:15:55.521379       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0501 04:16:56.464718    4352 command_runner.go:130] ! I0501 04:15:55.527109       1 shared_informer.go:320] Caches are synced for GC
	I0501 04:16:56.464771    4352 command_runner.go:130] ! I0501 04:15:55.533712       1 shared_informer.go:320] Caches are synced for taint
	I0501 04:16:56.464821    4352 command_runner.go:130] ! I0501 04:15:55.534052       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0501 04:16:56.464884    4352 command_runner.go:130] ! I0501 04:15:55.562220       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-289800"
	I0501 04:16:56.464884    4352 command_runner.go:130] ! I0501 04:15:55.562294       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-289800-m02"
	I0501 04:16:56.465020    4352 command_runner.go:130] ! I0501 04:15:55.562374       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-289800-m03"
	I0501 04:16:56.465081    4352 command_runner.go:130] ! I0501 04:15:55.562434       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0501 04:16:56.465122    4352 command_runner.go:130] ! I0501 04:15:55.574228       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0501 04:16:56.465122    4352 command_runner.go:130] ! I0501 04:15:55.576283       1 shared_informer.go:320] Caches are synced for disruption
	I0501 04:16:56.465183    4352 command_runner.go:130] ! I0501 04:15:55.610948       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.488314ms"
	I0501 04:16:56.465240    4352 command_runner.go:130] ! I0501 04:15:55.611568       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="71.799µs"
	I0501 04:16:56.465300    4352 command_runner.go:130] ! I0501 04:15:55.619708       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="45.171745ms"
	I0501 04:16:56.465371    4352 command_runner.go:130] ! I0501 04:15:55.620238       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="472.596µs"
	I0501 04:16:56.465371    4352 command_runner.go:130] ! I0501 04:15:55.628824       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0501 04:16:56.465428    4352 command_runner.go:130] ! I0501 04:15:55.650837       1 shared_informer.go:320] Caches are synced for resource quota
	I0501 04:16:56.465481    4352 command_runner.go:130] ! I0501 04:15:55.657374       1 shared_informer.go:320] Caches are synced for endpoint
	I0501 04:16:56.465537    4352 command_runner.go:130] ! I0501 04:15:55.685503       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0501 04:16:56.465592    4352 command_runner.go:130] ! I0501 04:15:55.700006       1 shared_informer.go:320] Caches are synced for resource quota
	I0501 04:16:56.465592    4352 command_runner.go:130] ! I0501 04:15:56.136638       1 shared_informer.go:320] Caches are synced for garbage collector
	I0501 04:16:56.465651    4352 command_runner.go:130] ! I0501 04:15:56.136685       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0501 04:16:56.465704    4352 command_runner.go:130] ! I0501 04:15:56.152886       1 shared_informer.go:320] Caches are synced for garbage collector
	I0501 04:16:56.465704    4352 command_runner.go:130] ! I0501 04:16:16.638494       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:56.465746    4352 command_runner.go:130] ! I0501 04:16:35.670965       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.004646ms"
	I0501 04:16:56.465861    4352 command_runner.go:130] ! I0501 04:16:35.674472       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.702µs"
	I0501 04:16:56.465968    4352 command_runner.go:130] ! I0501 04:16:49.079199       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="127.703µs"
	I0501 04:16:56.465968    4352 command_runner.go:130] ! I0501 04:16:49.148697       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="24.735082ms"
	I0501 04:16:56.465968    4352 command_runner.go:130] ! I0501 04:16:49.149307       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="110.503µs"
	I0501 04:16:56.466069    4352 command_runner.go:130] ! I0501 04:16:49.187683       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.244247ms"
	I0501 04:16:56.466069    4352 command_runner.go:130] ! I0501 04:16:49.188221       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.9µs"
	I0501 04:16:56.466107    4352 command_runner.go:130] ! I0501 04:16:49.221273       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="25.255693ms"
	I0501 04:16:56.466150    4352 command_runner.go:130] ! I0501 04:16:49.221694       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="88.902µs"
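	The "Waiting for caches to sync" / "Caches are synced" pairs in the kube-controller-manager log above come from client-go's shared-informer machinery: each controller blocks on cache.WaitForCacheSync before starting its workers, which is why every "Started controller" line is followed by a sync wait. A minimal sketch of that pattern (illustrative code, not minikube's or Kubernetes' own; the kubeconfig path is an assumption) is:

    package main

    import (
    	"fmt"
    	"time"

    	"k8s.io/client-go/informers"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/cache"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Kubeconfig path is an assumption for illustration.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	// One shared informer factory; each controller's log pair maps to a
    	// Start + WaitForCacheSync sequence like this one.
    	factory := informers.NewSharedInformerFactory(client, 30*time.Second)
    	pods := factory.Core().V1().Pods().Informer()

    	stop := make(chan struct{})
    	defer close(stop)
    	factory.Start(stop)

    	fmt.Println("Waiting for caches to sync")
    	if !cache.WaitForCacheSync(stop, pods.HasSynced) {
    		panic("cache sync failed")
    	}
    	fmt.Println("Caches are synced") // only now would workers start
    }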
	I0501 04:16:56.484885    4352 logs.go:123] Gathering logs for dmesg ...
	I0501 04:16:56.484885    4352 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 04:16:56.513601    4352 command_runner.go:130] > [May 1 04:13] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0501 04:16:56.513601    4352 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0501 04:16:56.513601    4352 command_runner.go:130] > [  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0501 04:16:56.513601    4352 command_runner.go:130] > [  +0.128235] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0501 04:16:56.513747    4352 command_runner.go:130] > [  +0.023886] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0501 04:16:56.513819    4352 command_runner.go:130] > [  +0.000005] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0501 04:16:56.513875    4352 command_runner.go:130] > [  +0.000001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0501 04:16:56.513875    4352 command_runner.go:130] > [  +0.057986] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0501 04:16:56.513948    4352 command_runner.go:130] > [  +0.022012] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0501 04:16:56.513948    4352 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0501 04:16:56.513948    4352 command_runner.go:130] > [  +5.683380] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0501 04:16:56.513948    4352 command_runner.go:130] > [May 1 04:14] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0501 04:16:56.514138    4352 command_runner.go:130] > [  +1.282885] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0501 04:16:56.514138    4352 command_runner.go:130] > [  +7.215175] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0501 04:16:56.514138    4352 command_runner.go:130] > [  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0501 04:16:56.514138    4352 command_runner.go:130] > [  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	I0501 04:16:56.514138    4352 command_runner.go:130] > [ +49.815364] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	I0501 04:16:56.514138    4352 command_runner.go:130] > [  +0.200985] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	I0501 04:16:56.514138    4352 command_runner.go:130] > [May 1 04:15] systemd-fstab-generator[972]: Ignoring "noauto" option for root device
	I0501 04:16:56.514232    4352 command_runner.go:130] > [  +0.127967] kauditd_printk_skb: 73 callbacks suppressed
	I0501 04:16:56.514232    4352 command_runner.go:130] > [  +0.582263] systemd-fstab-generator[1012]: Ignoring "noauto" option for root device
	I0501 04:16:56.514232    4352 command_runner.go:130] > [  +0.225161] systemd-fstab-generator[1023]: Ignoring "noauto" option for root device
	I0501 04:16:56.514270    4352 command_runner.go:130] > [  +0.250911] systemd-fstab-generator[1037]: Ignoring "noauto" option for root device
	I0501 04:16:56.514270    4352 command_runner.go:130] > [  +3.012463] systemd-fstab-generator[1226]: Ignoring "noauto" option for root device
	I0501 04:16:56.514303    4352 command_runner.go:130] > [  +0.224116] systemd-fstab-generator[1238]: Ignoring "noauto" option for root device
	I0501 04:16:56.514303    4352 command_runner.go:130] > [  +0.208959] systemd-fstab-generator[1250]: Ignoring "noauto" option for root device
	I0501 04:16:56.514303    4352 command_runner.go:130] > [  +0.295566] systemd-fstab-generator[1265]: Ignoring "noauto" option for root device
	I0501 04:16:56.514303    4352 command_runner.go:130] > [  +0.942002] systemd-fstab-generator[1376]: Ignoring "noauto" option for root device
	I0501 04:16:56.514303    4352 command_runner.go:130] > [  +0.104482] kauditd_printk_skb: 205 callbacks suppressed
	I0501 04:16:56.514303    4352 command_runner.go:130] > [  +4.196160] systemd-fstab-generator[1518]: Ignoring "noauto" option for root device
	I0501 04:16:56.514303    4352 command_runner.go:130] > [  +1.305789] kauditd_printk_skb: 44 callbacks suppressed
	I0501 04:16:56.514303    4352 command_runner.go:130] > [  +5.930267] kauditd_printk_skb: 30 callbacks suppressed
	I0501 04:16:56.514303    4352 command_runner.go:130] > [  +4.234940] systemd-fstab-generator[2337]: Ignoring "noauto" option for root device
	I0501 04:16:56.514303    4352 command_runner.go:130] > [  +7.700271] kauditd_printk_skb: 70 callbacks suppressed
	I0501 04:16:59.025267    4352 api_server.go:253] Checking apiserver healthz at https://172.28.209.199:8443/healthz ...
	I0501 04:16:59.035373    4352 api_server.go:279] https://172.28.209.199:8443/healthz returned 200:
	ok
	I0501 04:16:59.035721    4352 round_trippers.go:463] GET https://172.28.209.199:8443/version
	I0501 04:16:59.035800    4352 round_trippers.go:469] Request Headers:
	I0501 04:16:59.035800    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:16:59.035844    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:16:59.037152    4352 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0501 04:16:59.037152    4352 round_trippers.go:577] Response Headers:
	I0501 04:16:59.037152    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:16:59.037152    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:16:59.037152    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:16:59.037152    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:16:59.037152    4352 round_trippers.go:580]     Content-Length: 263
	I0501 04:16:59.037152    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:16:59 GMT
	I0501 04:16:59.037152    4352 round_trippers.go:580]     Audit-Id: 2404fd61-6bc6-467d-a785-d44e96b27036
	I0501 04:16:59.037152    4352 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0501 04:16:59.037152    4352 api_server.go:141] control plane version: v1.30.0
	I0501 04:16:59.037152    4352 api_server.go:131] duration metric: took 4.0329758s to wait for apiserver health ...
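	The healthz/version exchange above is a plain HTTPS probe against the apiserver: GET /healthz until it returns 200 "ok", then GET /version to read the control-plane version from the JSON body. A minimal sketch that reproduces it (illustrative only; the address is copied from the log, and TLS verification is skipped here instead of trusting the cluster CA) is:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    )

    func main() {
    	client := &http.Client{Transport: &http.Transport{
    		// Demo only: a real probe should verify against the cluster CA.
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    	}}
    	for _, path := range []string{"/healthz", "/version"} {
    		resp, err := client.Get("https://172.28.209.199:8443" + path)
    		if err != nil {
    			fmt.Println(path, "error:", err)
    			continue
    		}
    		body, _ := io.ReadAll(resp.Body)
    		resp.Body.Close()
    		fmt.Printf("%s -> %d: %s\n", path, resp.StatusCode, body)
    	}
    }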
	I0501 04:16:59.037152    4352 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 04:16:59.049812    4352 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0501 04:16:59.079918    4352 command_runner.go:130] > 18cd30f3ad28
	I0501 04:16:59.080370    4352 logs.go:276] 1 containers: [18cd30f3ad28]
	I0501 04:16:59.091264    4352 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0501 04:16:59.121244    4352 command_runner.go:130] > 34892fdb6898
	I0501 04:16:59.121244    4352 logs.go:276] 1 containers: [34892fdb6898]
	I0501 04:16:59.131230    4352 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0501 04:16:59.164227    4352 command_runner.go:130] > b8a9b405d76b
	I0501 04:16:59.164227    4352 command_runner.go:130] > 8a0208aeafcf
	I0501 04:16:59.164227    4352 command_runner.go:130] > 15c4496e3a9f
	I0501 04:16:59.164227    4352 command_runner.go:130] > 3e8d5ff9a9e4
	I0501 04:16:59.164818    4352 logs.go:276] 4 containers: [b8a9b405d76b 8a0208aeafcf 15c4496e3a9f 3e8d5ff9a9e4]
	I0501 04:16:59.175998    4352 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0501 04:16:59.206001    4352 command_runner.go:130] > eaf69fce5ee3
	I0501 04:16:59.206001    4352 command_runner.go:130] > 06f1f84bfde1
	I0501 04:16:59.210788    4352 logs.go:276] 2 containers: [eaf69fce5ee3 06f1f84bfde1]
	I0501 04:16:59.221911    4352 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0501 04:16:59.260743    4352 command_runner.go:130] > 3efcc92f817e
	I0501 04:16:59.260743    4352 command_runner.go:130] > 502684407b0c
	I0501 04:16:59.260743    4352 logs.go:276] 2 containers: [3efcc92f817e 502684407b0c]
	I0501 04:16:59.270752    4352 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0501 04:16:59.296707    4352 command_runner.go:130] > 66a1b89e6733
	I0501 04:16:59.296707    4352 command_runner.go:130] > 4b62556f40be
	I0501 04:16:59.298599    4352 logs.go:276] 2 containers: [66a1b89e6733 4b62556f40be]
	I0501 04:16:59.309612    4352 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0501 04:16:59.334619    4352 command_runner.go:130] > b7cae3f6b88b
	I0501 04:16:59.335632    4352 command_runner.go:130] > 6d5f881ef398
	I0501 04:16:59.335632    4352 logs.go:276] 2 containers: [b7cae3f6b88b 6d5f881ef398]
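	The seven docker ps runs above enumerate the control-plane containers one component at a time, filtering by the k8s_ name prefix and printing only the container IDs. A small sketch of the same pattern (illustrative, not minikube's implementation) is:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"}
    	for _, c := range components {
    		// Mirrors: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
    		out, err := exec.Command("docker", "ps", "-a",
    			"--filter=name=k8s_"+c, "--format={{.ID}}").Output()
    		if err != nil {
    			fmt.Println(c, "error:", err)
    			continue
    		}
    		ids := strings.Fields(string(out))
    		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
    	}
    }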
	I0501 04:16:59.335701    4352 logs.go:123] Gathering logs for dmesg ...
	I0501 04:16:59.335873    4352 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0501 04:16:59.362549    4352 command_runner.go:130] > [May 1 04:13] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0501 04:16:59.362549    4352 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0501 04:16:59.362549    4352 command_runner.go:130] > [  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0501 04:16:59.362549    4352 command_runner.go:130] > [  +0.128235] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0501 04:16:59.362549    4352 command_runner.go:130] > [  +0.023886] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0501 04:16:59.363091    4352 command_runner.go:130] > [  +0.000005] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0501 04:16:59.363091    4352 command_runner.go:130] > [  +0.000001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0501 04:16:59.363091    4352 command_runner.go:130] > [  +0.057986] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0501 04:16:59.363091    4352 command_runner.go:130] > [  +0.022012] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0501 04:16:59.363192    4352 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0501 04:16:59.363192    4352 command_runner.go:130] > [  +5.683380] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0501 04:16:59.363192    4352 command_runner.go:130] > [May 1 04:14] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0501 04:16:59.363192    4352 command_runner.go:130] > [  +1.282885] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0501 04:16:59.363192    4352 command_runner.go:130] > [  +7.215175] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0501 04:16:59.363263    4352 command_runner.go:130] > [  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0501 04:16:59.363263    4352 command_runner.go:130] > [  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	I0501 04:16:59.363263    4352 command_runner.go:130] > [ +49.815364] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	I0501 04:16:59.363263    4352 command_runner.go:130] > [  +0.200985] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	I0501 04:16:59.363263    4352 command_runner.go:130] > [May 1 04:15] systemd-fstab-generator[972]: Ignoring "noauto" option for root device
	I0501 04:16:59.363263    4352 command_runner.go:130] > [  +0.127967] kauditd_printk_skb: 73 callbacks suppressed
	I0501 04:16:59.363263    4352 command_runner.go:130] > [  +0.582263] systemd-fstab-generator[1012]: Ignoring "noauto" option for root device
	I0501 04:16:59.363366    4352 command_runner.go:130] > [  +0.225161] systemd-fstab-generator[1023]: Ignoring "noauto" option for root device
	I0501 04:16:59.363366    4352 command_runner.go:130] > [  +0.250911] systemd-fstab-generator[1037]: Ignoring "noauto" option for root device
	I0501 04:16:59.363366    4352 command_runner.go:130] > [  +3.012463] systemd-fstab-generator[1226]: Ignoring "noauto" option for root device
	I0501 04:16:59.363366    4352 command_runner.go:130] > [  +0.224116] systemd-fstab-generator[1238]: Ignoring "noauto" option for root device
	I0501 04:16:59.363366    4352 command_runner.go:130] > [  +0.208959] systemd-fstab-generator[1250]: Ignoring "noauto" option for root device
	I0501 04:16:59.363366    4352 command_runner.go:130] > [  +0.295566] systemd-fstab-generator[1265]: Ignoring "noauto" option for root device
	I0501 04:16:59.363445    4352 command_runner.go:130] > [  +0.942002] systemd-fstab-generator[1376]: Ignoring "noauto" option for root device
	I0501 04:16:59.363445    4352 command_runner.go:130] > [  +0.104482] kauditd_printk_skb: 205 callbacks suppressed
	I0501 04:16:59.363445    4352 command_runner.go:130] > [  +4.196160] systemd-fstab-generator[1518]: Ignoring "noauto" option for root device
	I0501 04:16:59.363445    4352 command_runner.go:130] > [  +1.305789] kauditd_printk_skb: 44 callbacks suppressed
	I0501 04:16:59.363445    4352 command_runner.go:130] > [  +5.930267] kauditd_printk_skb: 30 callbacks suppressed
	I0501 04:16:59.363508    4352 command_runner.go:130] > [  +4.234940] systemd-fstab-generator[2337]: Ignoring "noauto" option for root device
	I0501 04:16:59.363508    4352 command_runner.go:130] > [  +7.700271] kauditd_printk_skb: 70 callbacks suppressed
	I0501 04:16:59.365198    4352 logs.go:123] Gathering logs for describe nodes ...
	I0501 04:16:59.365198    4352 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0501 04:16:59.583417    4352 command_runner.go:130] > Name:               multinode-289800
	I0501 04:16:59.583466    4352 command_runner.go:130] > Roles:              control-plane
	I0501 04:16:59.583466    4352 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0501 04:16:59.583586    4352 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0501 04:16:59.583586    4352 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0501 04:16:59.583586    4352 command_runner.go:130] >                     kubernetes.io/hostname=multinode-289800
	I0501 04:16:59.583586    4352 command_runner.go:130] >                     kubernetes.io/os=linux
	I0501 04:16:59.583642    4352 command_runner.go:130] >                     minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	I0501 04:16:59.583642    4352 command_runner.go:130] >                     minikube.k8s.io/name=multinode-289800
	I0501 04:16:59.583642    4352 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0501 04:16:59.583693    4352 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_05_01T03_52_17_0700
	I0501 04:16:59.583693    4352 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0
	I0501 04:16:59.583693    4352 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0501 04:16:59.583772    4352 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0501 04:16:59.583772    4352 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0501 04:16:59.583772    4352 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0501 04:16:59.583772    4352 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0501 04:16:59.583825    4352 command_runner.go:130] > CreationTimestamp:  Wed, 01 May 2024 03:52:12 +0000
	I0501 04:16:59.583825    4352 command_runner.go:130] > Taints:             <none>
	I0501 04:16:59.583825    4352 command_runner.go:130] > Unschedulable:      false
	I0501 04:16:59.583825    4352 command_runner.go:130] > Lease:
	I0501 04:16:59.583825    4352 command_runner.go:130] >   HolderIdentity:  multinode-289800
	I0501 04:16:59.583825    4352 command_runner.go:130] >   AcquireTime:     <unset>
	I0501 04:16:59.583887    4352 command_runner.go:130] >   RenewTime:       Wed, 01 May 2024 04:16:53 +0000
	I0501 04:16:59.583887    4352 command_runner.go:130] > Conditions:
	I0501 04:16:59.583887    4352 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0501 04:16:59.583887    4352 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0501 04:16:59.583977    4352 command_runner.go:130] >   MemoryPressure   False   Wed, 01 May 2024 04:16:16 +0000   Wed, 01 May 2024 03:52:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0501 04:16:59.583977    4352 command_runner.go:130] >   DiskPressure     False   Wed, 01 May 2024 04:16:16 +0000   Wed, 01 May 2024 03:52:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0501 04:16:59.584008    4352 command_runner.go:130] >   PIDPressure      False   Wed, 01 May 2024 04:16:16 +0000   Wed, 01 May 2024 03:52:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0501 04:16:59.584008    4352 command_runner.go:130] >   Ready            True    Wed, 01 May 2024 04:16:16 +0000   Wed, 01 May 2024 04:16:16 +0000   KubeletReady                 kubelet is posting ready status
	I0501 04:16:59.584008    4352 command_runner.go:130] > Addresses:
	I0501 04:16:59.584008    4352 command_runner.go:130] >   InternalIP:  172.28.209.199
	I0501 04:16:59.584008    4352 command_runner.go:130] >   Hostname:    multinode-289800
	I0501 04:16:59.584101    4352 command_runner.go:130] > Capacity:
	I0501 04:16:59.584101    4352 command_runner.go:130] >   cpu:                2
	I0501 04:16:59.584101    4352 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0501 04:16:59.584101    4352 command_runner.go:130] >   hugepages-2Mi:      0
	I0501 04:16:59.584101    4352 command_runner.go:130] >   memory:             2164264Ki
	I0501 04:16:59.584151    4352 command_runner.go:130] >   pods:               110
	I0501 04:16:59.584151    4352 command_runner.go:130] > Allocatable:
	I0501 04:16:59.584151    4352 command_runner.go:130] >   cpu:                2
	I0501 04:16:59.584151    4352 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0501 04:16:59.584151    4352 command_runner.go:130] >   hugepages-2Mi:      0
	I0501 04:16:59.584195    4352 command_runner.go:130] >   memory:             2164264Ki
	I0501 04:16:59.584195    4352 command_runner.go:130] >   pods:               110
	I0501 04:16:59.584195    4352 command_runner.go:130] > System Info:
	I0501 04:16:59.584195    4352 command_runner.go:130] >   Machine ID:                 f135d6c1a75448b6b1c169fdf59297ca
	I0501 04:16:59.584195    4352 command_runner.go:130] >   System UUID:                3951d3b5-ddd4-174a-8cfe-7f86ac2b780b
	I0501 04:16:59.584246    4352 command_runner.go:130] >   Boot ID:                    e7d6b770-0c88-4d74-8b75-d55dec0d45be
	I0501 04:16:59.584246    4352 command_runner.go:130] >   Kernel Version:             5.10.207
	I0501 04:16:59.584246    4352 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0501 04:16:59.584246    4352 command_runner.go:130] >   Operating System:           linux
	I0501 04:16:59.584311    4352 command_runner.go:130] >   Architecture:               amd64
	I0501 04:16:59.584311    4352 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0501 04:16:59.584311    4352 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0501 04:16:59.584311    4352 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0501 04:16:59.584311    4352 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0501 04:16:59.584311    4352 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0501 04:16:59.584370    4352 command_runner.go:130] > Non-terminated Pods:          (10 in total)
	I0501 04:16:59.584370    4352 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0501 04:16:59.584415    4352 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0501 04:16:59.584415    4352 command_runner.go:130] >   default                     busybox-fc5497c4f-cc6mk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0501 04:16:59.584415    4352 command_runner.go:130] >   kube-system                 coredns-7db6d8ff4d-8w9hq                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	I0501 04:16:59.584470    4352 command_runner.go:130] >   kube-system                 coredns-7db6d8ff4d-x9zrw                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	I0501 04:16:59.584568    4352 command_runner.go:130] >   kube-system                 etcd-multinode-289800                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         77s
	I0501 04:16:59.584568    4352 command_runner.go:130] >   kube-system                 kindnet-vcxkr                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	I0501 04:16:59.584568    4352 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-289800             250m (12%)    0 (0%)      0 (0%)           0 (0%)         77s
	I0501 04:16:59.584568    4352 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-289800    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I0501 04:16:59.584568    4352 command_runner.go:130] >   kube-system                 kube-proxy-bp9zx                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I0501 04:16:59.584568    4352 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-289800             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I0501 04:16:59.584568    4352 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I0501 04:16:59.584568    4352 command_runner.go:130] > Allocated resources:
	I0501 04:16:59.584568    4352 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0501 04:16:59.584568    4352 command_runner.go:130] >   Resource           Requests     Limits
	I0501 04:16:59.584568    4352 command_runner.go:130] >   --------           --------     ------
	I0501 04:16:59.584568    4352 command_runner.go:130] >   cpu                950m (47%)   100m (5%)
	I0501 04:16:59.584568    4352 command_runner.go:130] >   memory             290Mi (13%)  390Mi (18%)
	I0501 04:16:59.584568    4352 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0501 04:16:59.584568    4352 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0501 04:16:59.584568    4352 command_runner.go:130] > Events:
	I0501 04:16:59.584568    4352 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0501 04:16:59.584568    4352 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0501 04:16:59.584568    4352 command_runner.go:130] >   Normal  Starting                 24m                kube-proxy       
	I0501 04:16:59.584568    4352 command_runner.go:130] >   Normal  Starting                 74s                kube-proxy       
	I0501 04:16:59.584568    4352 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m                kubelet          Node multinode-289800 status is now: NodeHasSufficientMemory
	I0501 04:16:59.584568    4352 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m                kubelet          Node multinode-289800 status is now: NodeHasSufficientMemory
	I0501 04:16:59.584568    4352 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m                kubelet          Node multinode-289800 status is now: NodeHasNoDiskPressure
	I0501 04:16:59.584568    4352 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m                kubelet          Node multinode-289800 status is now: NodeHasSufficientPID
	I0501 04:16:59.584568    4352 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0501 04:16:59.584568    4352 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I0501 04:16:59.585120    4352 command_runner.go:130] >   Normal  RegisteredNode           24m                node-controller  Node multinode-289800 event: Registered Node multinode-289800 in Controller
	I0501 04:16:59.585120    4352 command_runner.go:130] >   Normal  NodeReady                24m                kubelet          Node multinode-289800 status is now: NodeReady
	I0501 04:16:59.585188    4352 command_runner.go:130] >   Normal  Starting                 83s                kubelet          Starting kubelet.
	I0501 04:16:59.585188    4352 command_runner.go:130] >   Normal  NodeHasSufficientMemory  82s (x8 over 83s)  kubelet          Node multinode-289800 status is now: NodeHasSufficientMemory
	I0501 04:16:59.585188    4352 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    82s (x8 over 83s)  kubelet          Node multinode-289800 status is now: NodeHasNoDiskPressure
	I0501 04:16:59.585188    4352 command_runner.go:130] >   Normal  NodeHasSufficientPID     82s (x7 over 83s)  kubelet          Node multinode-289800 status is now: NodeHasSufficientPID
	I0501 04:16:59.585188    4352 command_runner.go:130] >   Normal  NodeAllocatableEnforced  82s                kubelet          Updated Node Allocatable limit across pods
	I0501 04:16:59.585188    4352 command_runner.go:130] >   Normal  RegisteredNode           64s                node-controller  Node multinode-289800 event: Registered Node multinode-289800 in Controller
	I0501 04:16:59.585188    4352 command_runner.go:130] > Name:               multinode-289800-m02
	I0501 04:16:59.585188    4352 command_runner.go:130] > Roles:              <none>
	I0501 04:16:59.585188    4352 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0501 04:16:59.585188    4352 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0501 04:16:59.585188    4352 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0501 04:16:59.585188    4352 command_runner.go:130] >                     kubernetes.io/hostname=multinode-289800-m02
	I0501 04:16:59.585349    4352 command_runner.go:130] >                     kubernetes.io/os=linux
	I0501 04:16:59.585349    4352 command_runner.go:130] >                     minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	I0501 04:16:59.585415    4352 command_runner.go:130] >                     minikube.k8s.io/name=multinode-289800
	I0501 04:16:59.585415    4352 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0501 04:16:59.585459    4352 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_05_01T03_55_27_0700
	I0501 04:16:59.585459    4352 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0
	I0501 04:16:59.585459    4352 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0501 04:16:59.585518    4352 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0501 04:16:59.585518    4352 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0501 04:16:59.585518    4352 command_runner.go:130] > CreationTimestamp:  Wed, 01 May 2024 03:55:27 +0000
	I0501 04:16:59.585573    4352 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0501 04:16:59.585573    4352 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0501 04:16:59.585637    4352 command_runner.go:130] > Unschedulable:      false
	I0501 04:16:59.585688    4352 command_runner.go:130] > Lease:
	I0501 04:16:59.585688    4352 command_runner.go:130] >   HolderIdentity:  multinode-289800-m02
	I0501 04:16:59.585688    4352 command_runner.go:130] >   AcquireTime:     <unset>
	I0501 04:16:59.585688    4352 command_runner.go:130] >   RenewTime:       Wed, 01 May 2024 04:12:29 +0000
	I0501 04:16:59.585688    4352 command_runner.go:130] > Conditions:
	I0501 04:16:59.585688    4352 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0501 04:16:59.585795    4352 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0501 04:16:59.585795    4352 command_runner.go:130] >   MemoryPressure   Unknown   Wed, 01 May 2024 04:11:48 +0000   Wed, 01 May 2024 04:16:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0501 04:16:59.585795    4352 command_runner.go:130] >   DiskPressure     Unknown   Wed, 01 May 2024 04:11:48 +0000   Wed, 01 May 2024 04:16:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0501 04:16:59.585795    4352 command_runner.go:130] >   PIDPressure      Unknown   Wed, 01 May 2024 04:11:48 +0000   Wed, 01 May 2024 04:16:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0501 04:16:59.585795    4352 command_runner.go:130] >   Ready            Unknown   Wed, 01 May 2024 04:11:48 +0000   Wed, 01 May 2024 04:16:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0501 04:16:59.585795    4352 command_runner.go:130] > Addresses:
	I0501 04:16:59.585897    4352 command_runner.go:130] >   InternalIP:  172.28.219.162
	I0501 04:16:59.585897    4352 command_runner.go:130] >   Hostname:    multinode-289800-m02
	I0501 04:16:59.585897    4352 command_runner.go:130] > Capacity:
	I0501 04:16:59.585897    4352 command_runner.go:130] >   cpu:                2
	I0501 04:16:59.585897    4352 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0501 04:16:59.585897    4352 command_runner.go:130] >   hugepages-2Mi:      0
	I0501 04:16:59.585897    4352 command_runner.go:130] >   memory:             2164264Ki
	I0501 04:16:59.585955    4352 command_runner.go:130] >   pods:               110
	I0501 04:16:59.585955    4352 command_runner.go:130] > Allocatable:
	I0501 04:16:59.585955    4352 command_runner.go:130] >   cpu:                2
	I0501 04:16:59.585955    4352 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0501 04:16:59.586010    4352 command_runner.go:130] >   hugepages-2Mi:      0
	I0501 04:16:59.586010    4352 command_runner.go:130] >   memory:             2164264Ki
	I0501 04:16:59.586010    4352 command_runner.go:130] >   pods:               110
	I0501 04:16:59.586049    4352 command_runner.go:130] > System Info:
	I0501 04:16:59.586049    4352 command_runner.go:130] >   Machine ID:                 076f7b95819747b9b94c7306ec3a1144
	I0501 04:16:59.586069    4352 command_runner.go:130] >   System UUID:                a38b9d92-b32b-ca41-91ed-de4d374d0e70
	I0501 04:16:59.586069    4352 command_runner.go:130] >   Boot ID:                    c2ea27f4-2800-46b2-ab1f-c82bf0989c34
	I0501 04:16:59.586069    4352 command_runner.go:130] >   Kernel Version:             5.10.207
	I0501 04:16:59.586115    4352 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0501 04:16:59.586115    4352 command_runner.go:130] >   Operating System:           linux
	I0501 04:16:59.586115    4352 command_runner.go:130] >   Architecture:               amd64
	I0501 04:16:59.586156    4352 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0501 04:16:59.586156    4352 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0501 04:16:59.586156    4352 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0501 04:16:59.586156    4352 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0501 04:16:59.586156    4352 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0501 04:16:59.586156    4352 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0501 04:16:59.586224    4352 command_runner.go:130] >   Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0501 04:16:59.586224    4352 command_runner.go:130] >   ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	I0501 04:16:59.586224    4352 command_runner.go:130] >   default                     busybox-fc5497c4f-tbxxx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0501 04:16:59.586224    4352 command_runner.go:130] >   kube-system                 kindnet-gzz7p              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	I0501 04:16:59.586283    4352 command_runner.go:130] >   kube-system                 kube-proxy-rlzp8           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0501 04:16:59.586283    4352 command_runner.go:130] > Allocated resources:
	I0501 04:16:59.586283    4352 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0501 04:16:59.586283    4352 command_runner.go:130] >   Resource           Requests   Limits
	I0501 04:16:59.586283    4352 command_runner.go:130] >   --------           --------   ------
	I0501 04:16:59.586283    4352 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0501 04:16:59.586359    4352 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0501 04:16:59.586359    4352 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0501 04:16:59.586359    4352 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0501 04:16:59.586359    4352 command_runner.go:130] > Events:
	I0501 04:16:59.586359    4352 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0501 04:16:59.586415    4352 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0501 04:16:59.586415    4352 command_runner.go:130] >   Normal  Starting                 21m                kube-proxy       
	I0501 04:16:59.586415    4352 command_runner.go:130] >   Normal  NodeHasSufficientMemory  21m (x2 over 21m)  kubelet          Node multinode-289800-m02 status is now: NodeHasSufficientMemory
	I0501 04:16:59.586415    4352 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    21m (x2 over 21m)  kubelet          Node multinode-289800-m02 status is now: NodeHasNoDiskPressure
	I0501 04:16:59.586475    4352 command_runner.go:130] >   Normal  NodeHasSufficientPID     21m (x2 over 21m)  kubelet          Node multinode-289800-m02 status is now: NodeHasSufficientPID
	I0501 04:16:59.586475    4352 command_runner.go:130] >   Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	I0501 04:16:59.586532    4352 command_runner.go:130] >   Normal  RegisteredNode           21m                node-controller  Node multinode-289800-m02 event: Registered Node multinode-289800-m02 in Controller
	I0501 04:16:59.586532    4352 command_runner.go:130] >   Normal  NodeReady                21m                kubelet          Node multinode-289800-m02 status is now: NodeReady
	I0501 04:16:59.586532    4352 command_runner.go:130] >   Normal  RegisteredNode           64s                node-controller  Node multinode-289800-m02 event: Registered Node multinode-289800-m02 in Controller
	I0501 04:16:59.586590    4352 command_runner.go:130] >   Normal  NodeNotReady             24s                node-controller  Node multinode-289800-m02 status is now: NodeNotReady
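
Note on the m02 events above: the final NodeNotReady is emitted by the node-controller, not the kubelet, once multinode-289800-m02 misses its status heartbeats after the cluster restart, while the RegisteredNode event 64 seconds earlier marks the restarted controller-manager re-adopting the node. A minimal spot-check from the host, assuming the kubeconfig context matches the profile name used throughout this log (an assumption, since the context name itself is not shown here):

    # Hypothetical verification; context name assumed to equal the profile name.
    kubectl --context multinode-289800 get node multinode-289800-m02 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    kubectl --context multinode-289800 describe node multinode-289800-m02 | grep -i taints
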
	I0501 04:16:59.586590    4352 command_runner.go:130] > Name:               multinode-289800-m03
	I0501 04:16:59.586590    4352 command_runner.go:130] > Roles:              <none>
	I0501 04:16:59.586590    4352 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0501 04:16:59.586643    4352 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0501 04:16:59.586643    4352 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0501 04:16:59.586643    4352 command_runner.go:130] >                     kubernetes.io/hostname=multinode-289800-m03
	I0501 04:16:59.586701    4352 command_runner.go:130] >                     kubernetes.io/os=linux
	I0501 04:16:59.586701    4352 command_runner.go:130] >                     minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	I0501 04:16:59.586701    4352 command_runner.go:130] >                     minikube.k8s.io/name=multinode-289800
	I0501 04:16:59.586701    4352 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0501 04:16:59.586701    4352 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_05_01T04_11_04_0700
	I0501 04:16:59.586756    4352 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0
	I0501 04:16:59.586756    4352 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0501 04:16:59.586756    4352 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0501 04:16:59.586756    4352 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0501 04:16:59.586814    4352 command_runner.go:130] > CreationTimestamp:  Wed, 01 May 2024 04:11:04 +0000
	I0501 04:16:59.586814    4352 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0501 04:16:59.586814    4352 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0501 04:16:59.586814    4352 command_runner.go:130] > Unschedulable:      false
	I0501 04:16:59.586814    4352 command_runner.go:130] > Lease:
	I0501 04:16:59.586868    4352 command_runner.go:130] >   HolderIdentity:  multinode-289800-m03
	I0501 04:16:59.586868    4352 command_runner.go:130] >   AcquireTime:     <unset>
	I0501 04:16:59.586868    4352 command_runner.go:130] >   RenewTime:       Wed, 01 May 2024 04:12:05 +0000
	I0501 04:16:59.586868    4352 command_runner.go:130] > Conditions:
	I0501 04:16:59.586868    4352 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0501 04:16:59.586924    4352 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0501 04:16:59.586924    4352 command_runner.go:130] >   MemoryPressure   Unknown   Wed, 01 May 2024 04:11:11 +0000   Wed, 01 May 2024 04:12:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0501 04:16:59.586978    4352 command_runner.go:130] >   DiskPressure     Unknown   Wed, 01 May 2024 04:11:11 +0000   Wed, 01 May 2024 04:12:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0501 04:16:59.586978    4352 command_runner.go:130] >   PIDPressure      Unknown   Wed, 01 May 2024 04:11:11 +0000   Wed, 01 May 2024 04:12:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0501 04:16:59.586978    4352 command_runner.go:130] >   Ready            Unknown   Wed, 01 May 2024 04:11:11 +0000   Wed, 01 May 2024 04:12:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0501 04:16:59.587035    4352 command_runner.go:130] > Addresses:
	I0501 04:16:59.587035    4352 command_runner.go:130] >   InternalIP:  172.28.223.145
	I0501 04:16:59.587035    4352 command_runner.go:130] >   Hostname:    multinode-289800-m03
	I0501 04:16:59.587035    4352 command_runner.go:130] > Capacity:
	I0501 04:16:59.587035    4352 command_runner.go:130] >   cpu:                2
	I0501 04:16:59.587035    4352 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0501 04:16:59.587090    4352 command_runner.go:130] >   hugepages-2Mi:      0
	I0501 04:16:59.587090    4352 command_runner.go:130] >   memory:             2164264Ki
	I0501 04:16:59.587090    4352 command_runner.go:130] >   pods:               110
	I0501 04:16:59.587090    4352 command_runner.go:130] > Allocatable:
	I0501 04:16:59.587090    4352 command_runner.go:130] >   cpu:                2
	I0501 04:16:59.587148    4352 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0501 04:16:59.587148    4352 command_runner.go:130] >   hugepages-2Mi:      0
	I0501 04:16:59.587148    4352 command_runner.go:130] >   memory:             2164264Ki
	I0501 04:16:59.587148    4352 command_runner.go:130] >   pods:               110
	I0501 04:16:59.587148    4352 command_runner.go:130] > System Info:
	I0501 04:16:59.587216    4352 command_runner.go:130] >   Machine ID:                 7516764892cf41608a001e00e0cc7bb8
	I0501 04:16:59.587216    4352 command_runner.go:130] >   System UUID:                dc77ee49-027d-ec48-b8b1-154ba9e0a06a
	I0501 04:16:59.587216    4352 command_runner.go:130] >   Boot ID:                    bc9f9fd7-7b85-42f6-abac-952a5e1b37b8
	I0501 04:16:59.587216    4352 command_runner.go:130] >   Kernel Version:             5.10.207
	I0501 04:16:59.587216    4352 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0501 04:16:59.587216    4352 command_runner.go:130] >   Operating System:           linux
	I0501 04:16:59.587278    4352 command_runner.go:130] >   Architecture:               amd64
	I0501 04:16:59.587278    4352 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0501 04:16:59.587278    4352 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0501 04:16:59.587278    4352 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0501 04:16:59.587278    4352 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0501 04:16:59.587330    4352 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0501 04:16:59.587330    4352 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0501 04:16:59.587330    4352 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0501 04:16:59.587330    4352 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0501 04:16:59.587330    4352 command_runner.go:130] >   kube-system                 kindnet-4m5vg       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I0501 04:16:59.587432    4352 command_runner.go:130] >   kube-system                 kube-proxy-g8mbm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I0501 04:16:59.587432    4352 command_runner.go:130] > Allocated resources:
	I0501 04:16:59.587432    4352 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0501 04:16:59.587432    4352 command_runner.go:130] >   Resource           Requests   Limits
	I0501 04:16:59.587432    4352 command_runner.go:130] >   --------           --------   ------
	I0501 04:16:59.587432    4352 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0501 04:16:59.587487    4352 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0501 04:16:59.587487    4352 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0501 04:16:59.587487    4352 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0501 04:16:59.587487    4352 command_runner.go:130] > Events:
	I0501 04:16:59.587550    4352 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0501 04:16:59.587607    4352 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0501 04:16:59.587607    4352 command_runner.go:130] >   Normal  Starting                 16m                    kube-proxy       
	I0501 04:16:59.587607    4352 command_runner.go:130] >   Normal  Starting                 5m52s                  kube-proxy       
	I0501 04:16:59.587607    4352 command_runner.go:130] >   Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	I0501 04:16:59.587740    4352 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x2 over 16m)      kubelet          Node multinode-289800-m03 status is now: NodeHasSufficientMemory
	I0501 04:16:59.587740    4352 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x2 over 16m)      kubelet          Node multinode-289800-m03 status is now: NodeHasNoDiskPressure
	I0501 04:16:59.587740    4352 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x2 over 16m)      kubelet          Node multinode-289800-m03 status is now: NodeHasSufficientPID
	I0501 04:16:59.587804    4352 command_runner.go:130] >   Normal  NodeReady                16m                    kubelet          Node multinode-289800-m03 status is now: NodeReady
	I0501 04:16:59.587804    4352 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m55s (x2 over 5m55s)  kubelet          Node multinode-289800-m03 status is now: NodeHasSufficientMemory
	I0501 04:16:59.587804    4352 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m55s (x2 over 5m55s)  kubelet          Node multinode-289800-m03 status is now: NodeHasNoDiskPressure
	I0501 04:16:59.587857    4352 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m55s (x2 over 5m55s)  kubelet          Node multinode-289800-m03 status is now: NodeHasSufficientPID
	I0501 04:16:59.587857    4352 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m55s                  kubelet          Updated Node Allocatable limit across pods
	I0501 04:16:59.587857    4352 command_runner.go:130] >   Normal  RegisteredNode           5m50s                  node-controller  Node multinode-289800-m03 event: Registered Node multinode-289800-m03 in Controller
	I0501 04:16:59.587916    4352 command_runner.go:130] >   Normal  NodeReady                5m48s                  kubelet          Node multinode-289800-m03 status is now: NodeReady
	I0501 04:16:59.587916    4352 command_runner.go:130] >   Normal  NodeNotReady             4m10s                  node-controller  Node multinode-289800-m03 status is now: NodeNotReady
	I0501 04:16:59.587916    4352 command_runner.go:130] >   Normal  RegisteredNode           64s                    node-controller  Node multinode-289800-m03 event: Registered Node multinode-289800-m03 in Controller
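
The m03 describe output shows the same pattern one step further along: every condition is Unknown with reason NodeStatusUnknown because the kubelet stopped renewing its heartbeat (the lease RenewTime above is stuck at 04:12:05), after which the node-controller applied the unreachable NoExecute/NoSchedule taints listed under Taints. A sketch of how to confirm the stale heartbeat, again assuming the context name matches the profile:

    # Hypothetical check of the node heartbeat lease; kube-node-lease is the
    # standard namespace for per-node Lease objects.
    kubectl --context multinode-289800 -n kube-node-lease get lease multinode-289800-m03 \
      -o jsonpath='{.spec.renewTime}'
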
	I0501 04:16:59.599470    4352 logs.go:123] Gathering logs for kube-scheduler [06f1f84bfde1] ...
	I0501 04:16:59.599470    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 06f1f84bfde1"
	I0501 04:16:59.629820    4352 command_runner.go:130] ! I0501 03:52:10.476758       1 serving.go:380] Generated self-signed cert in-memory
	I0501 04:16:59.629820    4352 command_runner.go:130] ! W0501 03:52:12.175400       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0501 04:16:59.630769    4352 command_runner.go:130] ! W0501 03:52:12.175551       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0501 04:16:59.630848    4352 command_runner.go:130] ! W0501 03:52:12.175587       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0501 04:16:59.630888    4352 command_runner.go:130] ! W0501 03:52:12.175678       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0501 04:16:59.630912    4352 command_runner.go:130] ! I0501 03:52:12.246151       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0501 04:16:59.630934    4352 command_runner.go:130] ! I0501 03:52:12.246312       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:16:59.630934    4352 command_runner.go:130] ! I0501 03:52:12.251800       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0501 04:16:59.630934    4352 command_runner.go:130] ! I0501 03:52:12.252170       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0501 04:16:59.630976    4352 command_runner.go:130] ! I0501 03:52:12.253709       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0501 04:16:59.630976    4352 command_runner.go:130] ! I0501 03:52:12.254160       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0501 04:16:59.630976    4352 command_runner.go:130] ! W0501 03:52:12.257352       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0501 04:16:59.631041    4352 command_runner.go:130] ! E0501 03:52:12.257411       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0501 04:16:59.631100    4352 command_runner.go:130] ! W0501 03:52:12.261549       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0501 04:16:59.631124    4352 command_runner.go:130] ! E0501 03:52:12.261670       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0501 04:16:59.631152    4352 command_runner.go:130] ! W0501 03:52:12.263856       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0501 04:16:59.631152    4352 command_runner.go:130] ! E0501 03:52:12.263906       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0501 04:16:59.631152    4352 command_runner.go:130] ! W0501 03:52:12.270038       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:59.631152    4352 command_runner.go:130] ! E0501 03:52:12.270597       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:59.631152    4352 command_runner.go:130] ! W0501 03:52:12.271080       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:59.631152    4352 command_runner.go:130] ! E0501 03:52:12.271309       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:59.631690    4352 command_runner.go:130] ! W0501 03:52:12.271808       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0501 04:16:59.631690    4352 command_runner.go:130] ! E0501 03:52:12.272098       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0501 04:16:59.631785    4352 command_runner.go:130] ! W0501 03:52:12.272396       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0501 04:16:59.631785    4352 command_runner.go:130] ! W0501 03:52:12.273177       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0501 04:16:59.631785    4352 command_runner.go:130] ! E0501 03:52:12.273396       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0501 04:16:59.631905    4352 command_runner.go:130] ! W0501 03:52:12.273765       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0501 04:16:59.631905    4352 command_runner.go:130] ! E0501 03:52:12.273964       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0501 04:16:59.632039    4352 command_runner.go:130] ! W0501 03:52:12.274273       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0501 04:16:59.632039    4352 command_runner.go:130] ! E0501 03:52:12.274741       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0501 04:16:59.632096    4352 command_runner.go:130] ! E0501 03:52:12.275083       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0501 04:16:59.632141    4352 command_runner.go:130] ! W0501 03:52:12.275448       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:59.632192    4352 command_runner.go:130] ! E0501 03:52:12.275752       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:59.632245    4352 command_runner.go:130] ! W0501 03:52:12.276841       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0501 04:16:59.632295    4352 command_runner.go:130] ! E0501 03:52:12.278071       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0501 04:16:59.632348    4352 command_runner.go:130] ! W0501 03:52:12.277438       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0501 04:16:59.632447    4352 command_runner.go:130] ! E0501 03:52:12.278555       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0501 04:16:59.632476    4352 command_runner.go:130] ! W0501 03:52:12.279824       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:59.632476    4352 command_runner.go:130] ! E0501 03:52:12.280326       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:59.632476    4352 command_runner.go:130] ! W0501 03:52:12.280272       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0501 04:16:59.632476    4352 command_runner.go:130] ! E0501 03:52:12.280893       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0501 04:16:59.632476    4352 command_runner.go:130] ! W0501 03:52:13.100723       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:59.632476    4352 command_runner.go:130] ! E0501 03:52:13.101238       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:59.632476    4352 command_runner.go:130] ! W0501 03:52:13.102451       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0501 04:16:59.632476    4352 command_runner.go:130] ! E0501 03:52:13.102804       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0501 04:16:59.632476    4352 command_runner.go:130] ! W0501 03:52:13.188414       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0501 04:16:59.632476    4352 command_runner.go:130] ! E0501 03:52:13.188662       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0501 04:16:59.632476    4352 command_runner.go:130] ! W0501 03:52:13.194299       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0501 04:16:59.632476    4352 command_runner.go:130] ! E0501 03:52:13.194526       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0501 04:16:59.632476    4352 command_runner.go:130] ! W0501 03:52:13.234721       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0501 04:16:59.632476    4352 command_runner.go:130] ! E0501 03:52:13.235310       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0501 04:16:59.632476    4352 command_runner.go:130] ! W0501 03:52:13.292208       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0501 04:16:59.632476    4352 command_runner.go:130] ! E0501 03:52:13.292830       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0501 04:16:59.632996    4352 command_runner.go:130] ! W0501 03:52:13.389881       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0501 04:16:59.633046    4352 command_runner.go:130] ! E0501 03:52:13.390057       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0501 04:16:59.633046    4352 command_runner.go:130] ! W0501 03:52:13.433548       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0501 04:16:59.633046    4352 command_runner.go:130] ! E0501 03:52:13.433622       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0501 04:16:59.633046    4352 command_runner.go:130] ! W0501 03:52:13.511617       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:59.633201    4352 command_runner.go:130] ! E0501 03:52:13.511761       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:59.633228    4352 command_runner.go:130] ! W0501 03:52:13.522760       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:59.633228    4352 command_runner.go:130] ! E0501 03:52:13.522812       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:59.633228    4352 command_runner.go:130] ! W0501 03:52:13.723200       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0501 04:16:59.633228    4352 command_runner.go:130] ! E0501 03:52:13.723365       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0501 04:16:59.633228    4352 command_runner.go:130] ! W0501 03:52:13.767195       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0501 04:16:59.633228    4352 command_runner.go:130] ! E0501 03:52:13.767262       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0501 04:16:59.633228    4352 command_runner.go:130] ! W0501 03:52:13.799936       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:59.633759    4352 command_runner.go:130] ! E0501 03:52:13.799967       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0501 04:16:59.633807    4352 command_runner.go:130] ! W0501 03:52:13.840187       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0501 04:16:59.633874    4352 command_runner.go:130] ! E0501 03:52:13.840304       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0501 04:16:59.633874    4352 command_runner.go:130] ! W0501 03:52:13.853401       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0501 04:16:59.633874    4352 command_runner.go:130] ! E0501 03:52:13.853454       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0501 04:16:59.633930    4352 command_runner.go:130] ! I0501 03:52:16.553388       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0501 04:16:59.633953    4352 command_runner.go:130] ! E0501 04:13:09.401188       1 run.go:74] "command failed" err="finished without leader elect"
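
Two things in the scheduler log above are expected noise rather than independent failures: the burst of reflector "forbidden" warnings at 03:52:12-03:52:13 occurs while the freshly started kube-scheduler's RBAC permissions are still propagating, and it stops once "Caches are synced" is logged at 03:52:16; the single error at 04:13:09 ("finished without leader elect") records this older scheduler instance exiting when the control plane went down for the restart. These lines were collected with the docker command shown above; to see only the warning/error lines, one could filter the same output (a sketch, reusing the container ID from this log):

    # Hedged example: klog lines begin with severity + date, e.g. W0501 / E0501,
    # and docker logs writes container stderr to stderr, hence the 2>&1.
    docker logs --tail 400 06f1f84bfde1 2>&1 | grep -E '^[EW]0501'
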
	I0501 04:16:59.645635    4352 logs.go:123] Gathering logs for coredns [8a0208aeafcf] ...
	I0501 04:16:59.645635    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a0208aeafcf"
	I0501 04:16:59.676846    4352 command_runner.go:130] > .:53
	I0501 04:16:59.676914    4352 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 93f4fe825695a41d5751760d66496ca9593f0bf8b9ec4239a6fbc33c30b6f070cfe5a066c5977052ab0ea80abbafa6083758e385108cb741ae48dc4b780ff0f9
	I0501 04:16:59.676914    4352 command_runner.go:130] > CoreDNS-1.11.1
	I0501 04:16:59.676914    4352 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0501 04:16:59.676914    4352 command_runner.go:130] > [INFO] 127.0.0.1:52159 - 35492 "HINFO IN 5750380281790413371.3552283498234348593. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.042351696s
	I0501 04:16:59.677647    4352 logs.go:123] Gathering logs for coredns [15c4496e3a9f] ...
	I0501 04:16:59.677749    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15c4496e3a9f"
	I0501 04:16:59.713229    4352 command_runner.go:130] > .:53
	I0501 04:16:59.713339    4352 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 93f4fe825695a41d5751760d66496ca9593f0bf8b9ec4239a6fbc33c30b6f070cfe5a066c5977052ab0ea80abbafa6083758e385108cb741ae48dc4b780ff0f9
	I0501 04:16:59.713339    4352 command_runner.go:130] > CoreDNS-1.11.1
	I0501 04:16:59.713339    4352 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0501 04:16:59.713339    4352 command_runner.go:130] > [INFO] 127.0.0.1:39552 - 50904 "HINFO IN 5304382971668517624.9064195615153089880. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.182051644s
	I0501 04:16:59.713568    4352 command_runner.go:130] > [INFO] 10.244.0.4:36718 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000271601s
	I0501 04:16:59.713642    4352 command_runner.go:130] > [INFO] 10.244.0.4:43708 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.179550625s
	I0501 04:16:59.713642    4352 command_runner.go:130] > [INFO] 10.244.1.2:58483 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144401s
	I0501 04:16:59.713642    4352 command_runner.go:130] > [INFO] 10.244.1.2:60628 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.0000807s
	I0501 04:16:59.713642    4352 command_runner.go:130] > [INFO] 10.244.0.4:37023 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.037009067s
	I0501 04:16:59.713642    4352 command_runner.go:130] > [INFO] 10.244.0.4:35134 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000257602s
	I0501 04:16:59.713642    4352 command_runner.go:130] > [INFO] 10.244.0.4:42831 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000330103s
	I0501 04:16:59.713642    4352 command_runner.go:130] > [INFO] 10.244.0.4:35030 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000223102s
	I0501 04:16:59.713642    4352 command_runner.go:130] > [INFO] 10.244.1.2:54088 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000207601s
	I0501 04:16:59.713642    4352 command_runner.go:130] > [INFO] 10.244.1.2:39978 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000230801s
	I0501 04:16:59.713642    4352 command_runner.go:130] > [INFO] 10.244.1.2:55944 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000162801s
	I0501 04:16:59.713642    4352 command_runner.go:130] > [INFO] 10.244.1.2:53350 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000088901s
	I0501 04:16:59.713642    4352 command_runner.go:130] > [INFO] 10.244.0.4:33705 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000251702s
	I0501 04:16:59.713642    4352 command_runner.go:130] > [INFO] 10.244.0.4:58457 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000202201s
	I0501 04:16:59.713642    4352 command_runner.go:130] > [INFO] 10.244.1.2:55547 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117201s
	I0501 04:16:59.713885    4352 command_runner.go:130] > [INFO] 10.244.1.2:52015 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000146501s
	I0501 04:16:59.713885    4352 command_runner.go:130] > [INFO] 10.244.0.4:59703 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000247901s
	I0501 04:16:59.713934    4352 command_runner.go:130] > [INFO] 10.244.0.4:43545 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000196701s
	I0501 04:16:59.713956    4352 command_runner.go:130] > [INFO] 10.244.1.2:36180 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000726s
	I0501 04:16:59.713956    4352 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0501 04:16:59.713956    4352 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0501 04:16:59.715255    4352 logs.go:123] Gathering logs for coredns [3e8d5ff9a9e4] ...
	I0501 04:16:59.715255    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8d5ff9a9e4"
	I0501 04:16:59.747892    4352 command_runner.go:130] > .:53
	I0501 04:16:59.748016    4352 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 93f4fe825695a41d5751760d66496ca9593f0bf8b9ec4239a6fbc33c30b6f070cfe5a066c5977052ab0ea80abbafa6083758e385108cb741ae48dc4b780ff0f9
	I0501 04:16:59.748016    4352 command_runner.go:130] > CoreDNS-1.11.1
	I0501 04:16:59.748016    4352 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0501 04:16:59.748016    4352 command_runner.go:130] > [INFO] 127.0.0.1:47823 - 12804 "HINFO IN 6026210510891441927.5093937837002421400. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.138242746s
	I0501 04:16:59.748016    4352 command_runner.go:130] > [INFO] 10.244.0.4:41822 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.208275106s
	I0501 04:16:59.748185    4352 command_runner.go:130] > [INFO] 10.244.0.4:42126 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.044846324s
	I0501 04:16:59.748185    4352 command_runner.go:130] > [INFO] 10.244.1.2:55497 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000133701s
	I0501 04:16:59.748185    4352 command_runner.go:130] > [INFO] 10.244.1.2:47095 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000068901s
	I0501 04:16:59.748353    4352 command_runner.go:130] > [INFO] 10.244.0.4:34122 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000644805s
	I0501 04:16:59.748353    4352 command_runner.go:130] > [INFO] 10.244.0.4:46878 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000252202s
	I0501 04:16:59.748353    4352 command_runner.go:130] > [INFO] 10.244.0.4:40098 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000136701s
	I0501 04:16:59.748353    4352 command_runner.go:130] > [INFO] 10.244.0.4:35873 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.03321874s
	I0501 04:16:59.748353    4352 command_runner.go:130] > [INFO] 10.244.1.2:36243 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.016690721s
	I0501 04:16:59.748452    4352 command_runner.go:130] > [INFO] 10.244.1.2:38582 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000648s
	I0501 04:16:59.748472    4352 command_runner.go:130] > [INFO] 10.244.1.2:43903 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000106801s
	I0501 04:16:59.748472    4352 command_runner.go:130] > [INFO] 10.244.1.2:34736 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000102201s
	I0501 04:16:59.748472    4352 command_runner.go:130] > [INFO] 10.244.0.4:54471 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000213002s
	I0501 04:16:59.748472    4352 command_runner.go:130] > [INFO] 10.244.0.4:34585 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000266702s
	I0501 04:16:59.748567    4352 command_runner.go:130] > [INFO] 10.244.1.2:55135 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142801s
	I0501 04:16:59.748567    4352 command_runner.go:130] > [INFO] 10.244.1.2:53626 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000732s
	I0501 04:16:59.748567    4352 command_runner.go:130] > [INFO] 10.244.0.4:57975 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000425703s
	I0501 04:16:59.748567    4352 command_runner.go:130] > [INFO] 10.244.0.4:51644 - 5 "PTR IN 1.208.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000121401s
	I0501 04:16:59.748567    4352 command_runner.go:130] > [INFO] 10.244.1.2:42930 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000246601s
	I0501 04:16:59.748671    4352 command_runner.go:130] > [INFO] 10.244.1.2:59495 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000199302s
	I0501 04:16:59.748696    4352 command_runner.go:130] > [INFO] 10.244.1.2:34672 - 5 "PTR IN 1.208.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000155401s
	I0501 04:16:59.748696    4352 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0501 04:16:59.748696    4352 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
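
The SIGTERM / "lameduck mode for 5s" pair at the end of both coredns logs above is CoreDNS's normal shutdown path, not a crash: on SIGTERM the health plugin deliberately keeps the server up for a grace period so in-flight clients can drain before the pod is replaced, which is consistent with these being the pre-restart coredns containers. The grace period comes from the lameduck setting of the health plugin in the Corefile; to inspect the live configuration (a sketch, with the context name assumed as before):

    # Hypothetical lookup of the live Corefile from the standard coredns ConfigMap.
    kubectl --context multinode-289800 -n kube-system get configmap coredns \
      -o jsonpath='{.data.Corefile}'
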
	I0501 04:16:59.750256    4352 logs.go:123] Gathering logs for kube-proxy [3efcc92f817e] ...
	I0501 04:16:59.750256    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3efcc92f817e"
	I0501 04:16:59.781847    4352 command_runner.go:130] ! I0501 04:15:45.132138       1 server_linux.go:69] "Using iptables proxy"
	I0501 04:16:59.782334    4352 command_runner.go:130] ! I0501 04:15:45.231202       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.28.209.199"]
	I0501 04:16:59.782334    4352 command_runner.go:130] ! I0501 04:15:45.502838       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0501 04:16:59.782334    4352 command_runner.go:130] ! I0501 04:15:45.506945       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0501 04:16:59.782334    4352 command_runner.go:130] ! I0501 04:15:45.506980       1 server_linux.go:165] "Using iptables Proxier"
	I0501 04:16:59.782462    4352 command_runner.go:130] ! I0501 04:15:45.527138       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0501 04:16:59.782506    4352 command_runner.go:130] ! I0501 04:15:45.530735       1 server.go:872] "Version info" version="v1.30.0"
	I0501 04:16:59.782506    4352 command_runner.go:130] ! I0501 04:15:45.530796       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:16:59.782506    4352 command_runner.go:130] ! I0501 04:15:45.533247       1 config.go:192] "Starting service config controller"
	I0501 04:16:59.782506    4352 command_runner.go:130] ! I0501 04:15:45.547850       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0501 04:16:59.782556    4352 command_runner.go:130] ! I0501 04:15:45.533551       1 config.go:101] "Starting endpoint slice config controller"
	I0501 04:16:59.782595    4352 command_runner.go:130] ! I0501 04:15:45.549105       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0501 04:16:59.782618    4352 command_runner.go:130] ! I0501 04:15:45.550003       1 config.go:319] "Starting node config controller"
	I0501 04:16:59.782618    4352 command_runner.go:130] ! I0501 04:15:45.550016       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0501 04:16:59.782618    4352 command_runner.go:130] ! I0501 04:15:45.650245       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0501 04:16:59.782618    4352 command_runner.go:130] ! I0501 04:15:45.650488       1 shared_informer.go:320] Caches are synced for node config
	I0501 04:16:59.782618    4352 command_runner.go:130] ! I0501 04:15:45.650691       1 shared_informer.go:320] Caches are synced for service config
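
This kube-proxy instance came up cleanly after the restart: it found no IPv6 iptables support, fell back to single-stack IPv4 with the iptables proxier, set route_localnet=1 so NodePorts answer on localhost, and synced all three config caches within about half a second. If that sysctl side effect ever needs checking on the node itself, one hedged way in is the minikube SSH helper (profile name taken from this log; the exact sysctl path is inferred from the message above):

    # Hypothetical check; the kube-proxy log above says it set this to 1.
    minikube -p multinode-289800 ssh -- sudo sysctl net.ipv4.conf.all.route_localnet
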
	I0501 04:16:59.784841    4352 logs.go:123] Gathering logs for kube-scheduler [eaf69fce5ee3] ...
	I0501 04:16:59.784841    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eaf69fce5ee3"
	I0501 04:16:59.811849    4352 command_runner.go:130] ! I0501 04:15:39.300694       1 serving.go:380] Generated self-signed cert in-memory
	I0501 04:16:59.811849    4352 command_runner.go:130] ! W0501 04:15:42.419811       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0501 04:16:59.811849    4352 command_runner.go:130] ! W0501 04:15:42.419988       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0501 04:16:59.811849    4352 command_runner.go:130] ! W0501 04:15:42.420417       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0501 04:16:59.811849    4352 command_runner.go:130] ! W0501 04:15:42.420580       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0501 04:16:59.811849    4352 command_runner.go:130] ! I0501 04:15:42.513199       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0501 04:16:59.811849    4352 command_runner.go:130] ! I0501 04:15:42.513509       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:16:59.811849    4352 command_runner.go:130] ! I0501 04:15:42.517575       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0501 04:16:59.811849    4352 command_runner.go:130] ! I0501 04:15:42.517756       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0501 04:16:59.812842    4352 command_runner.go:130] ! I0501 04:15:42.519360       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0501 04:16:59.812842    4352 command_runner.go:130] ! I0501 04:15:42.519606       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0501 04:16:59.812842    4352 command_runner.go:130] ! I0501 04:15:42.619527       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0501 04:16:59.814835    4352 logs.go:123] Gathering logs for kube-controller-manager [66a1b89e6733] ...
	I0501 04:16:59.814835    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a1b89e6733"
	I0501 04:16:59.844871    4352 command_runner.go:130] ! I0501 04:15:39.740014       1 serving.go:380] Generated self-signed cert in-memory
	I0501 04:16:59.845256    4352 command_runner.go:130] ! I0501 04:15:40.254324       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0501 04:16:59.845256    4352 command_runner.go:130] ! I0501 04:15:40.254368       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:16:59.845256    4352 command_runner.go:130] ! I0501 04:15:40.263842       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0501 04:16:59.845256    4352 command_runner.go:130] ! I0501 04:15:40.264273       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0501 04:16:59.845256    4352 command_runner.go:130] ! I0501 04:15:40.265102       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0501 04:16:59.845256    4352 command_runner.go:130] ! I0501 04:15:40.265435       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0501 04:16:59.845256    4352 command_runner.go:130] ! I0501 04:15:44.420436       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0501 04:16:59.845407    4352 command_runner.go:130] ! I0501 04:15:44.421597       1 controllermanager.go:759] "Started controller" controller="serviceaccount-token-controller"
	I0501 04:16:59.845407    4352 command_runner.go:130] ! I0501 04:15:44.430683       1 controllermanager.go:759] "Started controller" controller="persistentvolume-protection-controller"
	I0501 04:16:59.845487    4352 command_runner.go:130] ! I0501 04:15:44.430949       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0501 04:16:59.845487    4352 command_runner.go:130] ! I0501 04:15:44.431056       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0501 04:16:59.845487    4352 command_runner.go:130] ! I0501 04:15:44.437281       1 controllermanager.go:759] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0501 04:16:59.845487    4352 command_runner.go:130] ! I0501 04:15:44.440408       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0501 04:16:59.845487    4352 command_runner.go:130] ! I0501 04:15:44.437711       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0501 04:16:59.845487    4352 command_runner.go:130] ! I0501 04:15:44.440933       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0501 04:16:59.845487    4352 command_runner.go:130] ! I0501 04:15:44.450877       1 controllermanager.go:759] "Started controller" controller="replicationcontroller-controller"
	I0501 04:16:59.845487    4352 command_runner.go:130] ! I0501 04:15:44.452935       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0501 04:16:59.845487    4352 command_runner.go:130] ! I0501 04:15:44.452958       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0501 04:16:59.845487    4352 command_runner.go:130] ! I0501 04:15:44.458231       1 controllermanager.go:759] "Started controller" controller="serviceaccount-controller"
	I0501 04:16:59.845487    4352 command_runner.go:130] ! I0501 04:15:44.458525       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0501 04:16:59.845487    4352 command_runner.go:130] ! I0501 04:15:44.458548       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0501 04:16:59.845487    4352 command_runner.go:130] ! I0501 04:15:44.467611       1 controllermanager.go:759] "Started controller" controller="token-cleaner-controller"
	I0501 04:16:59.845487    4352 command_runner.go:130] ! I0501 04:15:44.468036       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0501 04:16:59.845487    4352 command_runner.go:130] ! I0501 04:15:44.468093       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0501 04:16:59.845487    4352 command_runner.go:130] ! I0501 04:15:44.468107       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0501 04:16:59.845487    4352 command_runner.go:130] ! I0501 04:15:44.484825       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0501 04:16:59.845812    4352 command_runner.go:130] ! I0501 04:15:44.484856       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0501 04:16:59.845812    4352 command_runner.go:130] ! I0501 04:15:44.484892       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0501 04:16:59.845849    4352 command_runner.go:130] ! I0501 04:15:44.485128       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0501 04:16:59.845849    4352 command_runner.go:130] ! I0501 04:15:44.485186       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0501 04:16:59.845849    4352 command_runner.go:130] ! I0501 04:15:44.485221       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0501 04:16:59.845849    4352 command_runner.go:130] ! I0501 04:15:44.485229       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0501 04:16:59.845849    4352 command_runner.go:130] ! I0501 04:15:44.485246       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0501 04:16:59.845849    4352 command_runner.go:130] ! I0501 04:15:44.485322       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0501 04:16:59.845849    4352 command_runner.go:130] ! I0501 04:15:44.488601       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0501 04:16:59.846024    4352 command_runner.go:130] ! I0501 04:15:44.488943       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0501 04:16:59.846024    4352 command_runner.go:130] ! I0501 04:15:44.488958       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0501 04:16:59.846024    4352 command_runner.go:130] ! I0501 04:15:44.488985       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0501 04:16:59.846024    4352 command_runner.go:130] ! I0501 04:15:44.523143       1 shared_informer.go:320] Caches are synced for tokens
	I0501 04:16:59.846100    4352 command_runner.go:130] ! I0501 04:15:44.644894       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0501 04:16:59.846100    4352 command_runner.go:130] ! I0501 04:15:44.645016       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0501 04:16:59.846100    4352 command_runner.go:130] ! I0501 04:15:44.645088       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0501 04:16:59.846164    4352 command_runner.go:130] ! I0501 04:15:44.645112       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0501 04:16:59.846164    4352 command_runner.go:130] ! I0501 04:15:44.646888       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0501 04:16:59.846164    4352 command_runner.go:130] ! W0501 04:15:44.646984       1 shared_informer.go:597] resyncPeriod 15h44m19.234758052s is smaller than resyncCheckPeriod 17h55m23.133739358s and the informer has already started. Changing it to 17h55m23.133739358s
	I0501 04:16:59.846164    4352 command_runner.go:130] ! W0501 04:15:44.647035       1 shared_informer.go:597] resyncPeriod 17h52m42.538614251s is smaller than resyncCheckPeriod 17h55m23.133739358s and the informer has already started. Changing it to 17h55m23.133739358s
	I0501 04:16:59.846270    4352 command_runner.go:130] ! I0501 04:15:44.647224       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0501 04:16:59.846270    4352 command_runner.go:130] ! I0501 04:15:44.647325       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0501 04:16:59.846270    4352 command_runner.go:130] ! I0501 04:15:44.647389       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0501 04:16:59.846270    4352 command_runner.go:130] ! I0501 04:15:44.647418       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0501 04:16:59.846270    4352 command_runner.go:130] ! I0501 04:15:44.647559       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0501 04:16:59.846270    4352 command_runner.go:130] ! I0501 04:15:44.647580       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0501 04:16:59.846270    4352 command_runner.go:130] ! I0501 04:15:44.648269       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0501 04:16:59.846270    4352 command_runner.go:130] ! I0501 04:15:44.648364       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0501 04:16:59.846270    4352 command_runner.go:130] ! I0501 04:15:44.648387       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0501 04:16:59.846270    4352 command_runner.go:130] ! I0501 04:15:44.648418       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0501 04:16:59.846270    4352 command_runner.go:130] ! I0501 04:15:44.648519       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0501 04:16:59.846270    4352 command_runner.go:130] ! I0501 04:15:44.648561       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0501 04:16:59.846270    4352 command_runner.go:130] ! I0501 04:15:44.648582       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0501 04:16:59.846270    4352 command_runner.go:130] ! I0501 04:15:44.648601       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0501 04:16:59.846270    4352 command_runner.go:130] ! I0501 04:15:44.648633       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0501 04:16:59.846270    4352 command_runner.go:130] ! I0501 04:15:44.648662       1 controllermanager.go:759] "Started controller" controller="resourcequota-controller"
	I0501 04:16:59.846270    4352 command_runner.go:130] ! I0501 04:15:44.649971       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0501 04:16:59.846270    4352 command_runner.go:130] ! I0501 04:15:44.649999       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0501 04:16:59.846270    4352 command_runner.go:130] ! I0501 04:15:44.650094       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0501 04:16:59.846270    4352 command_runner.go:130] ! I0501 04:15:44.658545       1 controllermanager.go:759] "Started controller" controller="daemonset-controller"
	I0501 04:16:59.846270    4352 command_runner.go:130] ! I0501 04:15:44.664070       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0501 04:16:59.846270    4352 command_runner.go:130] ! I0501 04:15:44.664109       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0501 04:16:59.846270    4352 command_runner.go:130] ! I0501 04:15:44.672333       1 controllermanager.go:759] "Started controller" controller="replicaset-controller"
	I0501 04:16:59.846270    4352 command_runner.go:130] ! I0501 04:15:44.672648       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0501 04:16:59.846270    4352 command_runner.go:130] ! I0501 04:15:44.673224       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0501 04:16:59.846270    4352 command_runner.go:130] ! E0501 04:15:44.680086       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0501 04:16:59.846270    4352 command_runner.go:130] ! I0501 04:15:44.680207       1 controllermanager.go:737] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0501 04:16:59.846270    4352 command_runner.go:130] ! I0501 04:15:44.686271       1 controllermanager.go:759] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.687804       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.688087       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.691064       1 controllermanager.go:759] "Started controller" controller="ephemeral-volume-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.694139       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.694154       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.697309       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.697808       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.698725       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.709020       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.709557       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.718572       1 controllermanager.go:759] "Started controller" controller="bootstrap-signer-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.718866       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.731386       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.731502       1 controllermanager.go:759] "Started controller" controller="node-lifecycle-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.731520       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.731794       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.732008       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.732024       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.732060       1 controllermanager.go:737] "Warning: skipping controller" controller="node-route-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.739601       1 controllermanager.go:759] "Started controller" controller="clusterrole-aggregation-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.741937       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.742091       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.751335       1 controllermanager.go:759] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.758177       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.767021       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.776399       1 controllermanager.go:759] "Started controller" controller="ttl-after-finished-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.777830       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.780031       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.783346       1 controllermanager.go:759] "Started controller" controller="endpointslice-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.784386       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.784668       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.790586       1 controllermanager.go:759] "Started controller" controller="job-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.791028       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.791148       1 shared_informer.go:313] Waiting for caches to sync for job
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.795072       1 controllermanager.go:759] "Started controller" controller="deployment-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.795486       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.796321       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.806964       1 controllermanager.go:759] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.807399       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.808302       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.810677       1 controllermanager.go:759] "Started controller" controller="cronjob-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.811276       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.812128       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.814338       1 controllermanager.go:759] "Started controller" controller="persistentvolume-expander-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.814699       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.815465       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.818437       1 controllermanager.go:759] "Started controller" controller="taint-eviction-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.819004       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.818976       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.820305       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.820518       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.822359       1 controllermanager.go:759] "Started controller" controller="pod-garbage-collector-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.824878       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.825167       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0501 04:16:59.846843    4352 command_runner.go:130] ! I0501 04:15:44.835687       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:44.835705       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:44.835739       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:44.836623       1 controllermanager.go:759] "Started controller" controller="garbage-collector-controller"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! E0501 04:15:44.845522       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:44.845590       1 controllermanager.go:737] "Warning: skipping controller" controller="service-lb-controller"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:44.975590       1 controllermanager.go:759] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:44.975737       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:45.026863       1 controllermanager.go:759] "Started controller" controller="endpointslice-mirroring-controller"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:45.026966       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:45.026980       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:45.188029       1 controllermanager.go:759] "Started controller" controller="namespace-controller"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:45.191154       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:45.191606       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:45.234916       1 controllermanager.go:759] "Started controller" controller="ttl-controller"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:45.235592       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:45.235855       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:45.275946       1 controllermanager.go:759] "Started controller" controller="persistentvolume-binder-controller"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:45.276219       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:45.277151       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:45.277668       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.347039       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.347226       1 controllermanager.go:759] "Started controller" controller="node-ipam-controller"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.347657       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.347697       1 shared_informer.go:313] Waiting for caches to sync for node
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.351170       1 controllermanager.go:759] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.351453       1 controllermanager.go:737] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.351701       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.352658       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.355868       1 controllermanager.go:759] "Started controller" controller="endpoints-controller"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.356195       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.356581       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.373530       1 controllermanager.go:759] "Started controller" controller="disruption-controller"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.375966       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.376087       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.376099       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.381581       1 controllermanager.go:759] "Started controller" controller="statefulset-controller"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.387752       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.398512       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.398855       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.433745       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.433841       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.434861       1 shared_informer.go:320] Caches are synced for PV protection
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.437855       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-289800\" does not exist"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.438225       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-289800-m02\" does not exist"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.438314       1 shared_informer.go:320] Caches are synced for TTL
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.438445       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-289800-m03\" does not exist"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.438531       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.441880       1 shared_informer.go:320] Caches are synced for crt configmap
	I0501 04:16:59.847858    4352 command_runner.go:130] ! I0501 04:15:55.442281       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.448289       1 shared_informer.go:320] Caches are synced for node
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.448378       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.448532       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.448564       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.448615       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.452662       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.453060       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.453136       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.459094       1 shared_informer.go:320] Caches are synced for service account
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.465378       1 shared_informer.go:320] Caches are synced for daemon sets
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.468998       1 shared_informer.go:320] Caches are synced for PVC protection
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.476103       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.479405       1 shared_informer.go:320] Caches are synced for persistent volume
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.480400       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.485347       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.485423       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.485459       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.488987       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.489270       1 shared_informer.go:320] Caches are synced for attach detach
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.492066       1 shared_informer.go:320] Caches are synced for namespace
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.492447       1 shared_informer.go:320] Caches are synced for job
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.494972       1 shared_informer.go:320] Caches are synced for ephemeral
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.497059       1 shared_informer.go:320] Caches are synced for deployment
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.499153       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.499594       1 shared_informer.go:320] Caches are synced for stateful set
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.509506       1 shared_informer.go:320] Caches are synced for HPA
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.513444       1 shared_informer.go:320] Caches are synced for cronjob
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.517356       1 shared_informer.go:320] Caches are synced for expand
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.519269       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.521379       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.527109       1 shared_informer.go:320] Caches are synced for GC
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.533712       1 shared_informer.go:320] Caches are synced for taint
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.534052       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.562220       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-289800"
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.562294       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-289800-m02"
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.562374       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-289800-m03"
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.562434       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.574228       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.576283       1 shared_informer.go:320] Caches are synced for disruption
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.610948       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.488314ms"
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.611568       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="71.799µs"
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.619708       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="45.171745ms"
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.620238       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="472.596µs"
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.628824       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.650837       1 shared_informer.go:320] Caches are synced for resource quota
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.657374       1 shared_informer.go:320] Caches are synced for endpoint
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.685503       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:55.700006       1 shared_informer.go:320] Caches are synced for resource quota
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:56.136638       1 shared_informer.go:320] Caches are synced for garbage collector
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:56.136685       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:15:56.152886       1 shared_informer.go:320] Caches are synced for garbage collector
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:16:16.638494       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:16:35.670965       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.004646ms"
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:16:35.674472       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.702µs"
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:16:49.079199       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="127.703µs"
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:16:49.148697       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="24.735082ms"
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:16:49.149307       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="110.503µs"
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:16:49.187683       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.244247ms"
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:16:49.188221       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.9µs"
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:16:49.221273       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="25.255693ms"
	I0501 04:16:59.848849    4352 command_runner.go:130] ! I0501 04:16:49.221694       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="88.902µs"
	I0501 04:16:59.865835    4352 logs.go:123] Gathering logs for kube-controller-manager [4b62556f40be] ...
	I0501 04:16:59.865835    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b62556f40be"
	I0501 04:16:59.904905    4352 command_runner.go:130] ! I0501 03:52:09.899238       1 serving.go:380] Generated self-signed cert in-memory
	I0501 04:16:59.904905    4352 command_runner.go:130] ! I0501 03:52:10.399398       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0501 04:16:59.905408    4352 command_runner.go:130] ! I0501 03:52:10.399463       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:16:59.905408    4352 command_runner.go:130] ! I0501 03:52:10.408364       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0501 04:16:59.905408    4352 command_runner.go:130] ! I0501 03:52:10.409326       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0501 04:16:59.905408    4352 command_runner.go:130] ! I0501 03:52:10.409600       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0501 04:16:59.905408    4352 command_runner.go:130] ! I0501 03:52:10.409803       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0501 04:16:59.905491    4352 command_runner.go:130] ! I0501 03:52:15.177592       1 controllermanager.go:759] "Started controller" controller="serviceaccount-token-controller"
	I0501 04:16:59.905491    4352 command_runner.go:130] ! I0501 03:52:15.177638       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0501 04:16:59.905491    4352 command_runner.go:130] ! I0501 03:52:15.223373       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0501 04:16:59.905491    4352 command_runner.go:130] ! I0501 03:52:15.223482       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0501 04:16:59.905491    4352 command_runner.go:130] ! I0501 03:52:15.224504       1 controllermanager.go:759] "Started controller" controller="endpointslice-controller"
	I0501 04:16:59.905491    4352 command_runner.go:130] ! I0501 03:52:15.255847       1 controllermanager.go:759] "Started controller" controller="endpointslice-mirroring-controller"
	I0501 04:16:59.905491    4352 command_runner.go:130] ! I0501 03:52:15.268264       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0501 04:16:59.905491    4352 command_runner.go:130] ! I0501 03:52:15.268388       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0501 04:16:59.905491    4352 command_runner.go:130] ! I0501 03:52:15.282022       1 shared_informer.go:320] Caches are synced for tokens
	I0501 04:16:59.905491    4352 command_runner.go:130] ! I0501 03:52:15.318646       1 controllermanager.go:759] "Started controller" controller="ttl-controller"
	I0501 04:16:59.905491    4352 command_runner.go:130] ! I0501 03:52:15.318861       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0501 04:16:59.905491    4352 command_runner.go:130] ! I0501 03:52:15.319086       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0501 04:16:59.905491    4352 command_runner.go:130] ! I0501 03:52:15.319104       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0501 04:16:59.905491    4352 command_runner.go:130] ! I0501 03:52:15.319092       1 controllermanager.go:737] "Warning: skipping controller" controller="node-route-controller"
	I0501 04:16:59.905491    4352 command_runner.go:130] ! I0501 03:52:15.340327       1 controllermanager.go:759] "Started controller" controller="ttl-after-finished-controller"
	I0501 04:16:59.905491    4352 command_runner.go:130] ! I0501 03:52:15.340404       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0501 04:16:59.905491    4352 command_runner.go:130] ! I0501 03:52:15.340939       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0501 04:16:59.905491    4352 command_runner.go:130] ! I0501 03:52:15.388809       1 controllermanager.go:759] "Started controller" controller="namespace-controller"
	I0501 04:16:59.905491    4352 command_runner.go:130] ! I0501 03:52:15.389274       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0501 04:16:59.905491    4352 command_runner.go:130] ! I0501 03:52:15.389544       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0501 04:16:59.905491    4352 command_runner.go:130] ! I0501 03:52:15.409254       1 controllermanager.go:759] "Started controller" controller="disruption-controller"
	I0501 04:16:59.905491    4352 command_runner.go:130] ! I0501 03:52:15.409799       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0501 04:16:59.905491    4352 command_runner.go:130] ! I0501 03:52:15.410052       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0501 04:16:59.905491    4352 command_runner.go:130] ! I0501 03:52:15.410231       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0501 04:16:59.906108    4352 command_runner.go:130] ! I0501 03:52:15.430420       1 controllermanager.go:759] "Started controller" controller="token-cleaner-controller"
	I0501 04:16:59.906164    4352 command_runner.go:130] ! I0501 03:52:15.432551       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0501 04:16:59.906164    4352 command_runner.go:130] ! I0501 03:52:15.432922       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0501 04:16:59.906164    4352 command_runner.go:130] ! I0501 03:52:15.433117       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0501 04:16:59.906224    4352 command_runner.go:130] ! E0501 03:52:15.460293       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0501 04:16:59.906294    4352 command_runner.go:130] ! I0501 03:52:15.460569       1 controllermanager.go:737] "Warning: skipping controller" controller="service-lb-controller"
	I0501 04:16:59.906294    4352 command_runner.go:130] ! I0501 03:52:15.483810       1 controllermanager.go:759] "Started controller" controller="persistentvolume-binder-controller"
	I0501 04:16:59.906294    4352 command_runner.go:130] ! I0501 03:52:15.484552       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0501 04:16:59.906294    4352 command_runner.go:130] ! I0501 03:52:15.487659       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0501 04:16:59.906366    4352 command_runner.go:130] ! I0501 03:52:15.507112       1 controllermanager.go:759] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0501 04:16:59.906366    4352 command_runner.go:130] ! I0501 03:52:15.507311       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0501 04:16:59.906366    4352 command_runner.go:130] ! I0501 03:52:15.507323       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0501 04:16:59.906366    4352 command_runner.go:130] ! I0501 03:52:15.547225       1 controllermanager.go:759] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0501 04:16:59.906366    4352 command_runner.go:130] ! I0501 03:52:15.547300       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0501 04:16:59.906366    4352 command_runner.go:130] ! I0501 03:52:15.547313       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0501 04:16:59.906366    4352 command_runner.go:130] ! I0501 03:52:15.547413       1 controllermanager.go:737] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0501 04:16:59.906366    4352 command_runner.go:130] ! I0501 03:52:15.652954       1 controllermanager.go:759] "Started controller" controller="pod-garbage-collector-controller"
	I0501 04:16:59.906366    4352 command_runner.go:130] ! I0501 03:52:15.653222       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0501 04:16:59.906366    4352 command_runner.go:130] ! I0501 03:52:15.653240       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0501 04:16:59.906366    4352 command_runner.go:130] ! I0501 03:52:15.940199       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0501 04:16:59.906366    4352 command_runner.go:130] ! I0501 03:52:15.940364       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0501 04:16:59.906366    4352 command_runner.go:130] ! I0501 03:52:15.940714       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0501 04:16:59.906366    4352 command_runner.go:130] ! I0501 03:52:15.940771       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0501 04:16:59.906366    4352 command_runner.go:130] ! I0501 03:52:15.940787       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0501 04:16:59.906366    4352 command_runner.go:130] ! I0501 03:52:15.941029       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0501 04:16:59.906366    4352 command_runner.go:130] ! I0501 03:52:15.941118       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0501 04:16:59.906366    4352 command_runner.go:130] ! I0501 03:52:15.941275       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0501 04:16:59.906366    4352 command_runner.go:130] ! I0501 03:52:15.941300       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0501 04:16:59.906366    4352 command_runner.go:130] ! I0501 03:52:15.941320       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0501 04:16:59.906897    4352 command_runner.go:130] ! I0501 03:52:15.941344       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0501 04:16:59.906897    4352 command_runner.go:130] ! I0501 03:52:15.941368       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0501 04:16:59.906951    4352 command_runner.go:130] ! I0501 03:52:15.941386       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0501 04:16:59.906951    4352 command_runner.go:130] ! I0501 03:52:15.941421       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0501 04:16:59.907011    4352 command_runner.go:130] ! I0501 03:52:15.941561       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0501 04:16:59.907011    4352 command_runner.go:130] ! I0501 03:52:15.941606       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0501 04:16:59.907011    4352 command_runner.go:130] ! I0501 03:52:15.941627       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0501 04:16:59.907079    4352 command_runner.go:130] ! I0501 03:52:15.941813       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0501 04:16:59.907079    4352 command_runner.go:130] ! I0501 03:52:15.942150       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0501 04:16:59.907137    4352 command_runner.go:130] ! I0501 03:52:15.942270       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0501 04:16:59.907137    4352 command_runner.go:130] ! I0501 03:52:15.942319       1 controllermanager.go:759] "Started controller" controller="resourcequota-controller"
	I0501 04:16:59.907137    4352 command_runner.go:130] ! I0501 03:52:15.942400       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0501 04:16:59.907137    4352 command_runner.go:130] ! I0501 03:52:15.942767       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0501 04:16:59.907204    4352 command_runner.go:130] ! I0501 03:52:15.942791       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0501 04:16:59.907204    4352 command_runner.go:130] ! I0501 03:52:16.183841       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0501 04:16:59.907204    4352 command_runner.go:130] ! I0501 03:52:16.184178       1 controllermanager.go:759] "Started controller" controller="garbage-collector-controller"
	I0501 04:16:59.907204    4352 command_runner.go:130] ! I0501 03:52:16.187151       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0501 04:16:59.907204    4352 command_runner.go:130] ! I0501 03:52:16.187185       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0501 04:16:59.907276    4352 command_runner.go:130] ! I0501 03:52:16.436175       1 controllermanager.go:759] "Started controller" controller="replicaset-controller"
	I0501 04:16:59.907276    4352 command_runner.go:130] ! I0501 03:52:16.436331       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0501 04:16:59.907357    4352 command_runner.go:130] ! I0501 03:52:16.436346       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0501 04:16:59.907357    4352 command_runner.go:130] ! I0501 03:52:16.586198       1 controllermanager.go:759] "Started controller" controller="statefulset-controller"
	I0501 04:16:59.907357    4352 command_runner.go:130] ! I0501 03:52:16.586602       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0501 04:16:59.907357    4352 command_runner.go:130] ! I0501 03:52:16.586642       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0501 04:16:59.907357    4352 command_runner.go:130] ! I0501 03:52:16.736534       1 controllermanager.go:759] "Started controller" controller="clusterrole-aggregation-controller"
	I0501 04:16:59.907434    4352 command_runner.go:130] ! I0501 03:52:16.736573       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0501 04:16:59.907434    4352 command_runner.go:130] ! I0501 03:52:16.736609       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0501 04:16:59.907504    4352 command_runner.go:130] ! I0501 03:52:16.736694       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0501 04:16:59.907504    4352 command_runner.go:130] ! I0501 03:52:16.736706       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0501 04:16:59.907504    4352 command_runner.go:130] ! I0501 03:52:16.891482       1 controllermanager.go:759] "Started controller" controller="serviceaccount-controller"
	I0501 04:16:59.907575    4352 command_runner.go:130] ! I0501 03:52:16.891648       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0501 04:16:59.907575    4352 command_runner.go:130] ! I0501 03:52:16.891663       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0501 04:16:59.907575    4352 command_runner.go:130] ! I0501 03:52:17.047956       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0501 04:16:59.907643    4352 command_runner.go:130] ! I0501 03:52:17.050852       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0501 04:16:59.907643    4352 command_runner.go:130] ! I0501 03:52:17.050877       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0501 04:16:59.907643    4352 command_runner.go:130] ! I0501 03:52:17.050942       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0501 04:16:59.907643    4352 command_runner.go:130] ! I0501 03:52:17.050952       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0501 04:16:59.907717    4352 command_runner.go:130] ! I0501 03:52:17.051046       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0501 04:16:59.907717    4352 command_runner.go:130] ! I0501 03:52:17.051073       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0501 04:16:59.907717    4352 command_runner.go:130] ! I0501 03:52:17.051107       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0501 04:16:59.907781    4352 command_runner.go:130] ! I0501 03:52:17.051130       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0501 04:16:59.907781    4352 command_runner.go:130] ! I0501 03:52:17.051145       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0501 04:16:59.907781    4352 command_runner.go:130] ! I0501 03:52:17.051309       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0501 04:16:59.907840    4352 command_runner.go:130] ! I0501 03:52:17.051548       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0501 04:16:59.907840    4352 command_runner.go:130] ! I0501 03:52:17.051654       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0501 04:16:59.907840    4352 command_runner.go:130] ! I0501 03:52:17.186932       1 controllermanager.go:759] "Started controller" controller="bootstrap-signer-controller"
	I0501 04:16:59.907840    4352 command_runner.go:130] ! I0501 03:52:17.187092       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0501 04:16:59.908020    4352 command_runner.go:130] ! I0501 03:52:27.350786       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0501 04:16:59.908085    4352 command_runner.go:130] ! I0501 03:52:27.351166       1 controllermanager.go:759] "Started controller" controller="node-ipam-controller"
	I0501 04:16:59.908142    4352 command_runner.go:130] ! I0501 03:52:27.352026       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0501 04:16:59.908142    4352 command_runner.go:130] ! I0501 03:52:27.353715       1 shared_informer.go:313] Waiting for caches to sync for node
	I0501 04:16:59.908142    4352 command_runner.go:130] ! I0501 03:52:27.368884       1 controllermanager.go:759] "Started controller" controller="ephemeral-volume-controller"
	I0501 04:16:59.908194    4352 command_runner.go:130] ! I0501 03:52:27.369241       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0501 04:16:59.908194    4352 command_runner.go:130] ! I0501 03:52:27.369602       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0501 04:16:59.908194    4352 command_runner.go:130] ! I0501 03:52:27.424182       1 controllermanager.go:759] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0501 04:16:59.908244    4352 command_runner.go:130] ! I0501 03:52:27.424472       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0501 04:16:59.908244    4352 command_runner.go:130] ! I0501 03:52:27.436663       1 controllermanager.go:759] "Started controller" controller="replicationcontroller-controller"
	I0501 04:16:59.908244    4352 command_runner.go:130] ! I0501 03:52:27.437080       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0501 04:16:59.908244    4352 command_runner.go:130] ! I0501 03:52:27.437177       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0501 04:16:59.908244    4352 command_runner.go:130] ! I0501 03:52:27.448635       1 controllermanager.go:759] "Started controller" controller="job-controller"
	I0501 04:16:59.908244    4352 command_runner.go:130] ! I0501 03:52:27.449170       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0501 04:16:59.908325    4352 command_runner.go:130] ! I0501 03:52:27.449409       1 shared_informer.go:313] Waiting for caches to sync for job
	I0501 04:16:59.908325    4352 command_runner.go:130] ! I0501 03:52:27.475565       1 controllermanager.go:759] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0501 04:16:59.908357    4352 command_runner.go:130] ! I0501 03:52:27.476051       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0501 04:16:59.908357    4352 command_runner.go:130] ! I0501 03:52:27.476166       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0501 04:16:59.908357    4352 command_runner.go:130] ! I0501 03:52:27.479486       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0501 04:16:59.908433    4352 command_runner.go:130] ! I0501 03:52:27.479596       1 controllermanager.go:759] "Started controller" controller="node-lifecycle-controller"
	I0501 04:16:59.908464    4352 command_runner.go:130] ! I0501 03:52:27.479975       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0501 04:16:59.908464    4352 command_runner.go:130] ! I0501 03:52:27.480750       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0501 04:16:59.908464    4352 command_runner.go:130] ! I0501 03:52:27.480823       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0501 04:16:59.908464    4352 command_runner.go:130] ! E0501 03:52:27.482546       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0501 04:16:59.908534    4352 command_runner.go:130] ! I0501 03:52:27.483210       1 controllermanager.go:737] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0501 04:16:59.908534    4352 command_runner.go:130] ! I0501 03:52:27.495640       1 controllermanager.go:759] "Started controller" controller="persistentvolume-expander-controller"
	I0501 04:16:59.908534    4352 command_runner.go:130] ! I0501 03:52:27.495973       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0501 04:16:59.908534    4352 command_runner.go:130] ! I0501 03:52:27.496212       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0501 04:16:59.908534    4352 command_runner.go:130] ! I0501 03:52:27.512223       1 controllermanager.go:759] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0501 04:16:59.908625    4352 command_runner.go:130] ! I0501 03:52:27.512895       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0501 04:16:59.908625    4352 command_runner.go:130] ! I0501 03:52:27.513075       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0501 04:16:59.908666    4352 command_runner.go:130] ! I0501 03:52:27.514982       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0501 04:16:59.908666    4352 command_runner.go:130] ! I0501 03:52:27.515311       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0501 04:16:59.908666    4352 command_runner.go:130] ! I0501 03:52:27.515499       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0501 04:16:59.908739    4352 command_runner.go:130] ! I0501 03:52:27.526940       1 controllermanager.go:759] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0501 04:16:59.908739    4352 command_runner.go:130] ! I0501 03:52:27.527318       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0501 04:16:59.908770    4352 command_runner.go:130] ! I0501 03:52:27.527351       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0501 04:16:59.908770    4352 command_runner.go:130] ! I0501 03:52:27.647646       1 controllermanager.go:759] "Started controller" controller="cronjob-controller"
	I0501 04:16:59.908770    4352 command_runner.go:130] ! I0501 03:52:27.647752       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0501 04:16:59.908838    4352 command_runner.go:130] ! I0501 03:52:27.647825       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0501 04:16:59.908838    4352 command_runner.go:130] ! I0501 03:52:27.647836       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0501 04:16:59.908876    4352 command_runner.go:130] ! I0501 03:52:27.692531       1 controllermanager.go:759] "Started controller" controller="taint-eviction-controller"
	I0501 04:16:59.908876    4352 command_runner.go:130] ! I0501 03:52:27.692762       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0501 04:16:59.908957    4352 command_runner.go:130] ! I0501 03:52:27.693221       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0501 04:16:59.908982    4352 command_runner.go:130] ! I0501 03:52:27.693310       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0501 04:16:59.908982    4352 command_runner.go:130] ! I0501 03:52:27.846904       1 controllermanager.go:759] "Started controller" controller="endpoints-controller"
	I0501 04:16:59.909026    4352 command_runner.go:130] ! I0501 03:52:27.847065       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0501 04:16:59.909026    4352 command_runner.go:130] ! I0501 03:52:27.847083       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0501 04:16:59.909065    4352 command_runner.go:130] ! I0501 03:52:27.996304       1 controllermanager.go:759] "Started controller" controller="daemonset-controller"
	I0501 04:16:59.909065    4352 command_runner.go:130] ! I0501 03:52:27.996661       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0501 04:16:59.909065    4352 command_runner.go:130] ! I0501 03:52:27.996720       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0501 04:16:59.909065    4352 command_runner.go:130] ! I0501 03:52:28.149439       1 controllermanager.go:759] "Started controller" controller="deployment-controller"
	I0501 04:16:59.909065    4352 command_runner.go:130] ! I0501 03:52:28.149690       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0501 04:16:59.909152    4352 command_runner.go:130] ! I0501 03:52:28.149796       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0501 04:16:59.909183    4352 command_runner.go:130] ! I0501 03:52:28.194448       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0501 04:16:59.909183    4352 command_runner.go:130] ! I0501 03:52:28.194582       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0501 04:16:59.909183    4352 command_runner.go:130] ! I0501 03:52:28.346263       1 controllermanager.go:759] "Started controller" controller="persistentvolume-protection-controller"
	I0501 04:16:59.909183    4352 command_runner.go:130] ! I0501 03:52:28.351074       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0501 04:16:59.909262    4352 command_runner.go:130] ! I0501 03:52:28.351267       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0501 04:16:59.909262    4352 command_runner.go:130] ! I0501 03:52:28.389327       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0501 04:16:59.909262    4352 command_runner.go:130] ! I0501 03:52:28.399508       1 shared_informer.go:320] Caches are synced for expand
	I0501 04:16:59.909301    4352 command_runner.go:130] ! I0501 03:52:28.401911       1 shared_informer.go:320] Caches are synced for namespace
	I0501 04:16:59.909301    4352 command_runner.go:130] ! I0501 03:52:28.402772       1 shared_informer.go:320] Caches are synced for service account
	I0501 04:16:59.909301    4352 command_runner.go:130] ! I0501 03:52:28.414043       1 shared_informer.go:320] Caches are synced for crt configmap
	I0501 04:16:59.909351    4352 command_runner.go:130] ! I0501 03:52:28.415874       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0501 04:16:59.909391    4352 command_runner.go:130] ! I0501 03:52:28.427291       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0501 04:16:59.909391    4352 command_runner.go:130] ! I0501 03:52:28.436570       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0501 04:16:59.909415    4352 command_runner.go:130] ! I0501 03:52:28.437221       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0501 04:16:59.909415    4352 command_runner.go:130] ! I0501 03:52:28.437315       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0501 04:16:59.909415    4352 command_runner.go:130] ! I0501 03:52:28.440984       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0501 04:16:59.909415    4352 command_runner.go:130] ! I0501 03:52:28.447483       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0501 04:16:59.909486    4352 command_runner.go:130] ! I0501 03:52:28.447500       1 shared_informer.go:320] Caches are synced for endpoint
	I0501 04:16:59.909486    4352 command_runner.go:130] ! I0501 03:52:28.448218       1 shared_informer.go:320] Caches are synced for cronjob
	I0501 04:16:59.909523    4352 command_runner.go:130] ! I0501 03:52:28.451115       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0501 04:16:59.909523    4352 command_runner.go:130] ! I0501 03:52:28.451167       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0501 04:16:59.909562    4352 command_runner.go:130] ! I0501 03:52:28.451224       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0501 04:16:59.909562    4352 command_runner.go:130] ! I0501 03:52:28.451346       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0501 04:16:59.909562    4352 command_runner.go:130] ! I0501 03:52:28.451726       1 shared_informer.go:320] Caches are synced for deployment
	I0501 04:16:59.909562    4352 command_runner.go:130] ! I0501 03:52:28.451933       1 shared_informer.go:320] Caches are synced for job
	I0501 04:16:59.909562    4352 command_runner.go:130] ! I0501 03:52:28.451734       1 shared_informer.go:320] Caches are synced for PV protection
	I0501 04:16:59.909634    4352 command_runner.go:130] ! I0501 03:52:28.470928       1 shared_informer.go:320] Caches are synced for ephemeral
	I0501 04:16:59.909634    4352 command_runner.go:130] ! I0501 03:52:28.476835       1 shared_informer.go:320] Caches are synced for HPA
	I0501 04:16:59.909674    4352 command_runner.go:130] ! I0501 03:52:28.486851       1 shared_informer.go:320] Caches are synced for stateful set
	I0501 04:16:59.909674    4352 command_runner.go:130] ! I0501 03:52:28.487294       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0501 04:16:59.909674    4352 command_runner.go:130] ! I0501 03:52:28.507418       1 shared_informer.go:320] Caches are synced for PVC protection
	I0501 04:16:59.909719    4352 command_runner.go:130] ! I0501 03:52:28.510921       1 shared_informer.go:320] Caches are synced for disruption
	I0501 04:16:59.909719    4352 command_runner.go:130] ! I0501 03:52:28.537591       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0501 04:16:59.909719    4352 command_runner.go:130] ! I0501 03:52:28.575135       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0501 04:16:59.909719    4352 command_runner.go:130] ! I0501 03:52:28.595083       1 shared_informer.go:320] Caches are synced for resource quota
	I0501 04:16:59.909788    4352 command_runner.go:130] ! I0501 03:52:28.609954       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-289800\" does not exist"
	I0501 04:16:59.909825    4352 command_runner.go:130] ! I0501 03:52:28.621070       1 shared_informer.go:320] Caches are synced for TTL
	I0501 04:16:59.909825    4352 command_runner.go:130] ! I0501 03:52:28.625042       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0501 04:16:59.909863    4352 command_runner.go:130] ! I0501 03:52:28.628085       1 shared_informer.go:320] Caches are synced for attach detach
	I0501 04:16:59.909863    4352 command_runner.go:130] ! I0501 03:52:28.643871       1 shared_informer.go:320] Caches are synced for resource quota
	I0501 04:16:59.909863    4352 command_runner.go:130] ! I0501 03:52:28.653497       1 shared_informer.go:320] Caches are synced for GC
	I0501 04:16:59.909863    4352 command_runner.go:130] ! I0501 03:52:28.654871       1 shared_informer.go:320] Caches are synced for node
	I0501 04:16:59.909863    4352 command_runner.go:130] ! I0501 03:52:28.654996       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0501 04:16:59.909951    4352 command_runner.go:130] ! I0501 03:52:28.655710       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0501 04:16:59.909951    4352 command_runner.go:130] ! I0501 03:52:28.655972       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0501 04:16:59.909951    4352 command_runner.go:130] ! I0501 03:52:28.656192       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0501 04:16:59.909951    4352 command_runner.go:130] ! I0501 03:52:28.675109       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-289800" podCIDRs=["10.244.0.0/24"]
	I0501 04:16:59.909951    4352 command_runner.go:130] ! I0501 03:52:28.682120       1 shared_informer.go:320] Caches are synced for taint
	I0501 04:16:59.910028    4352 command_runner.go:130] ! I0501 03:52:28.682644       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0501 04:16:59.910028    4352 command_runner.go:130] ! I0501 03:52:28.682782       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-289800"
	I0501 04:16:59.910028    4352 command_runner.go:130] ! I0501 03:52:28.682855       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0501 04:16:59.910104    4352 command_runner.go:130] ! I0501 03:52:28.688787       1 shared_informer.go:320] Caches are synced for persistent volume
	I0501 04:16:59.910104    4352 command_runner.go:130] ! I0501 03:52:28.693874       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0501 04:16:59.910104    4352 command_runner.go:130] ! I0501 03:52:28.697526       1 shared_informer.go:320] Caches are synced for daemon sets
	I0501 04:16:59.910104    4352 command_runner.go:130] ! I0501 03:52:29.088696       1 shared_informer.go:320] Caches are synced for garbage collector
	I0501 04:16:59.910104    4352 command_runner.go:130] ! I0501 03:52:29.088746       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0501 04:16:59.910104    4352 command_runner.go:130] ! I0501 03:52:29.139257       1 shared_informer.go:320] Caches are synced for garbage collector
	I0501 04:16:59.910104    4352 command_runner.go:130] ! I0501 03:52:29.739066       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="528.452632ms"
	I0501 04:16:59.910104    4352 command_runner.go:130] ! I0501 03:52:29.796611       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="57.235573ms"
	I0501 04:16:59.910104    4352 command_runner.go:130] ! I0501 03:52:29.797135       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="429.196µs"
	I0501 04:16:59.910104    4352 command_runner.go:130] ! I0501 03:52:29.797745       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="61.4µs"
	I0501 04:16:59.910104    4352 command_runner.go:130] ! I0501 03:52:39.341653       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="93.1µs"
	I0501 04:16:59.910104    4352 command_runner.go:130] ! I0501 03:52:39.358462       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="77.3µs"
	I0501 04:16:59.910104    4352 command_runner.go:130] ! I0501 03:52:39.377150       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="79.9µs"
	I0501 04:16:59.910104    4352 command_runner.go:130] ! I0501 03:52:39.403208       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="64.2µs"
	I0501 04:16:59.910104    4352 command_runner.go:130] ! I0501 03:52:41.593793       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="63.7µs"
	I0501 04:16:59.910104    4352 command_runner.go:130] ! I0501 03:52:41.686793       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="54.969221ms"
	I0501 04:16:59.910104    4352 command_runner.go:130] ! I0501 03:52:41.713891       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="26.932914ms"
	I0501 04:16:59.910104    4352 command_runner.go:130] ! I0501 03:52:41.714840       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="58.4µs"
	I0501 04:16:59.910104    4352 command_runner.go:130] ! I0501 03:52:43.686562       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0501 04:16:59.910104    4352 command_runner.go:130] ! I0501 03:55:27.159233       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-289800-m02\" does not exist"
	I0501 04:16:59.910104    4352 command_runner.go:130] ! I0501 03:55:27.216693       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-289800-m02" podCIDRs=["10.244.1.0/24"]
	I0501 04:16:59.910104    4352 command_runner.go:130] ! I0501 03:55:28.718620       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-289800-m02"
	I0501 04:16:59.910104    4352 command_runner.go:130] ! I0501 03:55:50.611680       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:59.910104    4352 command_runner.go:130] ! I0501 03:56:17.356814       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="84.46504ms"
	I0501 04:16:59.910104    4352 command_runner.go:130] ! I0501 03:56:17.371366       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.143719ms"
	I0501 04:16:59.910646    4352 command_runner.go:130] ! I0501 03:56:17.372124       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="142.3µs"
	I0501 04:16:59.910646    4352 command_runner.go:130] ! I0501 03:56:17.379164       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.7µs"
	I0501 04:16:59.910646    4352 command_runner.go:130] ! I0501 03:56:19.725403       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.097702ms"
	I0501 04:16:59.910646    4352 command_runner.go:130] ! I0501 03:56:19.728196       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="2.611719ms"
	I0501 04:16:59.910646    4352 command_runner.go:130] ! I0501 03:56:19.839218       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.233167ms"
	I0501 04:16:59.910646    4352 command_runner.go:130] ! I0501 03:56:19.839355       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.1µs"
	I0501 04:16:59.910646    4352 command_runner.go:130] ! I0501 04:00:13.644614       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-289800-m03\" does not exist"
	I0501 04:16:59.910646    4352 command_runner.go:130] ! I0501 04:00:13.644755       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:59.910786    4352 command_runner.go:130] ! I0501 04:00:13.661934       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-289800-m03" podCIDRs=["10.244.2.0/24"]
	I0501 04:16:59.910786    4352 command_runner.go:130] ! I0501 04:00:13.802230       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-289800-m03"
	I0501 04:16:59.910841    4352 command_runner.go:130] ! I0501 04:00:36.640421       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:59.910841    4352 command_runner.go:130] ! I0501 04:08:13.948279       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:59.910841    4352 command_runner.go:130] ! I0501 04:10:57.898286       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:59.910952    4352 command_runner.go:130] ! I0501 04:11:04.117706       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:59.910952    4352 command_runner.go:130] ! I0501 04:11:04.120427       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-289800-m03\" does not exist"
	I0501 04:16:59.911015    4352 command_runner.go:130] ! I0501 04:11:04.128942       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-289800-m03" podCIDRs=["10.244.3.0/24"]
	I0501 04:16:59.911015    4352 command_runner.go:130] ! I0501 04:11:11.358226       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:59.911054    4352 command_runner.go:130] ! I0501 04:12:49.097072       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
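
The block above is the kube-controller-manager startup sequence: each controller registers, then blocks until its shared-informer caches are warm, which is what every "Waiting for caches to sync" / "Caches are synced" pair records. A minimal sketch of that same client-go pattern, assuming a reachable kubeconfig; all identifiers below are illustrative, not minikube's or kube-controller-manager's actual code:

    package main

    import (
        "fmt"
        "time"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load ~/.kube/config; an assumption for this sketch - any rest.Config works.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // One factory, shared caches; 10m is the periodic resync interval.
        factory := informers.NewSharedInformerFactory(client, 10*time.Minute)
        podInformer := factory.Core().V1().Pods().Informer()

        stop := make(chan struct{})
        defer close(stop)
        factory.Start(stop) // starts the informers' watch loops in goroutines

        fmt.Println("Waiting for caches to sync")
        if !cache.WaitForCacheSync(stop, podInformer.HasSynced) {
            panic("timed out waiting for caches to sync")
        }
        fmt.Println("Caches are synced")
        // A real controller would only start its reconcile workers past this point.
    }

Gating work on WaitForCacheSync matters because a controller that reconciles before its local store is complete would act on a partial view of the cluster, for example treating objects as orphaned merely because their owners have not been listed yet.
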
	I0501 04:16:59.930817    4352 logs.go:123] Gathering logs for kindnet [b7cae3f6b88b] ...
	I0501 04:16:59.930817    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b7cae3f6b88b"
	I0501 04:16:59.961646    4352 command_runner.go:130] ! I0501 04:15:45.341459       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0501 04:16:59.961646    4352 command_runner.go:130] ! I0501 04:15:45.342196       1 main.go:107] hostIP = 172.28.209.199
	I0501 04:16:59.962058    4352 command_runner.go:130] ! podIP = 172.28.209.199
	I0501 04:16:59.962058    4352 command_runner.go:130] ! I0501 04:15:45.343348       1 main.go:116] setting mtu 1500 for CNI 
	I0501 04:16:59.962058    4352 command_runner.go:130] ! I0501 04:15:45.343391       1 main.go:146] kindnetd IP family: "ipv4"
	I0501 04:16:59.962058    4352 command_runner.go:130] ! I0501 04:15:45.343412       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0501 04:16:59.962115    4352 command_runner.go:130] ! I0501 04:16:15.765193       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0501 04:16:59.962115    4352 command_runner.go:130] ! I0501 04:16:15.817499       1 main.go:223] Handling node with IPs: map[172.28.209.199:{}]
	I0501 04:16:59.962115    4352 command_runner.go:130] ! I0501 04:16:15.817549       1 main.go:227] handling current node
	I0501 04:16:59.962115    4352 command_runner.go:130] ! I0501 04:16:15.818026       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:59.962115    4352 command_runner.go:130] ! I0501 04:16:15.818042       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:59.962226    4352 command_runner.go:130] ! I0501 04:16:15.818289       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.28.219.162 Flags: [] Table: 0} 
	I0501 04:16:59.962226    4352 command_runner.go:130] ! I0501 04:16:15.818416       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:59.962270    4352 command_runner.go:130] ! I0501 04:16:15.818477       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:59.962270    4352 command_runner.go:130] ! I0501 04:16:15.818548       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.28.223.145 Flags: [] Table: 0} 
	I0501 04:16:59.962270    4352 command_runner.go:130] ! I0501 04:16:25.834949       1 main.go:223] Handling node with IPs: map[172.28.209.199:{}]
	I0501 04:16:59.962325    4352 command_runner.go:130] ! I0501 04:16:25.834995       1 main.go:227] handling current node
	I0501 04:16:59.962325    4352 command_runner.go:130] ! I0501 04:16:25.835008       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:59.962366    4352 command_runner.go:130] ! I0501 04:16:25.835016       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:59.962622    4352 command_runner.go:130] ! I0501 04:16:25.835192       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:59.962675    4352 command_runner.go:130] ! I0501 04:16:25.835220       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:59.962718    4352 command_runner.go:130] ! I0501 04:16:35.845752       1 main.go:223] Handling node with IPs: map[172.28.209.199:{}]
	I0501 04:16:59.962718    4352 command_runner.go:130] ! I0501 04:16:35.845835       1 main.go:227] handling current node
	I0501 04:16:59.962718    4352 command_runner.go:130] ! I0501 04:16:35.845848       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:59.962775    4352 command_runner.go:130] ! I0501 04:16:35.845856       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:59.962775    4352 command_runner.go:130] ! I0501 04:16:35.846322       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:59.962775    4352 command_runner.go:130] ! I0501 04:16:35.846423       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:59.962827    4352 command_runner.go:130] ! I0501 04:16:45.855212       1 main.go:223] Handling node with IPs: map[172.28.209.199:{}]
	I0501 04:16:59.962827    4352 command_runner.go:130] ! I0501 04:16:45.855323       1 main.go:227] handling current node
	I0501 04:16:59.962827    4352 command_runner.go:130] ! I0501 04:16:45.855339       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:59.962827    4352 command_runner.go:130] ! I0501 04:16:45.855347       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:59.962827    4352 command_runner.go:130] ! I0501 04:16:45.856266       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:59.962889    4352 command_runner.go:130] ! I0501 04:16:45.856305       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:16:59.962889    4352 command_runner.go:130] ! I0501 04:16:55.872191       1 main.go:223] Handling node with IPs: map[172.28.209.199:{}]
	I0501 04:16:59.962932    4352 command_runner.go:130] ! I0501 04:16:55.872239       1 main.go:227] handling current node
	I0501 04:16:59.962932    4352 command_runner.go:130] ! I0501 04:16:55.872253       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:16:59.962932    4352 command_runner.go:130] ! I0501 04:16:55.872260       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:16:59.963000    4352 command_runner.go:130] ! I0501 04:16:55.872517       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:16:59.963000    4352 command_runner.go:130] ! I0501 04:16:55.872553       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
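
The kindnet entries above show its reconcile loop: roughly every ten seconds (per the timestamps) it lists the nodes and, for each remote node, installs a route to that node's PodCIDR via the node's IP, hence the "Adding route {... Dst: 10.244.1.0/24 ... Gw: 172.28.219.162 ...}" lines. A minimal Linux-only sketch of programming one such route with the vishvananda/netlink package; the CIDR and gateway are values copied from the log, the rest is illustrative rather than kindnet's actual source:

    package main

    import (
        "log"
        "net"

        "github.com/vishvananda/netlink"
    )

    func main() {
        // Remote node's PodCIDR and node IP, copied from the log lines above.
        _, podCIDR, err := net.ParseCIDR("10.244.1.0/24")
        if err != nil {
            log.Fatal(err)
        }
        route := &netlink.Route{
            Dst: podCIDR,                       // traffic for the peer's pods...
            Gw:  net.ParseIP("172.28.219.162"), // ...is sent via the peer node's IP
        }
        // Replace (not Add) so re-running the loop on an unchanged node is a
        // no-op instead of an EEXIST error.
        if err := netlink.RouteReplace(route); err != nil {
            log.Fatal(err) // requires Linux and CAP_NET_ADMIN
        }
        log.Printf("Adding route %+v", route)
    }

Using RouteReplace keeps the periodic reconcile idempotent, which is why the same "Handling node" / "has CIDR" pairs can repeat every cycle without churning the routing table.
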
	I0501 04:16:59.965772    4352 logs.go:123] Gathering logs for Docker ...
	I0501 04:16:59.965772    4352 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0501 04:17:00.007260    4352 command_runner.go:130] > May 01 04:14:08 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0501 04:17:00.007260    4352 command_runner.go:130] > May 01 04:14:08 minikube cri-dockerd[222]: time="2024-05-01T04:14:08Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0501 04:17:00.007260    4352 command_runner.go:130] > May 01 04:14:08 minikube cri-dockerd[222]: time="2024-05-01T04:14:08Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0501 04:17:00.007260    4352 command_runner.go:130] > May 01 04:14:08 minikube cri-dockerd[222]: time="2024-05-01T04:14:08Z" level=info msg="Start docker client with request timeout 0s"
	I0501 04:17:00.007385    4352 command_runner.go:130] > May 01 04:14:08 minikube cri-dockerd[222]: time="2024-05-01T04:14:08Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0501 04:17:00.007385    4352 command_runner.go:130] > May 01 04:14:09 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0501 04:17:00.007385    4352 command_runner.go:130] > May 01 04:14:09 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0501 04:17:00.007385    4352 command_runner.go:130] > May 01 04:14:09 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0501 04:17:00.007385    4352 command_runner.go:130] > May 01 04:14:11 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0501 04:17:00.007385    4352 command_runner.go:130] > May 01 04:14:11 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0501 04:17:00.007481    4352 command_runner.go:130] > May 01 04:14:11 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0501 04:17:00.007481    4352 command_runner.go:130] > May 01 04:14:11 minikube cri-dockerd[414]: time="2024-05-01T04:14:11Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0501 04:17:00.007481    4352 command_runner.go:130] > May 01 04:14:11 minikube cri-dockerd[414]: time="2024-05-01T04:14:11Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0501 04:17:00.007481    4352 command_runner.go:130] > May 01 04:14:11 minikube cri-dockerd[414]: time="2024-05-01T04:14:11Z" level=info msg="Start docker client with request timeout 0s"
	I0501 04:17:00.007546    4352 command_runner.go:130] > May 01 04:14:11 minikube cri-dockerd[414]: time="2024-05-01T04:14:11Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0501 04:17:00.007546    4352 command_runner.go:130] > May 01 04:14:11 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0501 04:17:00.007546    4352 command_runner.go:130] > May 01 04:14:11 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0501 04:17:00.007546    4352 command_runner.go:130] > May 01 04:14:11 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0501 04:17:00.007546    4352 command_runner.go:130] > May 01 04:14:13 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0501 04:17:00.007637    4352 command_runner.go:130] > May 01 04:14:13 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0501 04:17:00.007667    4352 command_runner.go:130] > May 01 04:14:13 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0501 04:17:00.007667    4352 command_runner.go:130] > May 01 04:14:13 minikube cri-dockerd[423]: time="2024-05-01T04:14:13Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0501 04:17:00.007667    4352 command_runner.go:130] > May 01 04:14:13 minikube cri-dockerd[423]: time="2024-05-01T04:14:13Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0501 04:17:00.007667    4352 command_runner.go:130] > May 01 04:14:13 minikube cri-dockerd[423]: time="2024-05-01T04:14:13Z" level=info msg="Start docker client with request timeout 0s"
	I0501 04:17:00.007758    4352 command_runner.go:130] > May 01 04:14:13 minikube cri-dockerd[423]: time="2024-05-01T04:14:13Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0501 04:17:00.007790    4352 command_runner.go:130] > May 01 04:14:13 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0501 04:17:00.007790    4352 command_runner.go:130] > May 01 04:14:13 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0501 04:17:00.007790    4352 command_runner.go:130] > May 01 04:14:13 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0501 04:17:00.007790    4352 command_runner.go:130] > May 01 04:14:16 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0501 04:17:00.007790    4352 command_runner.go:130] > May 01 04:14:16 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0501 04:17:00.007790    4352 command_runner.go:130] > May 01 04:14:16 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0501 04:17:00.007790    4352 command_runner.go:130] > May 01 04:14:16 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0501 04:17:00.007790    4352 command_runner.go:130] > May 01 04:14:16 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0501 04:17:00.007790    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 systemd[1]: Starting Docker Application Container Engine...
	I0501 04:17:00.007790    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[651]: time="2024-05-01T04:14:59.653438562Z" level=info msg="Starting up"
	I0501 04:17:00.007790    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[651]: time="2024-05-01T04:14:59.657791992Z" level=info msg="containerd not running, starting managed containerd"
	I0501 04:17:00.007790    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[651]: time="2024-05-01T04:14:59.663198880Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=657
	I0501 04:17:00.007790    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.702542137Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0501 04:17:00.007790    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.732549261Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0501 04:17:00.007790    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.732711054Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0501 04:17:00.007790    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.732864148Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0501 04:17:00.007790    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.732947945Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0501 04:17:00.007790    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.734019203Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0501 04:17:00.007790    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.734463486Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0501 04:17:00.007790    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.735002764Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0501 04:17:00.007790    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.735178358Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0501 04:17:00.007790    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.735234755Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0501 04:17:00.007790    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.735254555Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0501 04:17:00.007790    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.735695937Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0501 04:17:00.007790    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.736590002Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0501 04:17:00.007790    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.739236298Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0501 04:17:00.007790    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.739286896Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0501 04:17:00.008356    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.739479489Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0501 04:17:00.008356    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.739575785Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0501 04:17:00.008356    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.740111064Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0501 04:17:00.008466    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.740186861Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0501 04:17:00.008466    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.740203361Z" level=info msg="metadata content store policy set" policy=shared
	I0501 04:17:00.008466    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.747848861Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0501 04:17:00.008466    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.747973456Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0501 04:17:00.008466    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748003155Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0501 04:17:00.008466    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748021254Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0501 04:17:00.008616    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748087351Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0501 04:17:00.008616    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748176348Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0501 04:17:00.008616    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748553033Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0501 04:17:00.008682    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748726426Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0501 04:17:00.008682    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748830822Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0501 04:17:00.008745    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748853521Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0501 04:17:00.008745    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748872121Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0501 04:17:00.008745    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748887020Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0501 04:17:00.008807    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748901420Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0501 04:17:00.008807    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748916819Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0501 04:17:00.008807    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748932318Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0501 04:17:00.008872    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748946618Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0501 04:17:00.008872    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748960717Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0501 04:17:00.008872    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748974817Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0501 04:17:00.008941    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.748996916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.008941    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749013215Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.008941    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749071613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.008941    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749094412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.008941    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749109411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.008941    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749127511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.008941    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749141410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.008941    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749156310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.008941    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749171209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.008941    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749188008Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.009107    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749210407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.009107    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749227507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.009107    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749241106Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.009179    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749261705Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0501 04:17:00.009179    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749287004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.009179    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749377501Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.009245    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749401900Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0501 04:17:00.009245    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749458198Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0501 04:17:00.009309    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749553894Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0501 04:17:00.009309    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749626691Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0501 04:17:00.009478    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749759886Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0501 04:17:00.009543    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749839283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749953278Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.749974077Z" level=info msg="NRI interface is disabled by configuration."
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.750421860Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.750811045Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.751024636Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:14:59 multinode-289800 dockerd[657]: time="2024-05-01T04:14:59.751103833Z" level=info msg="containerd successfully booted in 0.052926s"
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:15:00 multinode-289800 dockerd[651]: time="2024-05-01T04:15:00.725111442Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:15:00 multinode-289800 dockerd[651]: time="2024-05-01T04:15:00.993003995Z" level=info msg="Loading containers: start."
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:15:01 multinode-289800 dockerd[651]: time="2024-05-01T04:15:01.418709237Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:15:01 multinode-289800 dockerd[651]: time="2024-05-01T04:15:01.511990518Z" level=info msg="Loading containers: done."
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:15:01 multinode-289800 dockerd[651]: time="2024-05-01T04:15:01.539659513Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:15:01 multinode-289800 dockerd[651]: time="2024-05-01T04:15:01.540534438Z" level=info msg="Daemon has completed initialization"
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:15:01 multinode-289800 dockerd[651]: time="2024-05-01T04:15:01.598935417Z" level=info msg="API listen on [::]:2376"
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:15:01 multinode-289800 systemd[1]: Started Docker Application Container Engine.
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:15:01 multinode-289800 dockerd[651]: time="2024-05-01T04:15:01.599463032Z" level=info msg="API listen on /var/run/docker.sock"
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:15:27 multinode-289800 dockerd[651]: time="2024-05-01T04:15:27.764446334Z" level=info msg="Processing signal 'terminated'"
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:15:27 multinode-289800 systemd[1]: Stopping Docker Application Container Engine...
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:15:27 multinode-289800 dockerd[651]: time="2024-05-01T04:15:27.766325752Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:15:27 multinode-289800 dockerd[651]: time="2024-05-01T04:15:27.766547266Z" level=info msg="Daemon shutdown complete"
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:15:27 multinode-289800 dockerd[651]: time="2024-05-01T04:15:27.766599570Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:15:27 multinode-289800 dockerd[651]: time="2024-05-01T04:15:27.766627071Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 systemd[1]: docker.service: Deactivated successfully.
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 systemd[1]: Stopped Docker Application Container Engine.
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 systemd[1]: Starting Docker Application Container Engine...
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:28.848356633Z" level=info msg="Starting up"
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:28.852105170Z" level=info msg="containerd not running, starting managed containerd"
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:28.856097222Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1051
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.886653253Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.918280652Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0501 04:17:00.009579    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.918435561Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0501 04:17:00.010124    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.918674977Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0501 04:17:00.010124    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.918835587Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0501 04:17:00.010188    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.918914392Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0501 04:17:00.010188    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.919007298Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0501 04:17:00.010188    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.919224411Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0501 04:17:00.010188    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.919342019Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0501 04:17:00.010188    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.919363920Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0501 04:17:00.010188    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.919374921Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0501 04:17:00.010328    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.919401422Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0501 04:17:00.010328    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.919522430Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0501 04:17:00.010328    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.922355909Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0501 04:17:00.010417    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.922472116Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0501 04:17:00.010417    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.922606725Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0501 04:17:00.010476    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.922701131Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0501 04:17:00.010476    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.922740333Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0501 04:17:00.010476    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.922844740Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0501 04:17:00.010476    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.922863441Z" level=info msg="metadata content store policy set" policy=shared
	I0501 04:17:00.010558    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.923199662Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0501 04:17:00.010558    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.923345572Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0501 04:17:00.010558    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.923371973Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0501 04:17:00.010625    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.923387074Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0501 04:17:00.010625    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.923416076Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0501 04:17:00.010625    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.923482380Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0501 04:17:00.010693    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.923717595Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0501 04:17:00.010732    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.923914208Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0501 04:17:00.010756    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924012314Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0501 04:17:00.010756    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924084218Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0501 04:17:00.010802    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924103120Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0501 04:17:00.010802    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924116520Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0501 04:17:00.010802    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924137922Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0501 04:17:00.010802    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924154823Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0501 04:17:00.010802    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924172824Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0501 04:17:00.010919    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924195925Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0501 04:17:00.010919    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924208026Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0501 04:17:00.010985    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924219327Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0501 04:17:00.010985    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924257229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.010985    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924272330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.011053    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924285031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.011053    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924297632Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.011053    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924325534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.011120    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924337534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.011120    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924348235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.011187    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924360536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.011187    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924373137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.011187    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924390538Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.011255    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924403039Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.011255    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924414139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.011315    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924426140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.011315    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924440741Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0501 04:17:00.011315    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924459642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.011382    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924475143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.014508    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924504745Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0501 04:17:00.014508    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924545247Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0501 04:17:00.014508    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924640554Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0501 04:17:00.014508    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924658655Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0501 04:17:00.014508    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924671555Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0501 04:17:00.014508    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924736560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0501 04:17:00.014508    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924890569Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0501 04:17:00.014508    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.924908370Z" level=info msg="NRI interface is disabled by configuration."
	I0501 04:17:00.014508    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.925252392Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0501 04:17:00.014508    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.925540810Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0501 04:17:00.014508    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.925606615Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0501 04:17:00.014508    4352 command_runner.go:130] > May 01 04:15:28 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:28.925720522Z" level=info msg="containerd successfully booted in 0.040328s"
	I0501 04:17:00.014508    4352 command_runner.go:130] > May 01 04:15:29 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:29.902259635Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0501 04:17:00.014508    4352 command_runner.go:130] > May 01 04:15:29 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:29.938734241Z" level=info msg="Loading containers: start."
	I0501 04:17:00.015064    4352 command_runner.go:130] > May 01 04:15:30 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:30.252276255Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0501 04:17:00.015064    4352 command_runner.go:130] > May 01 04:15:30 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:30.346319398Z" level=info msg="Loading containers: done."
	I0501 04:17:00.015112    4352 command_runner.go:130] > May 01 04:15:30 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:30.374198460Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0501 04:17:00.015112    4352 command_runner.go:130] > May 01 04:15:30 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:30.374439776Z" level=info msg="Daemon has completed initialization"
	I0501 04:17:00.015154    4352 command_runner.go:130] > May 01 04:15:30 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:30.424572544Z" level=info msg="API listen on [::]:2376"
	I0501 04:17:00.015154    4352 command_runner.go:130] > May 01 04:15:30 multinode-289800 dockerd[1045]: time="2024-05-01T04:15:30.424740154Z" level=info msg="API listen on /var/run/docker.sock"
	I0501 04:17:00.015154    4352 command_runner.go:130] > May 01 04:15:30 multinode-289800 systemd[1]: Started Docker Application Container Engine.
	I0501 04:17:00.015238    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0501 04:17:00.015238    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0501 04:17:00.015238    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0501 04:17:00.015238    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Start docker client with request timeout 0s"
	I0501 04:17:00.015238    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0501 04:17:00.015238    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Loaded network plugin cni"
	I0501 04:17:00.015360    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0501 04:17:00.015360    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0501 04:17:00.015402    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0501 04:17:00.015402    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0501 04:17:00.015402    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:31Z" level=info msg="Start cri-dockerd grpc backend"
	I0501 04:17:00.015402    4352 command_runner.go:130] > May 01 04:15:31 multinode-289800 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0501 04:17:00.015402    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:37Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-8w9hq_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"9d509d032dc607c6f771d62e39b125d9ec4ef121fdbac0798c929fe3f1662c88\""
	I0501 04:17:00.015402    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:37Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-fc5497c4f-cc6mk_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"79bf9ebb58e36ddfba4654e8de212598f75bb256849f4fa384c80d54954f68f5\""
	I0501 04:17:00.015402    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:37Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-x9zrw_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"baf9e690eb533d1d1d65dee3905f907946c145ab490fd4e62c3d724a0ba12193\""
	I0501 04:17:00.015402    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:37.812954162Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:17:00.015402    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:37.813140474Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:17:00.015402    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:37.813251281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.015402    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:37.813750813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.015402    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:37.908552604Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:17:00.015402    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:37.908932028Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:17:00.015402    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:37.908977330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.015402    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:37.909354354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.015402    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a8e27176eab83655d3f2a52c63326669ef8c796c68155930f53f421789d826f1/resolv.conf as [nameserver 172.28.208.1]"
	I0501 04:17:00.015402    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.022633513Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:17:00.015402    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.022720619Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:17:00.015402    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.022735220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.015402    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.024008700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.015402    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.032046108Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:17:00.015402    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.032104212Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:17:00.015937    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.032117713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.015984    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.032205718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.015984    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3fd53aa8d8f5d6402b604adf1c8c8ae2b5a8c80b90e94152f45e7cb16a71fe46/resolv.conf as [nameserver 172.28.208.1]"
	I0501 04:17:00.015984    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/51e331e75da779107616d5efa0d497152d9c85407f1c172c9ae536bcc2b22bad/resolv.conf as [nameserver 172.28.208.1]"
	I0501 04:17:00.015984    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6e076eed49263cec5b0b06bbaa425cab2bf4a4b0a05e6dfa37993b20dff5ed93/resolv.conf as [nameserver 172.28.208.1]"
	I0501 04:17:00.015984    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.361204210Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:17:00.015984    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.366294631Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:17:00.015984    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.366382437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.015984    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.366929671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.015984    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.427356590Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:17:00.015984    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.427966129Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:17:00.015984    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.428178542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.015984    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.428971092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.015984    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.563334483Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:17:00.015984    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.563717708Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:17:00.015984    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.568278296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.015984    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.568462908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.015984    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.619028803Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:17:00.015984    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.619423228Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:17:00.015984    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.619676644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.015984    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:38.620258481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.015984    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:42Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0501 04:17:00.016635    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.647452681Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:17:00.016635    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.648388440Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:17:00.016635    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.648417242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.016635    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.648703160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.016635    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.650660084Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:17:00.016635    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.650945902Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:17:00.016635    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.652733715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.016635    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.653556567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.016635    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.703188303Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:17:00.016635    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.703325612Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:17:00.016635    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.703348713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.016635    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:43.704951615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.016635    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/65bff4b6a8ae020fee0da9e1a818c4bac4d9a43a831eb7b5550b254c1f181ec7/resolv.conf as [nameserver 172.28.208.1]"
	I0501 04:17:00.016635    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9055d30512df38a5bce19ed5afcfdc450a7bd87a1eb169342c8bc7a42e81666f/resolv.conf as [nameserver 172.28.208.1]"
	I0501 04:17:00.016635    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.160153282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:17:00.016635    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.160628512Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:17:00.016635    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.160751120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.016635    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.161166246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.017174    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:15:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f79e484da66a15667f79326d8bae0a570ba551fd2e02926fd663a292f6b15752/resolv.conf as [nameserver 172.28.208.1]"
	I0501 04:17:00.017221    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.303671652Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:17:00.017221    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.303759357Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:17:00.017292    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.304597710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.017292    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.304856126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.017343    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.623383256Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:17:00.017343    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.623630372Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:17:00.017343    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.623719877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.017343    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 dockerd[1051]: time="2024-05-01T04:15:44.624154405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.017343    4352 command_runner.go:130] > May 01 04:16:15 multinode-289800 dockerd[1045]: time="2024-05-01T04:16:15.086534690Z" level=info msg="ignoring event" container=01deddefba52af094ad6ca083f5c2bee336ace05959dec5d34b15a9ba6cc2539 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0501 04:17:00.017343    4352 command_runner.go:130] > May 01 04:16:15 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:15.087315924Z" level=info msg="shim disconnected" id=01deddefba52af094ad6ca083f5c2bee336ace05959dec5d34b15a9ba6cc2539 namespace=moby
	I0501 04:17:00.017343    4352 command_runner.go:130] > May 01 04:16:15 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:15.087789544Z" level=warning msg="cleaning up after shim disconnected" id=01deddefba52af094ad6ca083f5c2bee336ace05959dec5d34b15a9ba6cc2539 namespace=moby
	I0501 04:17:00.017343    4352 command_runner.go:130] > May 01 04:16:15 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:15.089400515Z" level=info msg="cleaning up dead shim" namespace=moby
	I0501 04:17:00.017343    4352 command_runner.go:130] > May 01 04:16:29 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:29.233206077Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:17:00.017343    4352 command_runner.go:130] > May 01 04:16:29 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:29.233350185Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:17:00.017343    4352 command_runner.go:130] > May 01 04:16:29 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:29.233373086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.017343    4352 command_runner.go:130] > May 01 04:16:29 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:29.235465402Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.017343    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.458837761Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:17:00.017343    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.459864323Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:17:00.017343    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.464281891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.017343    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.464897329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.017343    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.543149980Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:17:00.017343    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.543283788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:17:00.017343    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.543320690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.017343    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.543548404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.017343    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.598181021Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:17:00.017343    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.598854262Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:17:00.017881    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.599065375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.017881    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:47.600816581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:16:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ba9a40d190b009b916e22db66996ed829a6cc973db25f55dae89d747629a546b/resolv.conf as [nameserver 172.28.208.1]"
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:16:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2c1e1e1d13f303dcd2ce93f0a883ff4415e684c864a3974a393b2aaba3328348/resolv.conf as [nameserver 172.28.208.1]"
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 cri-dockerd[1273]: time="2024-05-01T04:16:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b85f507755ab5fd65a5328f5567d969dd5f974c01ee4c5d8e38f03dc6ec900a2/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.282921443Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.283150129Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.283743193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.291296831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.360201124Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.360588900Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.360677995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.361100969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.575166498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.575320589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.575446381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 dockerd[1051]: time="2024-05-01T04:16:48.576248232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:51 multinode-289800 dockerd[1045]: 2024/05/01 04:16:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:51 multinode-289800 dockerd[1045]: 2024/05/01 04:16:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:51 multinode-289800 dockerd[1045]: 2024/05/01 04:16:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:51 multinode-289800 dockerd[1045]: 2024/05/01 04:16:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:51 multinode-289800 dockerd[1045]: 2024/05/01 04:16:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:51 multinode-289800 dockerd[1045]: 2024/05/01 04:16:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:51 multinode-289800 dockerd[1045]: 2024/05/01 04:16:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:51 multinode-289800 dockerd[1045]: 2024/05/01 04:16:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:51 multinode-289800 dockerd[1045]: 2024/05/01 04:16:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:51 multinode-289800 dockerd[1045]: 2024/05/01 04:16:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:51 multinode-289800 dockerd[1045]: 2024/05/01 04:16:51 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:52 multinode-289800 dockerd[1045]: 2024/05/01 04:16:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:52 multinode-289800 dockerd[1045]: 2024/05/01 04:16:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:52 multinode-289800 dockerd[1045]: 2024/05/01 04:16:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:55 multinode-289800 dockerd[1045]: 2024/05/01 04:16:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:55 multinode-289800 dockerd[1045]: 2024/05/01 04:16:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:55 multinode-289800 dockerd[1045]: 2024/05/01 04:16:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:55 multinode-289800 dockerd[1045]: 2024/05/01 04:16:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:55 multinode-289800 dockerd[1045]: 2024/05/01 04:16:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:55 multinode-289800 dockerd[1045]: 2024/05/01 04:16:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:55 multinode-289800 dockerd[1045]: 2024/05/01 04:16:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:55 multinode-289800 dockerd[1045]: 2024/05/01 04:16:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:55 multinode-289800 dockerd[1045]: 2024/05/01 04:16:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:55 multinode-289800 dockerd[1045]: 2024/05/01 04:16:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:56 multinode-289800 dockerd[1045]: 2024/05/01 04:16:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:56 multinode-289800 dockerd[1045]: 2024/05/01 04:16:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:56 multinode-289800 dockerd[1045]: 2024/05/01 04:16:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:56 multinode-289800 dockerd[1045]: 2024/05/01 04:16:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:59 multinode-289800 dockerd[1045]: 2024/05/01 04:16:59 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:59 multinode-289800 dockerd[1045]: 2024/05/01 04:16:59 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:59 multinode-289800 dockerd[1045]: 2024/05/01 04:16:59 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:59 multinode-289800 dockerd[1045]: 2024/05/01 04:16:59 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:59 multinode-289800 dockerd[1045]: 2024/05/01 04:16:59 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:59 multinode-289800 dockerd[1045]: 2024/05/01 04:16:59 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.017937    4352 command_runner.go:130] > May 01 04:16:59 multinode-289800 dockerd[1045]: 2024/05/01 04:16:59 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.019404    4352 command_runner.go:130] > May 01 04:16:59 multinode-289800 dockerd[1045]: 2024/05/01 04:16:59 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0501 04:17:00.019404    4352 command_runner.go:130] > May 01 04:16:59 multinode-289800 dockerd[1045]: 2024/05/01 04:16:59 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
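The wall of dockerd warnings above is Go's net/http complaining that WriteHeader was called more than once on the same response: the first call wins, and each extra call logs this exact "superfluous response.WriteHeader call" line with the caller's location. A minimal standalone reproduction of the standard-library behavior (not dockerd's code):

package main

import (
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK) // first call wins
		// The second call is ignored, and net/http logs:
		// "http: superfluous response.WriteHeader call from ..."
		w.WriteHeader(http.StatusInternalServerError)
	})
	log.Fatal(http.ListenAndServe("127.0.0.1:8080", nil))
}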
	I0501 04:17:00.055426    4352 logs.go:123] Gathering logs for container status ...
	I0501 04:17:00.056434    4352 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
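The gather step relies on the shell idiom `a || b`: try crictl first, and fall back to docker ps -a only if that fails. A hedged Go sketch of the same fallback pattern (minikube's real helper is ssh_runner; the command names below are the ones from the log):

package main

import (
	"fmt"
	"os/exec"
)

// runWithFallback mirrors the shell idiom `cmdA || cmdB`: run the first
// command, and only if it fails, run the second.
func runWithFallback(primary, fallback []string) ([]byte, error) {
	out, err := exec.Command(primary[0], primary[1:]...).CombinedOutput()
	if err == nil {
		return out, nil
	}
	return exec.Command(fallback[0], fallback[1:]...).CombinedOutput()
}

func main() {
	out, err := runWithFallback(
		[]string{"crictl", "ps", "-a"},
		[]string{"docker", "ps", "-a"},
	)
	fmt.Printf("err=%v\n%s", err, out)
}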
	I0501 04:17:00.124376    4352 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0501 04:17:00.124376    4352 command_runner.go:130] > 1efd236274eb6       8c811b4aec35f                                                                                         12 seconds ago       Running             busybox                   1                   b85f507755ab5       busybox-fc5497c4f-cc6mk
	I0501 04:17:00.124499    4352 command_runner.go:130] > b8a9b405d76be       cbb01a7bd410d                                                                                         12 seconds ago       Running             coredns                   1                   2c1e1e1d13f30       coredns-7db6d8ff4d-8w9hq
	I0501 04:17:00.124499    4352 command_runner.go:130] > 8a0208aeafcfe       cbb01a7bd410d                                                                                         12 seconds ago       Running             coredns                   1                   ba9a40d190b00       coredns-7db6d8ff4d-x9zrw
	I0501 04:17:00.124499    4352 command_runner.go:130] > 239a5dfd3ae52       6e38f40d628db                                                                                         31 seconds ago       Running             storage-provisioner       2                   9055d30512df3       storage-provisioner
	I0501 04:17:00.124499    4352 command_runner.go:130] > b7cae3f6b88bc       4950bb10b3f87                                                                                         About a minute ago   Running             kindnet-cni               1                   f79e484da66a1       kindnet-vcxkr
	I0501 04:17:00.124605    4352 command_runner.go:130] > 01deddefba52a       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   9055d30512df3       storage-provisioner
	I0501 04:17:00.124605    4352 command_runner.go:130] > 3efcc92f817ee       a0bf559e280cf                                                                                         About a minute ago   Running             kube-proxy                1                   65bff4b6a8ae0       kube-proxy-bp9zx
	I0501 04:17:00.124669    4352 command_runner.go:130] > 34892fdb68983       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   6e076eed49263       etcd-multinode-289800
	I0501 04:17:00.124750    4352 command_runner.go:130] > 18cd30f3ad28f       c42f13656d0b2                                                                                         About a minute ago   Running             kube-apiserver            0                   51e331e75da77       kube-apiserver-multinode-289800
	I0501 04:17:00.124750    4352 command_runner.go:130] > 66a1b89e6733f       c7aad43836fa5                                                                                         About a minute ago   Running             kube-controller-manager   1                   3fd53aa8d8f5d       kube-controller-manager-multinode-289800
	I0501 04:17:00.124810    4352 command_runner.go:130] > eaf69fce5ee36       259c8277fcbbc                                                                                         About a minute ago   Running             kube-scheduler            1                   a8e27176eab83       kube-scheduler-multinode-289800
	I0501 04:17:00.124844    4352 command_runner.go:130] > 237d3dab2c4e1       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago       Exited              busybox                   0                   79bf9ebb58e36       busybox-fc5497c4f-cc6mk
	I0501 04:17:00.124874    4352 command_runner.go:130] > 15c4496e3a9f0       cbb01a7bd410d                                                                                         24 minutes ago       Exited              coredns                   0                   baf9e690eb533       coredns-7db6d8ff4d-x9zrw
	I0501 04:17:00.124874    4352 command_runner.go:130] > 3e8d5ff9a9e4a       cbb01a7bd410d                                                                                         24 minutes ago       Exited              coredns                   0                   9d509d032dc60       coredns-7db6d8ff4d-8w9hq
	I0501 04:17:00.124874    4352 command_runner.go:130] > 6d5f881ef3987       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              24 minutes ago       Exited              kindnet-cni               0                   4df6ba73bcf68       kindnet-vcxkr
	I0501 04:17:00.124981    4352 command_runner.go:130] > 502684407b0cf       a0bf559e280cf                                                                                         24 minutes ago       Exited              kube-proxy                0                   79bb6a06ed527       kube-proxy-bp9zx
	I0501 04:17:00.124981    4352 command_runner.go:130] > 4b62556f40bec       c7aad43836fa5                                                                                         24 minutes ago       Exited              kube-controller-manager   0                   f72a1c5b5cdd6       kube-controller-manager-multinode-289800
	I0501 04:17:00.124981    4352 command_runner.go:130] > 06f1f84bfde17       259c8277fcbbc                                                                                         24 minutes ago       Exited              kube-scheduler            0                   479b3ec741bef       kube-scheduler-multinode-289800
	I0501 04:17:00.127772    4352 logs.go:123] Gathering logs for kubelet ...
	I0501 04:17:00.127772    4352 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0501 04:17:00.160771    4352 command_runner.go:130] > May 01 04:15:32 multinode-289800 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0501 04:17:00.161436    4352 command_runner.go:130] > May 01 04:15:32 multinode-289800 kubelet[1383]: I0501 04:15:32.875075    1383 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0501 04:17:00.161436    4352 command_runner.go:130] > May 01 04:15:32 multinode-289800 kubelet[1383]: I0501 04:15:32.875223    1383 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:17:00.161436    4352 command_runner.go:130] > May 01 04:15:32 multinode-289800 kubelet[1383]: I0501 04:15:32.876800    1383 server.go:927] "Client rotation is on, will bootstrap in background"
	I0501 04:17:00.161532    4352 command_runner.go:130] > May 01 04:15:32 multinode-289800 kubelet[1383]: E0501 04:15:32.877636    1383 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0501 04:17:00.161532    4352 command_runner.go:130] > May 01 04:15:32 multinode-289800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0501 04:17:00.161565    4352 command_runner.go:130] > May 01 04:15:32 multinode-289800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0501 04:17:00.161565    4352 command_runner.go:130] > May 01 04:15:33 multinode-289800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0501 04:17:00.161603    4352 command_runner.go:130] > May 01 04:15:33 multinode-289800 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0501 04:17:00.161635    4352 command_runner.go:130] > May 01 04:15:33 multinode-289800 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0501 04:17:00.161635    4352 command_runner.go:130] > May 01 04:15:33 multinode-289800 kubelet[1424]: I0501 04:15:33.593311    1424 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0501 04:17:00.161635    4352 command_runner.go:130] > May 01 04:15:33 multinode-289800 kubelet[1424]: I0501 04:15:33.595065    1424 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:17:00.161635    4352 command_runner.go:130] > May 01 04:15:33 multinode-289800 kubelet[1424]: I0501 04:15:33.597316    1424 server.go:927] "Client rotation is on, will bootstrap in background"
	I0501 04:17:00.161635    4352 command_runner.go:130] > May 01 04:15:33 multinode-289800 kubelet[1424]: E0501 04:15:33.597441    1424 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0501 04:17:00.161635    4352 command_runner.go:130] > May 01 04:15:33 multinode-289800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0501 04:17:00.161635    4352 command_runner.go:130] > May 01 04:15:33 multinode-289800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0501 04:17:00.161635    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
	I0501 04:17:00.161635    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0501 04:17:00.161635    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0501 04:17:00.161635    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 kubelet[1461]: I0501 04:15:34.327211    1461 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0501 04:17:00.161635    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 kubelet[1461]: I0501 04:15:34.327674    1461 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:17:00.161635    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 kubelet[1461]: I0501 04:15:34.328505    1461 server.go:927] "Client rotation is on, will bootstrap in background"
	I0501 04:17:00.161635    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 kubelet[1461]: E0501 04:15:34.328669    1461 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0501 04:17:00.161635    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0501 04:17:00.161635    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0501 04:17:00.161635    4352 command_runner.go:130] > May 01 04:15:34 multinode-289800 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
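The first three kubelet starts (PIDs 1383, 1424, 1461) all fail the same way: /etc/kubernetes/bootstrap-kubelet.conf does not exist yet, startup exits with status 1, and systemd's restart counter climbs until the client certificate and config are in place. A minimal sketch of the failing check, assuming nothing beyond os.Stat:

package main

import (
	"fmt"
	"os"
)

func main() {
	// kubelet stats the bootstrap kubeconfig at startup; if neither it nor a
	// client certificate exists yet, startup fails and systemd restarts it.
	const bootstrap = "/etc/kubernetes/bootstrap-kubelet.conf"
	if _, err := os.Stat(bootstrap); err != nil {
		fmt.Printf("failed to run Kubelet: unable to load bootstrap kubeconfig: %v\n", err)
		os.Exit(1) // systemd sees status=1/FAILURE and schedules a restart
	}
}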
	I0501 04:17:00.161635    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0501 04:17:00.161635    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.796836    1525 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0501 04:17:00.161635    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.797219    1525 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:17:00.161635    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.797640    1525 server.go:927] "Client rotation is on, will bootstrap in background"
	I0501 04:17:00.161635    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.799493    1525 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0501 04:17:00.161635    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.812278    1525 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0501 04:17:00.161635    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.846443    1525 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0501 04:17:00.161635    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.846668    1525 server.go:810] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0501 04:17:00.161635    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.847577    1525 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0501 04:17:00.161635    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.847671    1525 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-289800","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0501 04:17:00.162180    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.848600    1525 topology_manager.go:138] "Creating topology manager with none policy"
	I0501 04:17:00.162180    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.848674    1525 container_manager_linux.go:301] "Creating device plugin manager"
	I0501 04:17:00.162180    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.849347    1525 state_mem.go:36] "Initialized new in-memory state store"
	I0501 04:17:00.162180    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.851250    1525 kubelet.go:400] "Attempting to sync node with API server"
	I0501 04:17:00.162249    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.851388    1525 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0501 04:17:00.162249    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.851480    1525 kubelet.go:312] "Adding apiserver pod source"
	I0501 04:17:00.162249    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.852014    1525 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0501 04:17:00.162249    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.863109    1525 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="docker" version="26.0.2" apiVersion="v1"
	I0501 04:17:00.162249    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.868847    1525 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0501 04:17:00.162249    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: W0501 04:15:36.869729    1525 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0501 04:17:00.162249    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: W0501 04:15:36.870640    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-289800&limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:17:00.162249    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: E0501 04:15:36.871055    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-289800&limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:17:00.162249    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: W0501 04:15:36.869620    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:17:00.162249    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: E0501 04:15:36.872992    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:17:00.162249    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.872208    1525 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0501 04:17:00.162249    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.874268    1525 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0501 04:17:00.162249    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.872162    1525 server.go:1264] "Started kubelet"
	I0501 04:17:00.162249    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.876600    1525 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0501 04:17:00.162249    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.878390    1525 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
	I0501 04:17:00.162249    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.882899    1525 server.go:455] "Adding debug handlers to kubelet server"
	I0501 04:17:00.162249    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: E0501 04:15:36.888275    1525 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.28.209.199:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-289800.17cb4242948ce646  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-289800,UID:multinode-289800,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-289800,},FirstTimestamp:2024-05-01 04:15:36.872142406 +0000 UTC m=+0.158641226,LastTimestamp:2024-05-01 04:15:36.872142406 +0000 UTC m=+0.158641226,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-289800,}"
	I0501 04:17:00.162249    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.894478    1525 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0501 04:17:00.162249    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: E0501 04:15:36.899264    1525 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-289800?timeout=10s\": dial tcp 172.28.209.199:8443: connect: connection refused" interval="200ms"
	I0501 04:17:00.162249    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.900556    1525 factory.go:221] Registration of the systemd container factory successfully
	I0501 04:17:00.162249    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.900703    1525 factory.go:219] Registration of the crio container factory failed: Get "http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)crio%!F(MISSING)crio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0501 04:17:00.162249    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.900931    1525 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
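cAdvisor registers only the systemd container factory here: the crio and containerd factories fail because their unix sockets are absent on this Docker-runtime node. The same dial error is easy to reproduce with just the standard library:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// On a Docker-only node these sockets are absent, so Dial returns
	// "connect: no such file or directory", matching the factory errors above.
	for _, sock := range []string{"/var/run/crio/crio.sock", "/run/containerd/containerd.sock"} {
		conn, err := net.DialTimeout("unix", sock, time.Second)
		if err != nil {
			fmt.Println(err)
			continue
		}
		conn.Close()
	}
}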
	I0501 04:17:00.162810    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.909390    1525 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0501 04:17:00.162810    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: W0501 04:15:36.922744    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:17:00.162810    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: E0501 04:15:36.923300    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:17:00.162810    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.961054    1525 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0501 04:17:00.162810    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.961177    1525 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0501 04:17:00.162810    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.961311    1525 state_mem.go:36] "Initialized new in-memory state store"
	I0501 04:17:00.162960    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.962539    1525 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0501 04:17:00.162960    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.962613    1525 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0501 04:17:00.162960    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.962649    1525 policy_none.go:49] "None policy: Start"
	I0501 04:17:00.162960    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.965264    1525 reconciler.go:26] "Reconciler: start to sync state"
	I0501 04:17:00.162960    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.981258    1525 memory_manager.go:170] "Starting memorymanager" policy="None"
	I0501 04:17:00.162960    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.991286    1525 state_mem.go:35] "Initializing new in-memory state store"
	I0501 04:17:00.162960    4352 command_runner.go:130] > May 01 04:15:36 multinode-289800 kubelet[1525]: I0501 04:15:36.994410    1525 state_mem.go:75] "Updated machine memory state"
	I0501 04:17:00.162960    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.001037    1525 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0501 04:17:00.163094    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.005977    1525 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0501 04:17:00.163094    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.012301    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-289800"
	I0501 04:17:00.163154    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.018582    1525 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0501 04:17:00.163154    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.020477    1525 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0501 04:17:00.163202    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.020620    1525 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.28.209.199:8443: connect: connection refused" node="multinode-289800"
	I0501 04:17:00.163202    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.021548    1525 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-289800\" not found"
	I0501 04:17:00.163287    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.022495    1525 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0501 04:17:00.163287    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.022690    1525 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0501 04:17:00.163287    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.022715    1525 kubelet.go:2337] "Starting kubelet main sync loop"
	I0501 04:17:00.163335    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.022919    1525 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
	I0501 04:17:00.163425    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: W0501 04:15:37.028696    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:17:00.163425    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.028755    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:17:00.163425    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.045316    1525 iptables.go:577] "Could not set up iptables canary" err=<
	I0501 04:17:00.163425    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0501 04:17:00.163516    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0501 04:17:00.163516    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0501 04:17:00.163598    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0501 04:17:00.163644    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.102048    1525 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-289800?timeout=10s\": dial tcp 172.28.209.199:8443: connect: connection refused" interval="400ms"
	I0501 04:17:00.163644    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.124062    1525 topology_manager.go:215] "Topology Admit Handler" podUID="44d7830a7c97b8c7e460c0508d02be4e" podNamespace="kube-system" podName="kube-scheduler-multinode-289800"
	I0501 04:17:00.163644    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.125237    1525 topology_manager.go:215] "Topology Admit Handler" podUID="8b70cd8d31103a1cfca45e9856766786" podNamespace="kube-system" podName="kube-apiserver-multinode-289800"
	I0501 04:17:00.163644    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.126693    1525 topology_manager.go:215] "Topology Admit Handler" podUID="a17001fd2508d58fea9b1ae465b65254" podNamespace="kube-system" podName="kube-controller-manager-multinode-289800"
	I0501 04:17:00.163742    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.129279    1525 topology_manager.go:215] "Topology Admit Handler" podUID="b12e9024402f49cfac7440d6a2eaf42d" podNamespace="kube-system" podName="etcd-multinode-289800"
	I0501 04:17:00.163742    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.132159    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="479b3ec741befe4b1eddeb02949bcd198e18fa7dc4c196283e811e273e4edcbd"
	I0501 04:17:00.163742    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.132205    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d509d032dc607c6f771d62e39b125d9ec4ef121fdbac0798c929fe3f1662c88"
	I0501 04:17:00.163742    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.132217    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4df6ba73bcf683d21156e67827524b826f94059250b12cf08abd23da8345923a"
	I0501 04:17:00.163742    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.132236    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a338ea43bd9b03a0a56c5b614e36fd54cdd707fb4c2f5819a814e4ffd9bdcb65"
	I0501 04:17:00.163742    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.139102    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f72a1c5b5cdd65332e27f08445a684fc2d2f586ab1b8a2fb2c5c0dfc02b71165"
	I0501 04:17:00.163865    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.158602    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="976a9ff433ccbe7ec8d65d4f2797a7885430504d883ab65524674ad5ab989737"
	I0501 04:17:00.163865    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.174190    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79bb6a06ed527b42fe74673579e4a788915c66cd3717c52a344c73e0b7d12b34"
	I0501 04:17:00.163920    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.191042    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79bf9ebb58e36ddfba4654e8de212598f75bb256849f4fa384c80d54954f68f5"
	I0501 04:17:00.163920    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.208222    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="baf9e690eb533d1d1d65dee3905f907946c145ab490fd4e62c3d724a0ba12193"
	I0501 04:17:00.164008    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214646    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a17001fd2508d58fea9b1ae465b65254-ca-certs\") pod \"kube-controller-manager-multinode-289800\" (UID: \"a17001fd2508d58fea9b1ae465b65254\") " pod="kube-system/kube-controller-manager-multinode-289800"
	I0501 04:17:00.164031    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214710    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a17001fd2508d58fea9b1ae465b65254-k8s-certs\") pod \"kube-controller-manager-multinode-289800\" (UID: \"a17001fd2508d58fea9b1ae465b65254\") " pod="kube-system/kube-controller-manager-multinode-289800"
	I0501 04:17:00.164186    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214752    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a17001fd2508d58fea9b1ae465b65254-kubeconfig\") pod \"kube-controller-manager-multinode-289800\" (UID: \"a17001fd2508d58fea9b1ae465b65254\") " pod="kube-system/kube-controller-manager-multinode-289800"
	I0501 04:17:00.164238    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214812    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8b70cd8d31103a1cfca45e9856766786-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-289800\" (UID: \"8b70cd8d31103a1cfca45e9856766786\") " pod="kube-system/kube-apiserver-multinode-289800"
	I0501 04:17:00.164238    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214855    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/b12e9024402f49cfac7440d6a2eaf42d-etcd-data\") pod \"etcd-multinode-289800\" (UID: \"b12e9024402f49cfac7440d6a2eaf42d\") " pod="kube-system/etcd-multinode-289800"
	I0501 04:17:00.164238    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214875    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/44d7830a7c97b8c7e460c0508d02be4e-kubeconfig\") pod \"kube-scheduler-multinode-289800\" (UID: \"44d7830a7c97b8c7e460c0508d02be4e\") " pod="kube-system/kube-scheduler-multinode-289800"
	I0501 04:17:00.164346    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214899    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8b70cd8d31103a1cfca45e9856766786-ca-certs\") pod \"kube-apiserver-multinode-289800\" (UID: \"8b70cd8d31103a1cfca45e9856766786\") " pod="kube-system/kube-apiserver-multinode-289800"
	I0501 04:17:00.164346    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214925    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8b70cd8d31103a1cfca45e9856766786-k8s-certs\") pod \"kube-apiserver-multinode-289800\" (UID: \"8b70cd8d31103a1cfca45e9856766786\") " pod="kube-system/kube-apiserver-multinode-289800"
	I0501 04:17:00.164346    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214950    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a17001fd2508d58fea9b1ae465b65254-flexvolume-dir\") pod \"kube-controller-manager-multinode-289800\" (UID: \"a17001fd2508d58fea9b1ae465b65254\") " pod="kube-system/kube-controller-manager-multinode-289800"
	I0501 04:17:00.164466    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214973    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a17001fd2508d58fea9b1ae465b65254-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-289800\" (UID: \"a17001fd2508d58fea9b1ae465b65254\") " pod="kube-system/kube-controller-manager-multinode-289800"
	I0501 04:17:00.164466    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.214994    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/b12e9024402f49cfac7440d6a2eaf42d-etcd-certs\") pod \"etcd-multinode-289800\" (UID: \"b12e9024402f49cfac7440d6a2eaf42d\") " pod="kube-system/etcd-multinode-289800"
	I0501 04:17:00.164562    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.222614    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-289800"
	I0501 04:17:00.164562    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.223837    1525 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.28.209.199:8443: connect: connection refused" node="multinode-289800"
	I0501 04:17:00.164562    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.227891    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9971ef577f2f8634ce17f0dd1b9640fcf2695833e8dc85607abd2a82571746b8"
	I0501 04:17:00.164562    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.504248    1525 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-289800?timeout=10s\": dial tcp 172.28.209.199:8443: connect: connection refused" interval="800ms"
	I0501 04:17:00.164714    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: I0501 04:15:37.625269    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-289800"
	I0501 04:17:00.164714    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.625998    1525 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.28.209.199:8443: connect: connection refused" node="multinode-289800"
	I0501 04:17:00.164714    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: W0501 04:15:37.852634    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:17:00.164842    4352 command_runner.go:130] > May 01 04:15:37 multinode-289800 kubelet[1525]: E0501 04:15:37.852740    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:17:00.164890    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: W0501 04:15:38.063749    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:17:00.164890    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: E0501 04:15:38.063859    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:17:00.164890    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: I0501 04:15:38.260487    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e076eed49263cec5b0b06bbaa425cab2bf4a4b0a05e6dfa37993b20dff5ed93"
	I0501 04:17:00.164992    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: E0501 04:15:38.306204    1525 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-289800?timeout=10s\": dial tcp 172.28.209.199:8443: connect: connection refused" interval="1.6s"
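Across these retries the lease controller's interval doubles while the apiserver is still coming up: 200ms, 400ms, 800ms, then 1.6s. A minimal sketch of that doubling backoff, with a hypothetical tryEnsureLease stand-in and an assumed cap (the real controller's cap may differ):

package main

import (
	"errors"
	"fmt"
	"time"
)

// tryEnsureLease is a hypothetical stand-in for the controller's API call;
// it always fails here so the interval doubling is visible.
func tryEnsureLease() error { return errors.New("connection refused") }

func main() {
	interval := 200 * time.Millisecond
	const maxInterval = 7 * time.Second // assumed cap for this sketch
	for i := 0; i < 4; i++ {
		if err := tryEnsureLease(); err != nil {
			fmt.Printf("Failed to ensure lease exists, will retry; interval=%v err=%v\n", interval, err)
			time.Sleep(interval)
			interval *= 2
			if interval > maxInterval {
				interval = maxInterval
			}
		}
	}
}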
	I0501 04:17:00.164992    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: W0501 04:15:38.357883    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-289800&limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:17:00.164992    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: E0501 04:15:38.357983    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-289800&limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:17:00.164992    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: W0501 04:15:38.424248    1525 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:17:00.165107    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: E0501 04:15:38.424377    1525 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.28.209.199:8443: connect: connection refused
	I0501 04:17:00.165164    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: I0501 04:15:38.428960    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-289800"
	I0501 04:17:00.165164    4352 command_runner.go:130] > May 01 04:15:38 multinode-289800 kubelet[1525]: E0501 04:15:38.431040    1525 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.28.209.199:8443: connect: connection refused" node="multinode-289800"
	I0501 04:17:00.165164    4352 command_runner.go:130] > May 01 04:15:40 multinode-289800 kubelet[1525]: I0501 04:15:40.032371    1525 kubelet_node_status.go:73] "Attempting to register node" node="multinode-289800"
	I0501 04:17:00.165164    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.639150    1525 kubelet_node_status.go:112] "Node was previously registered" node="multinode-289800"
	I0501 04:17:00.165164    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.640030    1525 kubelet_node_status.go:76] "Successfully registered node" node="multinode-289800"
	I0501 04:17:00.165264    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.642970    1525 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0501 04:17:00.165264    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.644297    1525 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0501 04:17:00.165264    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.646032    1525 setters.go:580] "Node became not ready" node="multinode-289800" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-05-01T04:15:42Z","lastTransitionTime":"2024-05-01T04:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0501 04:17:00.165264    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.869832    1525 apiserver.go:52] "Watching apiserver"
	I0501 04:17:00.165403    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.875356    1525 topology_manager.go:215] "Topology Admit Handler" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-8w9hq"
	I0501 04:17:00.165403    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.875613    1525 topology_manager.go:215] "Topology Admit Handler" podUID="aba82e50-b8f8-40b4-b08a-6d045314d6b6" podNamespace="kube-system" podName="kube-proxy-bp9zx"
	I0501 04:17:00.165403    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.875753    1525 topology_manager.go:215] "Topology Admit Handler" podUID="0b91b14d-bed3-4889-b193-db53daccd395" podNamespace="kube-system" podName="coredns-7db6d8ff4d-x9zrw"
	I0501 04:17:00.165536    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.875936    1525 topology_manager.go:215] "Topology Admit Handler" podUID="72ef61d4-4437-40da-86e7-4d7eb386b6de" podNamespace="kube-system" podName="kindnet-vcxkr"
	I0501 04:17:00.165536    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.876061    1525 topology_manager.go:215] "Topology Admit Handler" podUID="b8d2a827-d9a6-419a-a076-c7695a16a2b5" podNamespace="kube-system" podName="storage-provisioner"
	I0501 04:17:00.165536    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.876192    1525 topology_manager.go:215] "Topology Admit Handler" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f" podNamespace="default" podName="busybox-fc5497c4f-cc6mk"
	I0501 04:17:00.165536    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.876527    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:17:00.165536    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.877384    1525 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-289800" podUID="96a8cf0b-45bc-4636-9264-a0da579b5fa8"
	I0501 04:17:00.165670    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.878678    1525 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-289800" podUID="a1b99f2b-8aed-4037-956a-13bde4551a72"
	I0501 04:17:00.165670    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.879595    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:17:00.165753    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.884364    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:17:00.165753    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.910944    1525 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0501 04:17:00.165796    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.938877    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/72ef61d4-4437-40da-86e7-4d7eb386b6de-xtables-lock\") pod \"kindnet-vcxkr\" (UID: \"72ef61d4-4437-40da-86e7-4d7eb386b6de\") " pod="kube-system/kindnet-vcxkr"
	I0501 04:17:00.165796    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.939029    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b8d2a827-d9a6-419a-a076-c7695a16a2b5-tmp\") pod \"storage-provisioner\" (UID: \"b8d2a827-d9a6-419a-a076-c7695a16a2b5\") " pod="kube-system/storage-provisioner"
	I0501 04:17:00.165796    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.939149    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aba82e50-b8f8-40b4-b08a-6d045314d6b6-xtables-lock\") pod \"kube-proxy-bp9zx\" (UID: \"aba82e50-b8f8-40b4-b08a-6d045314d6b6\") " pod="kube-system/kube-proxy-bp9zx"
	I0501 04:17:00.165930    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.939242    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/72ef61d4-4437-40da-86e7-4d7eb386b6de-cni-cfg\") pod \"kindnet-vcxkr\" (UID: \"72ef61d4-4437-40da-86e7-4d7eb386b6de\") " pod="kube-system/kindnet-vcxkr"
	I0501 04:17:00.166010    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.939318    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/72ef61d4-4437-40da-86e7-4d7eb386b6de-lib-modules\") pod \"kindnet-vcxkr\" (UID: \"72ef61d4-4437-40da-86e7-4d7eb386b6de\") " pod="kube-system/kindnet-vcxkr"
	I0501 04:17:00.166010    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.939427    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aba82e50-b8f8-40b4-b08a-6d045314d6b6-lib-modules\") pod \"kube-proxy-bp9zx\" (UID: \"aba82e50-b8f8-40b4-b08a-6d045314d6b6\") " pod="kube-system/kube-proxy-bp9zx"
	I0501 04:17:00.166075    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.940207    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:17:00.166119    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.940401    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume podName:e3a349e9-97d8-4bba-8eac-deff1948600a nodeName:}" failed. No retries permitted until 2024-05-01 04:15:43.440364296 +0000 UTC m=+6.726863016 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume") pod "coredns-7db6d8ff4d-8w9hq" (UID: "e3a349e9-97d8-4bba-8eac-deff1948600a") : object "kube-system"/"coredns" not registered
	I0501 04:17:00.166119    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.940680    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:17:00.166119    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.940822    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume podName:0b91b14d-bed3-4889-b193-db53daccd395 nodeName:}" failed. No retries permitted until 2024-05-01 04:15:43.440808324 +0000 UTC m=+6.727307144 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume") pod "coredns-7db6d8ff4d-x9zrw" (UID: "0b91b14d-bed3-4889-b193-db53daccd395") : object "kube-system"/"coredns" not registered
	I0501 04:17:00.166216    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.948736    1525 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-289800"
	I0501 04:17:00.166216    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: I0501 04:15:42.958916    1525 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-289800"
	I0501 04:17:00.166216    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.975690    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0501 04:17:00.166216    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.975737    1525 projected.go:200] Error preparing data for projected volume kube-api-access-4r64v for pod default/busybox-fc5497c4f-cc6mk: object "default"/"kube-root-ca.crt" not registered
	I0501 04:17:00.166348    4352 command_runner.go:130] > May 01 04:15:42 multinode-289800 kubelet[1525]: E0501 04:15:42.975832    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v podName:7f61e6ee-cf9a-4903-ba51-2a3b6804717f nodeName:}" failed. No retries permitted until 2024-05-01 04:15:43.475811436 +0000 UTC m=+6.762310156 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-4r64v" (UniqueName: "kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v") pod "busybox-fc5497c4f-cc6mk" (UID: "7f61e6ee-cf9a-4903-ba51-2a3b6804717f") : object "default"/"kube-root-ca.crt" not registered
	I0501 04:17:00.166348    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: I0501 04:15:43.052812    1525 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c17e9f88f256f5527a6565eb2da75f63" path="/var/lib/kubelet/pods/c17e9f88f256f5527a6565eb2da75f63/volumes"
	I0501 04:17:00.166348    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: I0501 04:15:43.054400    1525 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc7b6f2a7c826774b66af910f598e965" path="/var/lib/kubelet/pods/fc7b6f2a7c826774b66af910f598e965/volumes"
	I0501 04:17:00.166467    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: I0501 04:15:43.170146    1525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-289800" podStartSLOduration=1.170112215 podStartE2EDuration="1.170112215s" podCreationTimestamp="2024-05-01 04:15:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-01 04:15:43.140058816 +0000 UTC m=+6.426557536" watchObservedRunningTime="2024-05-01 04:15:43.170112215 +0000 UTC m=+6.456610935"
	I0501 04:17:00.166467    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: I0501 04:15:43.170304    1525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-289800" podStartSLOduration=1.170298327 podStartE2EDuration="1.170298327s" podCreationTimestamp="2024-05-01 04:15:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-01 04:15:43.16893474 +0000 UTC m=+6.455433460" watchObservedRunningTime="2024-05-01 04:15:43.170298327 +0000 UTC m=+6.456797147"
	I0501 04:17:00.166467    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: E0501 04:15:43.444132    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:17:00.166574    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: E0501 04:15:43.444229    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume podName:e3a349e9-97d8-4bba-8eac-deff1948600a nodeName:}" failed. No retries permitted until 2024-05-01 04:15:44.444209637 +0000 UTC m=+7.730708457 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume") pod "coredns-7db6d8ff4d-8w9hq" (UID: "e3a349e9-97d8-4bba-8eac-deff1948600a") : object "kube-system"/"coredns" not registered
	I0501 04:17:00.166574    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: E0501 04:15:43.444591    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:17:00.166574    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: E0501 04:15:43.444633    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume podName:0b91b14d-bed3-4889-b193-db53daccd395 nodeName:}" failed. No retries permitted until 2024-05-01 04:15:44.444622763 +0000 UTC m=+7.731121483 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume") pod "coredns-7db6d8ff4d-x9zrw" (UID: "0b91b14d-bed3-4889-b193-db53daccd395") : object "kube-system"/"coredns" not registered
	I0501 04:17:00.166726    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: E0501 04:15:43.544921    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0501 04:17:00.166726    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: E0501 04:15:43.545047    1525 projected.go:200] Error preparing data for projected volume kube-api-access-4r64v for pod default/busybox-fc5497c4f-cc6mk: object "default"/"kube-root-ca.crt" not registered
	I0501 04:17:00.166812    4352 command_runner.go:130] > May 01 04:15:43 multinode-289800 kubelet[1525]: E0501 04:15:43.545141    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v podName:7f61e6ee-cf9a-4903-ba51-2a3b6804717f nodeName:}" failed. No retries permitted until 2024-05-01 04:15:44.545110913 +0000 UTC m=+7.831609633 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-4r64v" (UniqueName: "kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v") pod "busybox-fc5497c4f-cc6mk" (UID: "7f61e6ee-cf9a-4903-ba51-2a3b6804717f") : object "default"/"kube-root-ca.crt" not registered
	I0501 04:17:00.166851    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: I0501 04:15:44.039213    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9055d30512df38a5bce19ed5afcfdc450a7bd87a1eb169342c8bc7a42e81666f"
	I0501 04:17:00.166851    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: I0501 04:15:44.378804    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65bff4b6a8ae020fee0da9e1a818c4bac4d9a43a831eb7b5550b254c1f181ec7"
	I0501 04:17:00.166851    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: E0501 04:15:44.401946    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:17:00.166953    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: I0501 04:15:44.402229    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f79e484da66a15667f79326d8bae0a570ba551fd2e02926fd663a292f6b15752"
	I0501 04:17:00.166953    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: I0501 04:15:44.402476    1525 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-289800" podUID="96a8cf0b-45bc-4636-9264-a0da579b5fa8"
	I0501 04:17:00.166953    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: I0501 04:15:44.403391    1525 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-289800" podUID="a1b99f2b-8aed-4037-956a-13bde4551a72"
	I0501 04:17:00.166953    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: E0501 04:15:44.454688    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:17:00.167068    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: E0501 04:15:44.454983    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume podName:0b91b14d-bed3-4889-b193-db53daccd395 nodeName:}" failed. No retries permitted until 2024-05-01 04:15:46.454902809 +0000 UTC m=+9.741401629 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume") pod "coredns-7db6d8ff4d-x9zrw" (UID: "0b91b14d-bed3-4889-b193-db53daccd395") : object "kube-system"/"coredns" not registered
	I0501 04:17:00.167068    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: E0501 04:15:44.455515    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:17:00.167068    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: E0501 04:15:44.455560    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume podName:e3a349e9-97d8-4bba-8eac-deff1948600a nodeName:}" failed. No retries permitted until 2024-05-01 04:15:46.45554895 +0000 UTC m=+9.742047670 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume") pod "coredns-7db6d8ff4d-8w9hq" (UID: "e3a349e9-97d8-4bba-8eac-deff1948600a") : object "kube-system"/"coredns" not registered
	I0501 04:17:00.167194    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: E0501 04:15:44.555732    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0501 04:17:00.167194    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: E0501 04:15:44.555836    1525 projected.go:200] Error preparing data for projected volume kube-api-access-4r64v for pod default/busybox-fc5497c4f-cc6mk: object "default"/"kube-root-ca.crt" not registered
	I0501 04:17:00.167194    4352 command_runner.go:130] > May 01 04:15:44 multinode-289800 kubelet[1525]: E0501 04:15:44.555920    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v podName:7f61e6ee-cf9a-4903-ba51-2a3b6804717f nodeName:}" failed. No retries permitted until 2024-05-01 04:15:46.55587479 +0000 UTC m=+9.842373510 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-4r64v" (UniqueName: "kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v") pod "busybox-fc5497c4f-cc6mk" (UID: "7f61e6ee-cf9a-4903-ba51-2a3b6804717f") : object "default"/"kube-root-ca.crt" not registered
	I0501 04:17:00.167326    4352 command_runner.go:130] > May 01 04:15:45 multinode-289800 kubelet[1525]: E0501 04:15:45.028227    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:17:00.167326    4352 command_runner.go:130] > May 01 04:15:45 multinode-289800 kubelet[1525]: E0501 04:15:45.028491    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:17:00.167392    4352 command_runner.go:130] > May 01 04:15:46 multinode-289800 kubelet[1525]: E0501 04:15:46.023829    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:17:00.167392    4352 command_runner.go:130] > May 01 04:15:46 multinode-289800 kubelet[1525]: E0501 04:15:46.486637    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:17:00.167432    4352 command_runner.go:130] > May 01 04:15:46 multinode-289800 kubelet[1525]: E0501 04:15:46.486963    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume podName:e3a349e9-97d8-4bba-8eac-deff1948600a nodeName:}" failed. No retries permitted until 2024-05-01 04:15:50.486942526 +0000 UTC m=+13.773441346 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume") pod "coredns-7db6d8ff4d-8w9hq" (UID: "e3a349e9-97d8-4bba-8eac-deff1948600a") : object "kube-system"/"coredns" not registered
	I0501 04:17:00.167432    4352 command_runner.go:130] > May 01 04:15:46 multinode-289800 kubelet[1525]: E0501 04:15:46.488686    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:17:00.167572    4352 command_runner.go:130] > May 01 04:15:46 multinode-289800 kubelet[1525]: E0501 04:15:46.489077    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume podName:0b91b14d-bed3-4889-b193-db53daccd395 nodeName:}" failed. No retries permitted until 2024-05-01 04:15:50.488847647 +0000 UTC m=+13.775346467 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume") pod "coredns-7db6d8ff4d-x9zrw" (UID: "0b91b14d-bed3-4889-b193-db53daccd395") : object "kube-system"/"coredns" not registered
	I0501 04:17:00.167572    4352 command_runner.go:130] > May 01 04:15:46 multinode-289800 kubelet[1525]: E0501 04:15:46.587833    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0501 04:17:00.167572    4352 command_runner.go:130] > May 01 04:15:46 multinode-289800 kubelet[1525]: E0501 04:15:46.587977    1525 projected.go:200] Error preparing data for projected volume kube-api-access-4r64v for pod default/busybox-fc5497c4f-cc6mk: object "default"/"kube-root-ca.crt" not registered
	I0501 04:17:00.167653    4352 command_runner.go:130] > May 01 04:15:46 multinode-289800 kubelet[1525]: E0501 04:15:46.588185    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v podName:7f61e6ee-cf9a-4903-ba51-2a3b6804717f nodeName:}" failed. No retries permitted until 2024-05-01 04:15:50.588160623 +0000 UTC m=+13.874659443 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-4r64v" (UniqueName: "kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v") pod "busybox-fc5497c4f-cc6mk" (UID: "7f61e6ee-cf9a-4903-ba51-2a3b6804717f") : object "default"/"kube-root-ca.crt" not registered
	I0501 04:17:00.167716    4352 command_runner.go:130] > May 01 04:15:47 multinode-289800 kubelet[1525]: E0501 04:15:47.027084    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:17:00.167716    4352 command_runner.go:130] > May 01 04:15:47 multinode-289800 kubelet[1525]: E0501 04:15:47.028397    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:17:00.167814    4352 command_runner.go:130] > May 01 04:15:48 multinode-289800 kubelet[1525]: E0501 04:15:48.022969    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:17:00.167814    4352 command_runner.go:130] > May 01 04:15:49 multinode-289800 kubelet[1525]: E0501 04:15:49.024347    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:17:00.167814    4352 command_runner.go:130] > May 01 04:15:49 multinode-289800 kubelet[1525]: E0501 04:15:49.025248    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:17:00.167814    4352 command_runner.go:130] > May 01 04:15:50 multinode-289800 kubelet[1525]: E0501 04:15:50.024175    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:17:00.167950    4352 command_runner.go:130] > May 01 04:15:50 multinode-289800 kubelet[1525]: E0501 04:15:50.523387    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:17:00.167950    4352 command_runner.go:130] > May 01 04:15:50 multinode-289800 kubelet[1525]: E0501 04:15:50.523508    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume podName:0b91b14d-bed3-4889-b193-db53daccd395 nodeName:}" failed. No retries permitted until 2024-05-01 04:15:58.523488538 +0000 UTC m=+21.809987358 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume") pod "coredns-7db6d8ff4d-x9zrw" (UID: "0b91b14d-bed3-4889-b193-db53daccd395") : object "kube-system"/"coredns" not registered
	I0501 04:17:00.167950    4352 command_runner.go:130] > May 01 04:15:50 multinode-289800 kubelet[1525]: E0501 04:15:50.524104    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:17:00.168079    4352 command_runner.go:130] > May 01 04:15:50 multinode-289800 kubelet[1525]: E0501 04:15:50.524150    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume podName:e3a349e9-97d8-4bba-8eac-deff1948600a nodeName:}" failed. No retries permitted until 2024-05-01 04:15:58.524137716 +0000 UTC m=+21.810636436 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume") pod "coredns-7db6d8ff4d-8w9hq" (UID: "e3a349e9-97d8-4bba-8eac-deff1948600a") : object "kube-system"/"coredns" not registered
	I0501 04:17:00.168079    4352 command_runner.go:130] > May 01 04:15:50 multinode-289800 kubelet[1525]: E0501 04:15:50.624897    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0501 04:17:00.168079    4352 command_runner.go:130] > May 01 04:15:50 multinode-289800 kubelet[1525]: E0501 04:15:50.625357    1525 projected.go:200] Error preparing data for projected volume kube-api-access-4r64v for pod default/busybox-fc5497c4f-cc6mk: object "default"/"kube-root-ca.crt" not registered
	I0501 04:17:00.168171    4352 command_runner.go:130] > May 01 04:15:50 multinode-289800 kubelet[1525]: E0501 04:15:50.625742    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v podName:7f61e6ee-cf9a-4903-ba51-2a3b6804717f nodeName:}" failed. No retries permitted until 2024-05-01 04:15:58.625719971 +0000 UTC m=+21.912218691 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-4r64v" (UniqueName: "kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v") pod "busybox-fc5497c4f-cc6mk" (UID: "7f61e6ee-cf9a-4903-ba51-2a3b6804717f") : object "default"/"kube-root-ca.crt" not registered
	I0501 04:17:00.168215    4352 command_runner.go:130] > May 01 04:15:51 multinode-289800 kubelet[1525]: E0501 04:15:51.024464    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:17:00.168215    4352 command_runner.go:130] > May 01 04:15:51 multinode-289800 kubelet[1525]: E0501 04:15:51.024959    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:17:00.168215    4352 command_runner.go:130] > May 01 04:15:52 multinode-289800 kubelet[1525]: E0501 04:15:52.024016    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:17:00.168306    4352 command_runner.go:130] > May 01 04:15:53 multinode-289800 kubelet[1525]: E0501 04:15:53.023669    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:17:00.168306    4352 command_runner.go:130] > May 01 04:15:53 multinode-289800 kubelet[1525]: E0501 04:15:53.024381    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:17:00.168306    4352 command_runner.go:130] > May 01 04:15:54 multinode-289800 kubelet[1525]: E0501 04:15:54.023529    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:17:00.168433    4352 command_runner.go:130] > May 01 04:15:55 multinode-289800 kubelet[1525]: E0501 04:15:55.023399    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:17:00.168433    4352 command_runner.go:130] > May 01 04:15:55 multinode-289800 kubelet[1525]: E0501 04:15:55.024039    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:17:00.168433    4352 command_runner.go:130] > May 01 04:15:56 multinode-289800 kubelet[1525]: E0501 04:15:56.023961    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:17:00.168545    4352 command_runner.go:130] > May 01 04:15:57 multinode-289800 kubelet[1525]: E0501 04:15:57.024583    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:17:00.168545    4352 command_runner.go:130] > May 01 04:15:57 multinode-289800 kubelet[1525]: E0501 04:15:57.025562    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:17:00.168545    4352 command_runner.go:130] > May 01 04:15:58 multinode-289800 kubelet[1525]: E0501 04:15:58.024494    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:17:00.168545    4352 command_runner.go:130] > May 01 04:15:58 multinode-289800 kubelet[1525]: E0501 04:15:58.606520    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:17:00.168670    4352 command_runner.go:130] > May 01 04:15:58 multinode-289800 kubelet[1525]: E0501 04:15:58.606584    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume podName:e3a349e9-97d8-4bba-8eac-deff1948600a nodeName:}" failed. No retries permitted until 2024-05-01 04:16:14.606569125 +0000 UTC m=+37.893067945 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume") pod "coredns-7db6d8ff4d-8w9hq" (UID: "e3a349e9-97d8-4bba-8eac-deff1948600a") : object "kube-system"/"coredns" not registered
	I0501 04:17:00.168670    4352 command_runner.go:130] > May 01 04:15:58 multinode-289800 kubelet[1525]: E0501 04:15:58.607052    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:17:00.168883    4352 command_runner.go:130] > May 01 04:15:58 multinode-289800 kubelet[1525]: E0501 04:15:58.607095    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume podName:0b91b14d-bed3-4889-b193-db53daccd395 nodeName:}" failed. No retries permitted until 2024-05-01 04:16:14.607084827 +0000 UTC m=+37.893583547 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume") pod "coredns-7db6d8ff4d-x9zrw" (UID: "0b91b14d-bed3-4889-b193-db53daccd395") : object "kube-system"/"coredns" not registered
	I0501 04:17:00.168925    4352 command_runner.go:130] > May 01 04:15:58 multinode-289800 kubelet[1525]: E0501 04:15:58.707959    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0501 04:17:00.168925    4352 command_runner.go:130] > May 01 04:15:58 multinode-289800 kubelet[1525]: E0501 04:15:58.708171    1525 projected.go:200] Error preparing data for projected volume kube-api-access-4r64v for pod default/busybox-fc5497c4f-cc6mk: object "default"/"kube-root-ca.crt" not registered
	I0501 04:17:00.169016    4352 command_runner.go:130] > May 01 04:15:58 multinode-289800 kubelet[1525]: E0501 04:15:58.708240    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v podName:7f61e6ee-cf9a-4903-ba51-2a3b6804717f nodeName:}" failed. No retries permitted until 2024-05-01 04:16:14.708221599 +0000 UTC m=+37.994720419 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-4r64v" (UniqueName: "kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v") pod "busybox-fc5497c4f-cc6mk" (UID: "7f61e6ee-cf9a-4903-ba51-2a3b6804717f") : object "default"/"kube-root-ca.crt" not registered
	I0501 04:17:00.169074    4352 command_runner.go:130] > May 01 04:15:59 multinode-289800 kubelet[1525]: E0501 04:15:59.024158    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:17:00.169074    4352 command_runner.go:130] > May 01 04:15:59 multinode-289800 kubelet[1525]: E0501 04:15:59.025055    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:17:00.169131    4352 command_runner.go:130] > May 01 04:16:00 multinode-289800 kubelet[1525]: E0501 04:16:00.023216    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:17:00.169189    4352 command_runner.go:130] > May 01 04:16:01 multinode-289800 kubelet[1525]: E0501 04:16:01.024905    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:17:00.169229    4352 command_runner.go:130] > May 01 04:16:01 multinode-289800 kubelet[1525]: E0501 04:16:01.025585    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:17:00.169229    4352 command_runner.go:130] > May 01 04:16:02 multinode-289800 kubelet[1525]: E0501 04:16:02.024143    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:17:00.169229    4352 command_runner.go:130] > May 01 04:16:03 multinode-289800 kubelet[1525]: E0501 04:16:03.023409    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:17:00.169348    4352 command_runner.go:130] > May 01 04:16:03 multinode-289800 kubelet[1525]: E0501 04:16:03.024062    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:17:00.169402    4352 command_runner.go:130] > May 01 04:16:04 multinode-289800 kubelet[1525]: E0501 04:16:04.023182    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:17:00.169441    4352 command_runner.go:130] > May 01 04:16:05 multinode-289800 kubelet[1525]: E0501 04:16:05.028055    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:17:00.169441    4352 command_runner.go:130] > May 01 04:16:05 multinode-289800 kubelet[1525]: E0501 04:16:05.029254    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:17:00.169531    4352 command_runner.go:130] > May 01 04:16:06 multinode-289800 kubelet[1525]: E0501 04:16:06.024522    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:17:00.169587    4352 command_runner.go:130] > May 01 04:16:07 multinode-289800 kubelet[1525]: E0501 04:16:07.024384    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:17:00.169587    4352 command_runner.go:130] > May 01 04:16:07 multinode-289800 kubelet[1525]: E0501 04:16:07.025431    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:17:00.169652    4352 command_runner.go:130] > May 01 04:16:08 multinode-289800 kubelet[1525]: E0501 04:16:08.024168    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:17:00.169708    4352 command_runner.go:130] > May 01 04:16:09 multinode-289800 kubelet[1525]: E0501 04:16:09.024117    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:17:00.169750    4352 command_runner.go:130] > May 01 04:16:09 multinode-289800 kubelet[1525]: E0501 04:16:09.025560    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:17:00.169750    4352 command_runner.go:130] > May 01 04:16:10 multinode-289800 kubelet[1525]: E0501 04:16:10.023881    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:17:00.169750    4352 command_runner.go:130] > May 01 04:16:11 multinode-289800 kubelet[1525]: E0501 04:16:11.023619    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:17:00.169843    4352 command_runner.go:130] > May 01 04:16:11 multinode-289800 kubelet[1525]: E0501 04:16:11.024277    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:17:00.169843    4352 command_runner.go:130] > May 01 04:16:12 multinode-289800 kubelet[1525]: E0501 04:16:12.024236    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:17:00.169919    4352 command_runner.go:130] > May 01 04:16:13 multinode-289800 kubelet[1525]: E0501 04:16:13.023153    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:17:00.169964    4352 command_runner.go:130] > May 01 04:16:13 multinode-289800 kubelet[1525]: E0501 04:16:13.023926    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:17:00.170025    4352 command_runner.go:130] > May 01 04:16:14 multinode-289800 kubelet[1525]: E0501 04:16:14.023335    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:17:00.170025    4352 command_runner.go:130] > May 01 04:16:14 multinode-289800 kubelet[1525]: E0501 04:16:14.657138    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:17:00.170089    4352 command_runner.go:130] > May 01 04:16:14 multinode-289800 kubelet[1525]: E0501 04:16:14.657461    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume podName:e3a349e9-97d8-4bba-8eac-deff1948600a nodeName:}" failed. No retries permitted until 2024-05-01 04:16:46.657440103 +0000 UTC m=+69.943938823 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e3a349e9-97d8-4bba-8eac-deff1948600a-config-volume") pod "coredns-7db6d8ff4d-8w9hq" (UID: "e3a349e9-97d8-4bba-8eac-deff1948600a") : object "kube-system"/"coredns" not registered
	I0501 04:17:00.170148    4352 command_runner.go:130] > May 01 04:16:14 multinode-289800 kubelet[1525]: E0501 04:16:14.657218    1525 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0501 04:17:00.170148    4352 command_runner.go:130] > May 01 04:16:14 multinode-289800 kubelet[1525]: E0501 04:16:14.657858    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume podName:0b91b14d-bed3-4889-b193-db53daccd395 nodeName:}" failed. No retries permitted until 2024-05-01 04:16:46.65783162 +0000 UTC m=+69.944330440 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0b91b14d-bed3-4889-b193-db53daccd395-config-volume") pod "coredns-7db6d8ff4d-x9zrw" (UID: "0b91b14d-bed3-4889-b193-db53daccd395") : object "kube-system"/"coredns" not registered
	I0501 04:17:00.170210    4352 command_runner.go:130] > May 01 04:16:14 multinode-289800 kubelet[1525]: E0501 04:16:14.758303    1525 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0501 04:17:00.170210    4352 command_runner.go:130] > May 01 04:16:14 multinode-289800 kubelet[1525]: E0501 04:16:14.758421    1525 projected.go:200] Error preparing data for projected volume kube-api-access-4r64v for pod default/busybox-fc5497c4f-cc6mk: object "default"/"kube-root-ca.crt" not registered
	I0501 04:17:00.170275    4352 command_runner.go:130] > May 01 04:16:14 multinode-289800 kubelet[1525]: E0501 04:16:14.758487    1525 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v podName:7f61e6ee-cf9a-4903-ba51-2a3b6804717f nodeName:}" failed. No retries permitted until 2024-05-01 04:16:46.758469083 +0000 UTC m=+70.044967903 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-4r64v" (UniqueName: "kubernetes.io/projected/7f61e6ee-cf9a-4903-ba51-2a3b6804717f-kube-api-access-4r64v") pod "busybox-fc5497c4f-cc6mk" (UID: "7f61e6ee-cf9a-4903-ba51-2a3b6804717f") : object "default"/"kube-root-ca.crt" not registered
	I0501 04:17:00.170337    4352 command_runner.go:130] > May 01 04:16:15 multinode-289800 kubelet[1525]: E0501 04:16:15.023369    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-8w9hq" podUID="e3a349e9-97d8-4bba-8eac-deff1948600a"
	I0501 04:17:00.170398    4352 command_runner.go:130] > May 01 04:16:15 multinode-289800 kubelet[1525]: E0501 04:16:15.024797    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-x9zrw" podUID="0b91b14d-bed3-4889-b193-db53daccd395"
	I0501 04:17:00.170398    4352 command_runner.go:130] > May 01 04:16:15 multinode-289800 kubelet[1525]: I0501 04:16:15.886834    1525 scope.go:117] "RemoveContainer" containerID="ee2238f98e350e8d80528b60fc5b614ce6048d8b34af2034a9947e26d8e6beab"
	I0501 04:17:00.170460    4352 command_runner.go:130] > May 01 04:16:15 multinode-289800 kubelet[1525]: I0501 04:16:15.887225    1525 scope.go:117] "RemoveContainer" containerID="01deddefba52af094ad6ca083f5c2bee336ace05959dec5d34b15a9ba6cc2539"
	I0501 04:17:00.170572    4352 command_runner.go:130] > May 01 04:16:15 multinode-289800 kubelet[1525]: E0501 04:16:15.887510    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b8d2a827-d9a6-419a-a076-c7695a16a2b5)\"" pod="kube-system/storage-provisioner" podUID="b8d2a827-d9a6-419a-a076-c7695a16a2b5"
	I0501 04:17:00.170572    4352 command_runner.go:130] > May 01 04:16:16 multinode-289800 kubelet[1525]: E0501 04:16:16.024360    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-cc6mk" podUID="7f61e6ee-cf9a-4903-ba51-2a3b6804717f"
	I0501 04:17:00.170572    4352 command_runner.go:130] > May 01 04:16:16 multinode-289800 kubelet[1525]: I0501 04:16:16.618138    1525 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
	I0501 04:17:00.170572    4352 command_runner.go:130] > May 01 04:16:29 multinode-289800 kubelet[1525]: I0501 04:16:29.024408    1525 scope.go:117] "RemoveContainer" containerID="01deddefba52af094ad6ca083f5c2bee336ace05959dec5d34b15a9ba6cc2539"
	I0501 04:17:00.170572    4352 command_runner.go:130] > May 01 04:16:37 multinode-289800 kubelet[1525]: I0501 04:16:37.040204    1525 scope.go:117] "RemoveContainer" containerID="3244d1ee5ab428faf09a962609f2c940c36a998727a01b873d382eb5ee600ca3"
	I0501 04:17:00.170715    4352 command_runner.go:130] > May 01 04:16:37 multinode-289800 kubelet[1525]: E0501 04:16:37.057362    1525 iptables.go:577] "Could not set up iptables canary" err=<
	I0501 04:17:00.170715    4352 command_runner.go:130] > May 01 04:16:37 multinode-289800 kubelet[1525]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0501 04:17:00.170715    4352 command_runner.go:130] > May 01 04:16:37 multinode-289800 kubelet[1525]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0501 04:17:00.170780    4352 command_runner.go:130] > May 01 04:16:37 multinode-289800 kubelet[1525]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0501 04:17:00.170780    4352 command_runner.go:130] > May 01 04:16:37 multinode-289800 kubelet[1525]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0501 04:17:00.170780    4352 command_runner.go:130] > May 01 04:16:37 multinode-289800 kubelet[1525]: I0501 04:16:37.089866    1525 scope.go:117] "RemoveContainer" containerID="bbbe9bf276852c1e75b7b472a87e95dcf9a0871f6273a4c312d445eb91dfe06d"
	I0501 04:17:00.170848    4352 command_runner.go:130] > May 01 04:16:37 multinode-289800 kubelet[1525]: E0501 04:16:37.204127    1525 kuberuntime_manager.go:1450] "PodSandboxStatus of sandbox for pod" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 976a9ff433ccbe7ec8d65d4f2797a7885430504d883ab65524674ad5ab989737" podSandboxID="976a9ff433ccbe7ec8d65d4f2797a7885430504d883ab65524674ad5ab989737" pod="kube-system/kube-apiserver-multinode-289800"
	I0501 04:17:00.170848    4352 command_runner.go:130] > May 01 04:16:37 multinode-289800 kubelet[1525]: E0501 04:16:37.204257    1525 generic.go:453] "PLEG: Write status" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 976a9ff433ccbe7ec8d65d4f2797a7885430504d883ab65524674ad5ab989737" pod="kube-system/kube-apiserver-multinode-289800"
	I0501 04:17:00.170913    4352 command_runner.go:130] > May 01 04:16:47 multinode-289800 kubelet[1525]: I0501 04:16:47.967198    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c1e1e1d13f303dcd2ce93f0a883ff4415e684c864a3974a393b2aaba3328348"
	I0501 04:17:00.170913    4352 command_runner.go:130] > May 01 04:16:48 multinode-289800 kubelet[1525]: I0501 04:16:48.001452    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba9a40d190b009b916e22db66996ed829a6cc973db25f55dae89d747629a546b"
	I0501 04:17:00.226252    4352 logs.go:123] Gathering logs for kube-apiserver [18cd30f3ad28] ...
	I0501 04:17:00.226252    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cd30f3ad28"
	I0501 04:17:00.270142    4352 command_runner.go:130] ! I0501 04:15:39.445795       1 options.go:221] external host was not specified, using 172.28.209.199
	I0501 04:17:00.271132    4352 command_runner.go:130] ! I0501 04:15:39.453956       1 server.go:148] Version: v1.30.0
	I0501 04:17:00.271132    4352 command_runner.go:130] ! I0501 04:15:39.454357       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:17:00.271132    4352 command_runner.go:130] ! I0501 04:15:40.258184       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0501 04:17:00.271132    4352 command_runner.go:130] ! I0501 04:15:40.258591       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0501 04:17:00.271261    4352 command_runner.go:130] ! I0501 04:15:40.260085       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0501 04:17:00.271337    4352 command_runner.go:130] ! I0501 04:15:40.260405       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0501 04:17:00.271337    4352 command_runner.go:130] ! I0501 04:15:40.261810       1 instance.go:299] Using reconciler: lease
	I0501 04:17:00.271337    4352 command_runner.go:130] ! I0501 04:15:40.801281       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0501 04:17:00.271337    4352 command_runner.go:130] ! W0501 04:15:40.801386       1 genericapiserver.go:733] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0501 04:17:00.271337    4352 command_runner.go:130] ! I0501 04:15:41.090803       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0501 04:17:00.271337    4352 command_runner.go:130] ! I0501 04:15:41.091252       1 instance.go:696] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0501 04:17:00.271337    4352 command_runner.go:130] ! I0501 04:15:41.359171       1 instance.go:696] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0501 04:17:00.271581    4352 command_runner.go:130] ! I0501 04:15:41.532740       1 instance.go:696] API group "resource.k8s.io" is not enabled, skipping.
	I0501 04:17:00.271581    4352 command_runner.go:130] ! I0501 04:15:41.570911       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0501 04:17:00.271581    4352 command_runner.go:130] ! W0501 04:15:41.571018       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0501 04:17:00.271581    4352 command_runner.go:130] ! W0501 04:15:41.571046       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0501 04:17:00.271581    4352 command_runner.go:130] ! I0501 04:15:41.571875       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0501 04:17:00.271581    4352 command_runner.go:130] ! W0501 04:15:41.572053       1 genericapiserver.go:733] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0501 04:17:00.271581    4352 command_runner.go:130] ! I0501 04:15:41.573317       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0501 04:17:00.271581    4352 command_runner.go:130] ! I0501 04:15:41.574692       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0501 04:17:00.271581    4352 command_runner.go:130] ! W0501 04:15:41.574726       1 genericapiserver.go:733] Skipping API autoscaling/v2beta1 because it has no resources.
	I0501 04:17:00.271581    4352 command_runner.go:130] ! W0501 04:15:41.574734       1 genericapiserver.go:733] Skipping API autoscaling/v2beta2 because it has no resources.
	I0501 04:17:00.271581    4352 command_runner.go:130] ! I0501 04:15:41.576633       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0501 04:17:00.271789    4352 command_runner.go:130] ! W0501 04:15:41.576726       1 genericapiserver.go:733] Skipping API batch/v1beta1 because it has no resources.
	I0501 04:17:00.271789    4352 command_runner.go:130] ! I0501 04:15:41.577645       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0501 04:17:00.271789    4352 command_runner.go:130] ! W0501 04:15:41.577739       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0501 04:17:00.271789    4352 command_runner.go:130] ! W0501 04:15:41.577748       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0501 04:17:00.271868    4352 command_runner.go:130] ! I0501 04:15:41.578543       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0501 04:17:00.271868    4352 command_runner.go:130] ! W0501 04:15:41.578618       1 genericapiserver.go:733] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0501 04:17:00.271868    4352 command_runner.go:130] ! W0501 04:15:41.578731       1 genericapiserver.go:733] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0501 04:17:00.271942    4352 command_runner.go:130] ! I0501 04:15:41.579623       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0501 04:17:00.271942    4352 command_runner.go:130] ! I0501 04:15:41.582482       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0501 04:17:00.271942    4352 command_runner.go:130] ! W0501 04:15:41.582572       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0501 04:17:00.271942    4352 command_runner.go:130] ! W0501 04:15:41.582581       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0501 04:17:00.272006    4352 command_runner.go:130] ! I0501 04:15:41.583284       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0501 04:17:00.272034    4352 command_runner.go:130] ! W0501 04:15:41.583417       1 genericapiserver.go:733] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0501 04:17:00.272034    4352 command_runner.go:130] ! W0501 04:15:41.583428       1 genericapiserver.go:733] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0501 04:17:00.272034    4352 command_runner.go:130] ! I0501 04:15:41.585084       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0501 04:17:00.272034    4352 command_runner.go:130] ! W0501 04:15:41.585203       1 genericapiserver.go:733] Skipping API policy/v1beta1 because it has no resources.
	I0501 04:17:00.272097    4352 command_runner.go:130] ! I0501 04:15:41.588956       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0501 04:17:00.272123    4352 command_runner.go:130] ! W0501 04:15:41.589055       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0501 04:17:00.272123    4352 command_runner.go:130] ! W0501 04:15:41.589067       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0501 04:17:00.272153    4352 command_runner.go:130] ! I0501 04:15:41.589951       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0501 04:17:00.272202    4352 command_runner.go:130] ! W0501 04:15:41.590056       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0501 04:17:00.272232    4352 command_runner.go:130] ! W0501 04:15:41.590066       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0501 04:17:00.272232    4352 command_runner.go:130] ! I0501 04:15:41.593577       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0501 04:17:00.272232    4352 command_runner.go:130] ! W0501 04:15:41.593674       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0501 04:17:00.272232    4352 command_runner.go:130] ! W0501 04:15:41.593684       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0501 04:17:00.272232    4352 command_runner.go:130] ! I0501 04:15:41.595694       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0501 04:17:00.272314    4352 command_runner.go:130] ! I0501 04:15:41.597680       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0501 04:17:00.272334    4352 command_runner.go:130] ! W0501 04:15:41.597864       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0501 04:17:00.272334    4352 command_runner.go:130] ! W0501 04:15:41.597875       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0501 04:17:00.272397    4352 command_runner.go:130] ! I0501 04:15:41.603955       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0501 04:17:00.272425    4352 command_runner.go:130] ! W0501 04:15:41.604059       1 genericapiserver.go:733] Skipping API apps/v1beta2 because it has no resources.
	I0501 04:17:00.272456    4352 command_runner.go:130] ! W0501 04:15:41.604069       1 genericapiserver.go:733] Skipping API apps/v1beta1 because it has no resources.
	I0501 04:17:00.272456    4352 command_runner.go:130] ! I0501 04:15:41.607445       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0501 04:17:00.272456    4352 command_runner.go:130] ! W0501 04:15:41.607533       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0501 04:17:00.272456    4352 command_runner.go:130] ! W0501 04:15:41.607543       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0501 04:17:00.272456    4352 command_runner.go:130] ! I0501 04:15:41.608797       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0501 04:17:00.272456    4352 command_runner.go:130] ! W0501 04:15:41.608817       1 genericapiserver.go:733] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0501 04:17:00.272456    4352 command_runner.go:130] ! I0501 04:15:41.625599       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0501 04:17:00.272456    4352 command_runner.go:130] ! W0501 04:15:41.625618       1 genericapiserver.go:733] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0501 04:17:00.272456    4352 command_runner.go:130] ! I0501 04:15:42.332139       1 secure_serving.go:213] Serving securely on [::]:8443
	I0501 04:17:00.272456    4352 command_runner.go:130] ! I0501 04:15:42.332337       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0501 04:17:00.272456    4352 command_runner.go:130] ! I0501 04:15:42.332595       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0501 04:17:00.272456    4352 command_runner.go:130] ! I0501 04:15:42.333006       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0501 04:17:00.272456    4352 command_runner.go:130] ! I0501 04:15:42.333577       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0501 04:17:00.272456    4352 command_runner.go:130] ! I0501 04:15:42.333909       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0501 04:17:00.272456    4352 command_runner.go:130] ! I0501 04:15:42.334990       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0501 04:17:00.272456    4352 command_runner.go:130] ! I0501 04:15:42.335027       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0501 04:17:00.272456    4352 command_runner.go:130] ! I0501 04:15:42.335107       1 aggregator.go:163] waiting for initial CRD sync...
	I0501 04:17:00.272456    4352 command_runner.go:130] ! I0501 04:15:42.335378       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0501 04:17:00.272456    4352 command_runner.go:130] ! I0501 04:15:42.335424       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0501 04:17:00.272456    4352 command_runner.go:130] ! I0501 04:15:42.335517       1 available_controller.go:423] Starting AvailableConditionController
	I0501 04:17:00.272456    4352 command_runner.go:130] ! I0501 04:15:42.335533       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0501 04:17:00.272456    4352 command_runner.go:130] ! I0501 04:15:42.335556       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0501 04:17:00.272456    4352 command_runner.go:130] ! I0501 04:15:42.337835       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0501 04:17:00.272456    4352 command_runner.go:130] ! I0501 04:15:42.338196       1 controller.go:116] Starting legacy_token_tracking_controller
	I0501 04:17:00.272456    4352 command_runner.go:130] ! I0501 04:15:42.338360       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0501 04:17:00.272456    4352 command_runner.go:130] ! I0501 04:15:42.338519       1 controller.go:78] Starting OpenAPI AggregationController
	I0501 04:17:00.272456    4352 command_runner.go:130] ! I0501 04:15:42.339167       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0501 04:17:00.272456    4352 command_runner.go:130] ! I0501 04:15:42.339360       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0501 04:17:00.272456    4352 command_runner.go:130] ! I0501 04:15:42.339853       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0501 04:17:00.272456    4352 command_runner.go:130] ! I0501 04:15:42.361139       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0501 04:17:00.272456    4352 command_runner.go:130] ! I0501 04:15:42.361155       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0501 04:17:00.272456    4352 command_runner.go:130] ! I0501 04:15:42.361192       1 controller.go:139] Starting OpenAPI controller
	I0501 04:17:00.273005    4352 command_runner.go:130] ! I0501 04:15:42.361219       1 controller.go:87] Starting OpenAPI V3 controller
	I0501 04:17:00.273005    4352 command_runner.go:130] ! I0501 04:15:42.361233       1 naming_controller.go:291] Starting NamingConditionController
	I0501 04:17:00.273005    4352 command_runner.go:130] ! I0501 04:15:42.361253       1 establishing_controller.go:76] Starting EstablishingController
	I0501 04:17:00.273005    4352 command_runner.go:130] ! I0501 04:15:42.361274       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0501 04:17:00.273005    4352 command_runner.go:130] ! I0501 04:15:42.361288       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0501 04:17:00.273005    4352 command_runner.go:130] ! I0501 04:15:42.361301       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0501 04:17:00.273005    4352 command_runner.go:130] ! I0501 04:15:42.395816       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0501 04:17:00.273005    4352 command_runner.go:130] ! I0501 04:15:42.396242       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0501 04:17:00.273005    4352 command_runner.go:130] ! I0501 04:15:42.496145       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0501 04:17:00.273132    4352 command_runner.go:130] ! I0501 04:15:42.510644       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0501 04:17:00.273132    4352 command_runner.go:130] ! I0501 04:15:42.510702       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0501 04:17:00.273132    4352 command_runner.go:130] ! I0501 04:15:42.510859       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0501 04:17:00.273187    4352 command_runner.go:130] ! I0501 04:15:42.518082       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0501 04:17:00.273187    4352 command_runner.go:130] ! I0501 04:15:42.518718       1 aggregator.go:165] initial CRD sync complete...
	I0501 04:17:00.273187    4352 command_runner.go:130] ! I0501 04:15:42.518822       1 autoregister_controller.go:141] Starting autoregister controller
	I0501 04:17:00.273229    4352 command_runner.go:130] ! I0501 04:15:42.518833       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0501 04:17:00.273229    4352 command_runner.go:130] ! I0501 04:15:42.518839       1 cache.go:39] Caches are synced for autoregister controller
	I0501 04:17:00.273229    4352 command_runner.go:130] ! I0501 04:15:42.535654       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0501 04:17:00.273229    4352 command_runner.go:130] ! I0501 04:15:42.538744       1 shared_informer.go:320] Caches are synced for configmaps
	I0501 04:17:00.273229    4352 command_runner.go:130] ! I0501 04:15:42.553249       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0501 04:17:00.273229    4352 command_runner.go:130] ! I0501 04:15:42.558886       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0501 04:17:00.273229    4352 command_runner.go:130] ! I0501 04:15:42.560982       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0501 04:17:00.273363    4352 command_runner.go:130] ! I0501 04:15:42.561020       1 policy_source.go:224] refreshing policies
	I0501 04:17:00.273363    4352 command_runner.go:130] ! I0501 04:15:42.641630       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0501 04:17:00.273363    4352 command_runner.go:130] ! I0501 04:15:43.354880       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0501 04:17:00.273363    4352 command_runner.go:130] ! W0501 04:15:43.981051       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.28.209.199]
	I0501 04:17:00.273363    4352 command_runner.go:130] ! I0501 04:15:43.982709       1 controller.go:615] quota admission added evaluator for: endpoints
	I0501 04:17:00.273363    4352 command_runner.go:130] ! I0501 04:15:44.022518       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0501 04:17:00.273363    4352 command_runner.go:130] ! I0501 04:15:45.344677       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0501 04:17:00.273478    4352 command_runner.go:130] ! I0501 04:15:45.642753       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0501 04:17:00.273478    4352 command_runner.go:130] ! I0501 04:15:45.672938       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0501 04:17:00.273478    4352 command_runner.go:130] ! I0501 04:15:45.801984       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0501 04:17:00.273478    4352 command_runner.go:130] ! I0501 04:15:45.823813       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0501 04:17:00.281481    4352 logs.go:123] Gathering logs for etcd [34892fdb6898] ...
	I0501 04:17:00.281481    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34892fdb6898"
	I0501 04:17:00.311627    4352 command_runner.go:130] ! {"level":"warn","ts":"2024-05-01T04:15:38.997417Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0501 04:17:00.312604    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:38.998475Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.28.209.199:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.28.209.199:2380","--initial-cluster=multinode-289800=https://172.28.209.199:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.28.209.199:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.28.209.199:2380","--name=multinode-289800","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0501 04:17:00.312604    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:38.998558Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0501 04:17:00.312688    4352 command_runner.go:130] ! {"level":"warn","ts":"2024-05-01T04:15:38.998588Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0501 04:17:00.312733    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:38.998599Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.28.209.199:2380"]}
	I0501 04:17:00.312733    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:38.998626Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0501 04:17:00.312833    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.006405Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.28.209.199:2379"]}
	I0501 04:17:00.312939    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.007658Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-289800","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.28.209.199:2380"],"listen-peer-urls":["https://172.28.209.199:2380"],"advertise-client-urls":["https://172.28.209.199:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.28.209.199:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0501 04:17:00.312939    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.030589Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"21.951987ms"}
	I0501 04:17:00.312939    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.081537Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0501 04:17:00.312939    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.104039Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"d720844a1e03b483","local-member-id":"fe483b81e7b7d166","commit-index":2020}
	I0501 04:17:00.313075    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.104878Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 switched to configuration voters=()"}
	I0501 04:17:00.313075    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.105251Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 became follower at term 2"}
	I0501 04:17:00.313075    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.105519Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft fe483b81e7b7d166 [peers: [], term: 2, commit: 2020, applied: 0, lastindex: 2020, lastterm: 2]"}
	I0501 04:17:00.313146    4352 command_runner.go:130] ! {"level":"warn","ts":"2024-05-01T04:15:39.121672Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0501 04:17:00.313146    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.127575Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1352}
	I0501 04:17:00.313146    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.132217Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1744}
	I0501 04:17:00.313146    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.144206Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0501 04:17:00.313243    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.15993Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"fe483b81e7b7d166","timeout":"7s"}
	I0501 04:17:00.313243    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.160468Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"fe483b81e7b7d166"}
	I0501 04:17:00.313243    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.160545Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"fe483b81e7b7d166","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	I0501 04:17:00.313243    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.16402Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	I0501 04:17:00.313243    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.165851Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0501 04:17:00.313243    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.166004Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0501 04:17:00.313243    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.166021Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0501 04:17:00.313243    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.169808Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 switched to configuration voters=(18322960513081266534)"}
	I0501 04:17:00.313243    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.1699Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d720844a1e03b483","local-member-id":"fe483b81e7b7d166","added-peer-id":"fe483b81e7b7d166","added-peer-peer-urls":["https://172.28.209.152:2380"]}
	I0501 04:17:00.313243    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.172064Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d720844a1e03b483","local-member-id":"fe483b81e7b7d166","cluster-version":"3.5"}
	I0501 04:17:00.313485    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.172365Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0501 04:17:00.313485    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.184058Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0501 04:17:00.313612    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.184564Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"fe483b81e7b7d166","initial-advertise-peer-urls":["https://172.28.209.199:2380"],"listen-peer-urls":["https://172.28.209.199:2380"],"advertise-client-urls":["https://172.28.209.199:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.28.209.199:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0501 04:17:00.313612    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.184741Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0501 04:17:00.313612    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.185843Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.28.209.199:2380"}
	I0501 04:17:00.313612    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:39.185973Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.28.209.199:2380"}
	I0501 04:17:00.313612    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.708419Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 is starting a new election at term 2"}
	I0501 04:17:00.313612    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.70848Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 became pre-candidate at term 2"}
	I0501 04:17:00.313612    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.708514Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 received MsgPreVoteResp from fe483b81e7b7d166 at term 2"}
	I0501 04:17:00.313612    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.70853Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 became candidate at term 3"}
	I0501 04:17:00.313612    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.708552Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 received MsgVoteResp from fe483b81e7b7d166 at term 3"}
	I0501 04:17:00.313612    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.708562Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 became leader at term 3"}
	I0501 04:17:00.313612    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.708576Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fe483b81e7b7d166 elected leader fe483b81e7b7d166 at term 3"}
	I0501 04:17:00.313612    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.716912Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"fe483b81e7b7d166","local-member-attributes":"{Name:multinode-289800 ClientURLs:[https://172.28.209.199:2379]}","request-path":"/0/members/fe483b81e7b7d166/attributes","cluster-id":"d720844a1e03b483","publish-timeout":"7s"}
	I0501 04:17:00.313612    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.717064Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0501 04:17:00.313612    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.724343Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0501 04:17:00.313612    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.729592Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.28.209.199:2379"}
	I0501 04:17:00.313612    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.730744Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0501 04:17:00.313612    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.731057Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0501 04:17:00.313612    4352 command_runner.go:130] ! {"level":"info","ts":"2024-05-01T04:15:40.732147Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0501 04:17:00.321344    4352 logs.go:123] Gathering logs for coredns [b8a9b405d76b] ...
	I0501 04:17:00.321344    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8a9b405d76b"
	I0501 04:17:00.350423    4352 command_runner.go:130] > .:53
	I0501 04:17:00.350423    4352 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 93f4fe825695a41d5751760d66496ca9593f0bf8b9ec4239a6fbc33c30b6f070cfe5a066c5977052ab0ea80abbafa6083758e385108cb741ae48dc4b780ff0f9
	I0501 04:17:00.350423    4352 command_runner.go:130] > CoreDNS-1.11.1
	I0501 04:17:00.350423    4352 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0501 04:17:00.350423    4352 command_runner.go:130] > [INFO] 127.0.0.1:40469 - 32708 "HINFO IN 1085250392681766432.1461243850492468212. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.135567722s
	I0501 04:17:00.351773    4352 logs.go:123] Gathering logs for kube-proxy [502684407b0c] ...
	I0501 04:17:00.351773    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 502684407b0c"
	I0501 04:17:00.379727    4352 command_runner.go:130] ! I0501 03:52:31.254714       1 server_linux.go:69] "Using iptables proxy"
	I0501 04:17:00.380527    4352 command_runner.go:130] ! I0501 03:52:31.309383       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.28.209.152"]
	I0501 04:17:00.380527    4352 command_runner.go:130] ! I0501 03:52:31.368810       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0501 04:17:00.380527    4352 command_runner.go:130] ! I0501 03:52:31.368955       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0501 04:17:00.380527    4352 command_runner.go:130] ! I0501 03:52:31.368982       1 server_linux.go:165] "Using iptables Proxier"
	I0501 04:17:00.382338    4352 command_runner.go:130] ! I0501 03:52:31.375383       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0501 04:17:00.383519    4352 command_runner.go:130] ! I0501 03:52:31.376367       1 server.go:872] "Version info" version="v1.30.0"
	I0501 04:17:00.383519    4352 command_runner.go:130] ! I0501 03:52:31.376406       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:17:00.383519    4352 command_runner.go:130] ! I0501 03:52:31.379637       1 config.go:192] "Starting service config controller"
	I0501 04:17:00.383519    4352 command_runner.go:130] ! I0501 03:52:31.380342       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0501 04:17:00.383519    4352 command_runner.go:130] ! I0501 03:52:31.380587       1 config.go:101] "Starting endpoint slice config controller"
	I0501 04:17:00.383519    4352 command_runner.go:130] ! I0501 03:52:31.380650       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0501 04:17:00.383519    4352 command_runner.go:130] ! I0501 03:52:31.383140       1 config.go:319] "Starting node config controller"
	I0501 04:17:00.383519    4352 command_runner.go:130] ! I0501 03:52:31.383173       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0501 04:17:00.383519    4352 command_runner.go:130] ! I0501 03:52:31.480698       1 shared_informer.go:320] Caches are synced for service config
	I0501 04:17:00.383519    4352 command_runner.go:130] ! I0501 03:52:31.481316       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0501 04:17:00.383519    4352 command_runner.go:130] ! I0501 03:52:31.483428       1 shared_informer.go:320] Caches are synced for node config
	I0501 04:17:00.384947    4352 logs.go:123] Gathering logs for kindnet [6d5f881ef398] ...
	I0501 04:17:00.384947    4352 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d5f881ef398"
	I0501 04:17:00.415306    4352 command_runner.go:130] ! I0501 04:01:59.122485       1 main.go:227] handling current node
	I0501 04:17:00.415306    4352 command_runner.go:130] ! I0501 04:01:59.122501       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.418139    4352 command_runner.go:130] ! I0501 04:01:59.122510       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:01:59.122690       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:01:59.122722       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:02:09.153658       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:02:09.153775       1 main.go:227] handling current node
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:02:09.153793       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:02:09.153803       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:02:09.153946       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:02:09.153980       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:02:19.161031       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:02:19.161061       1 main.go:227] handling current node
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:02:19.161073       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:02:19.161079       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:02:19.161177       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:02:19.161185       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:02:29.181653       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:02:29.181721       1 main.go:227] handling current node
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:02:29.181735       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:02:29.181742       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:02:29.182277       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:02:29.182369       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:02:39.195902       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:02:39.196079       1 main.go:227] handling current node
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:02:39.196095       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:02:39.196105       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:02:39.196558       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:02:39.196649       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:02:49.209858       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:02:49.209973       1 main.go:227] handling current node
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:02:49.210027       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.418769    4352 command_runner.go:130] ! I0501 04:02:49.210041       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.419327    4352 command_runner.go:130] ! I0501 04:02:49.210461       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.419327    4352 command_runner.go:130] ! I0501 04:02:49.210617       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.419399    4352 command_runner.go:130] ! I0501 04:02:59.219550       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.419399    4352 command_runner.go:130] ! I0501 04:02:59.219615       1 main.go:227] handling current node
	I0501 04:17:00.419399    4352 command_runner.go:130] ! I0501 04:02:59.219631       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.419399    4352 command_runner.go:130] ! I0501 04:02:59.219638       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.419399    4352 command_runner.go:130] ! I0501 04:02:59.220333       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.419399    4352 command_runner.go:130] ! I0501 04:02:59.220436       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.419474    4352 command_runner.go:130] ! I0501 04:03:09.231302       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.419474    4352 command_runner.go:130] ! I0501 04:03:09.232437       1 main.go:227] handling current node
	I0501 04:17:00.419474    4352 command_runner.go:130] ! I0501 04:03:09.232648       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.419474    4352 command_runner.go:130] ! I0501 04:03:09.232851       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.419566    4352 command_runner.go:130] ! I0501 04:03:09.233578       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.419621    4352 command_runner.go:130] ! I0501 04:03:09.233631       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.419621    4352 command_runner.go:130] ! I0501 04:03:19.245975       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.419621    4352 command_runner.go:130] ! I0501 04:03:19.246060       1 main.go:227] handling current node
	I0501 04:17:00.419673    4352 command_runner.go:130] ! I0501 04:03:19.246073       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.419673    4352 command_runner.go:130] ! I0501 04:03:19.246081       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.419673    4352 command_runner.go:130] ! I0501 04:03:19.246386       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.419673    4352 command_runner.go:130] ! I0501 04:03:19.246423       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.419718    4352 command_runner.go:130] ! I0501 04:03:29.258941       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.419718    4352 command_runner.go:130] ! I0501 04:03:29.259020       1 main.go:227] handling current node
	I0501 04:17:00.419762    4352 command_runner.go:130] ! I0501 04:03:29.259036       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.419762    4352 command_runner.go:130] ! I0501 04:03:29.259044       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.419762    4352 command_runner.go:130] ! I0501 04:03:29.259485       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.419762    4352 command_runner.go:130] ! I0501 04:03:29.259520       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.419826    4352 command_runner.go:130] ! I0501 04:03:39.269941       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.419826    4352 command_runner.go:130] ! I0501 04:03:39.270129       1 main.go:227] handling current node
	I0501 04:17:00.419826    4352 command_runner.go:130] ! I0501 04:03:39.270152       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.419826    4352 command_runner.go:130] ! I0501 04:03:39.270161       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.419826    4352 command_runner.go:130] ! I0501 04:03:39.270403       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.419907    4352 command_runner.go:130] ! I0501 04:03:39.270438       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.419907    4352 command_runner.go:130] ! I0501 04:03:49.282880       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.419907    4352 command_runner.go:130] ! I0501 04:03:49.283025       1 main.go:227] handling current node
	I0501 04:17:00.419987    4352 command_runner.go:130] ! I0501 04:03:49.283045       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.419987    4352 command_runner.go:130] ! I0501 04:03:49.283054       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.419987    4352 command_runner.go:130] ! I0501 04:03:49.283773       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.419987    4352 command_runner.go:130] ! I0501 04:03:49.283792       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.419987    4352 command_runner.go:130] ! I0501 04:03:59.297110       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.420052    4352 command_runner.go:130] ! I0501 04:03:59.297155       1 main.go:227] handling current node
	I0501 04:17:00.420052    4352 command_runner.go:130] ! I0501 04:03:59.297169       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.420052    4352 command_runner.go:130] ! I0501 04:03:59.297177       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.420052    4352 command_runner.go:130] ! I0501 04:03:59.297656       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.420052    4352 command_runner.go:130] ! I0501 04:03:59.297688       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.420115    4352 command_runner.go:130] ! I0501 04:04:09.310638       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.420115    4352 command_runner.go:130] ! I0501 04:04:09.311476       1 main.go:227] handling current node
	I0501 04:17:00.420115    4352 command_runner.go:130] ! I0501 04:04:09.311969       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.420115    4352 command_runner.go:130] ! I0501 04:04:09.312340       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.420115    4352 command_runner.go:130] ! I0501 04:04:09.313291       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.420178    4352 command_runner.go:130] ! I0501 04:04:09.313332       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.420178    4352 command_runner.go:130] ! I0501 04:04:19.324939       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.420178    4352 command_runner.go:130] ! I0501 04:04:19.325084       1 main.go:227] handling current node
	I0501 04:17:00.420247    4352 command_runner.go:130] ! I0501 04:04:19.325480       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.420247    4352 command_runner.go:130] ! I0501 04:04:19.325493       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.420247    4352 command_runner.go:130] ! I0501 04:04:19.325923       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.420247    4352 command_runner.go:130] ! I0501 04:04:19.326083       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.420299    4352 command_runner.go:130] ! I0501 04:04:29.332468       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.420299    4352 command_runner.go:130] ! I0501 04:04:29.332576       1 main.go:227] handling current node
	I0501 04:17:00.420299    4352 command_runner.go:130] ! I0501 04:04:29.332619       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.420371    4352 command_runner.go:130] ! I0501 04:04:29.332645       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.420371    4352 command_runner.go:130] ! I0501 04:04:29.332818       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.420447    4352 command_runner.go:130] ! I0501 04:04:29.332831       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.420447    4352 command_runner.go:130] ! I0501 04:04:39.342867       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:04:39.342901       1 main.go:227] handling current node
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:04:39.342914       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:04:39.342921       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:04:39.343433       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:04:39.343593       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:04:49.364771       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:04:49.364905       1 main.go:227] handling current node
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:04:49.364921       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:04:49.364930       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:04:49.365166       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:04:49.365205       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:04:59.379243       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:04:59.379352       1 main.go:227] handling current node
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:04:59.379369       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:04:59.379377       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:04:59.379531       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:04:59.379564       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:05:09.389743       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:05:09.390518       1 main.go:227] handling current node
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:05:09.390622       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:05:09.390636       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:05:09.390894       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:05:09.391049       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:05:19.400837       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:05:19.401285       1 main.go:227] handling current node
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:05:19.401439       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:05:19.401572       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:05:19.401956       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:05:19.402136       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:05:29.422040       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:05:29.422249       1 main.go:227] handling current node
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:05:29.422285       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:05:29.422311       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:05:29.422521       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:05:29.422723       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:05:39.429807       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:05:39.429856       1 main.go:227] handling current node
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:05:39.429874       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:05:39.429881       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:05:39.430903       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.420512    4352 command_runner.go:130] ! I0501 04:05:39.431340       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.421059    4352 command_runner.go:130] ! I0501 04:05:49.445455       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.421059    4352 command_runner.go:130] ! I0501 04:05:49.445594       1 main.go:227] handling current node
	I0501 04:17:00.421059    4352 command_runner.go:130] ! I0501 04:05:49.445610       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.421059    4352 command_runner.go:130] ! I0501 04:05:49.445619       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.421122    4352 command_runner.go:130] ! I0501 04:05:49.445751       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.421122    4352 command_runner.go:130] ! I0501 04:05:49.445765       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.421122    4352 command_runner.go:130] ! I0501 04:05:59.461135       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.421122    4352 command_runner.go:130] ! I0501 04:05:59.461248       1 main.go:227] handling current node
	I0501 04:17:00.421122    4352 command_runner.go:130] ! I0501 04:05:59.461264       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.421122    4352 command_runner.go:130] ! I0501 04:05:59.461273       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.421122    4352 command_runner.go:130] ! I0501 04:05:59.461947       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.421122    4352 command_runner.go:130] ! I0501 04:05:59.462094       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.421122    4352 command_runner.go:130] ! I0501 04:06:09.469509       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.421122    4352 command_runner.go:130] ! I0501 04:06:09.469615       1 main.go:227] handling current node
	I0501 04:17:00.421237    4352 command_runner.go:130] ! I0501 04:06:09.469636       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.421237    4352 command_runner.go:130] ! I0501 04:06:09.469646       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.421237    4352 command_runner.go:130] ! I0501 04:06:09.470218       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.421237    4352 command_runner.go:130] ! I0501 04:06:09.470387       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.421237    4352 command_runner.go:130] ! I0501 04:06:19.486501       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.421302    4352 command_runner.go:130] ! I0501 04:06:19.486605       1 main.go:227] handling current node
	I0501 04:17:00.421302    4352 command_runner.go:130] ! I0501 04:06:19.486621       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.421302    4352 command_runner.go:130] ! I0501 04:06:19.486629       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.421302    4352 command_runner.go:130] ! I0501 04:06:19.486864       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.421302    4352 command_runner.go:130] ! I0501 04:06:19.486946       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.421302    4352 command_runner.go:130] ! I0501 04:06:29.503311       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.421302    4352 command_runner.go:130] ! I0501 04:06:29.503476       1 main.go:227] handling current node
	I0501 04:17:00.421392    4352 command_runner.go:130] ! I0501 04:06:29.503492       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.421392    4352 command_runner.go:130] ! I0501 04:06:29.503503       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.421441    4352 command_runner.go:130] ! I0501 04:06:29.503633       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.421441    4352 command_runner.go:130] ! I0501 04:06:29.503843       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.421441    4352 command_runner.go:130] ! I0501 04:06:39.528749       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.421441    4352 command_runner.go:130] ! I0501 04:06:39.528837       1 main.go:227] handling current node
	I0501 04:17:00.421441    4352 command_runner.go:130] ! I0501 04:06:39.528853       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.421441    4352 command_runner.go:130] ! I0501 04:06:39.528861       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.421548    4352 command_runner.go:130] ! I0501 04:06:39.529235       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.421548    4352 command_runner.go:130] ! I0501 04:06:39.529373       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.421548    4352 command_runner.go:130] ! I0501 04:06:49.535984       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.421590    4352 command_runner.go:130] ! I0501 04:06:49.536067       1 main.go:227] handling current node
	I0501 04:17:00.421590    4352 command_runner.go:130] ! I0501 04:06:49.536082       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.421590    4352 command_runner.go:130] ! I0501 04:06:49.536092       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.421642    4352 command_runner.go:130] ! I0501 04:06:49.536689       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.421642    4352 command_runner.go:130] ! I0501 04:06:49.536802       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.421642    4352 command_runner.go:130] ! I0501 04:06:59.550480       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.421687    4352 command_runner.go:130] ! I0501 04:06:59.551072       1 main.go:227] handling current node
	I0501 04:17:00.421687    4352 command_runner.go:130] ! I0501 04:06:59.551257       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.421687    4352 command_runner.go:130] ! I0501 04:06:59.551358       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.421731    4352 command_runner.go:130] ! I0501 04:06:59.551696       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.421731    4352 command_runner.go:130] ! I0501 04:06:59.551781       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.421771    4352 command_runner.go:130] ! I0501 04:07:09.569460       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.421771    4352 command_runner.go:130] ! I0501 04:07:09.569627       1 main.go:227] handling current node
	I0501 04:17:00.421832    4352 command_runner.go:130] ! I0501 04:07:09.569642       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.421871    4352 command_runner.go:130] ! I0501 04:07:09.569651       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.421912    4352 command_runner.go:130] ! I0501 04:07:09.570296       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.421912    4352 command_runner.go:130] ! I0501 04:07:09.570434       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.422182    4352 command_runner.go:130] ! I0501 04:07:19.577507       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.422226    4352 command_runner.go:130] ! I0501 04:07:19.577599       1 main.go:227] handling current node
	I0501 04:17:00.422226    4352 command_runner.go:130] ! I0501 04:07:19.577615       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.422226    4352 command_runner.go:130] ! I0501 04:07:19.577730       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.422269    4352 command_runner.go:130] ! I0501 04:07:19.578102       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.422269    4352 command_runner.go:130] ! I0501 04:07:19.578208       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.422269    4352 command_runner.go:130] ! I0501 04:07:29.592703       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.422309    4352 command_runner.go:130] ! I0501 04:07:29.592845       1 main.go:227] handling current node
	I0501 04:17:00.422309    4352 command_runner.go:130] ! I0501 04:07:29.592861       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.427137    4352 command_runner.go:130] ! I0501 04:07:29.592869       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.427137    4352 command_runner.go:130] ! I0501 04:07:29.593139       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.427137    4352 command_runner.go:130] ! I0501 04:07:29.593174       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.427712    4352 command_runner.go:130] ! I0501 04:07:39.602034       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.427771    4352 command_runner.go:130] ! I0501 04:07:39.602064       1 main.go:227] handling current node
	I0501 04:17:00.427771    4352 command_runner.go:130] ! I0501 04:07:39.602077       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.427814    4352 command_runner.go:130] ! I0501 04:07:39.602084       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.427814    4352 command_runner.go:130] ! I0501 04:07:39.602283       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.427814    4352 command_runner.go:130] ! I0501 04:07:39.602300       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.427814    4352 command_runner.go:130] ! I0501 04:07:49.837563       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.427814    4352 command_runner.go:130] ! I0501 04:07:49.837638       1 main.go:227] handling current node
	I0501 04:17:00.427814    4352 command_runner.go:130] ! I0501 04:07:49.837652       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.428592    4352 command_runner.go:130] ! I0501 04:07:49.837660       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.428592    4352 command_runner.go:130] ! I0501 04:07:49.837875       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.428592    4352 command_runner.go:130] ! I0501 04:07:49.837955       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.428592    4352 command_runner.go:130] ! I0501 04:07:59.851818       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.428592    4352 command_runner.go:130] ! I0501 04:07:59.852109       1 main.go:227] handling current node
	I0501 04:17:00.428592    4352 command_runner.go:130] ! I0501 04:07:59.852127       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:07:59.852753       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:07:59.853129       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:07:59.853164       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:09.860338       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:09.860453       1 main.go:227] handling current node
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:09.860472       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:09.860482       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:09.860626       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:09.861316       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:19.877403       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:19.877515       1 main.go:227] handling current node
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:19.877530       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:19.877538       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:19.877838       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:19.877874       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:29.892899       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:29.892926       1 main.go:227] handling current node
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:29.892937       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:29.892944       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:29.893106       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:29.893180       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:39.901877       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:39.901929       1 main.go:227] handling current node
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:39.901943       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:39.901951       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:39.902578       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:39.902678       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:49.918941       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:49.919115       1 main.go:227] handling current node
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:49.919130       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:49.919139       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:49.919950       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:49.919968       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:59.933101       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:59.933154       1 main.go:227] handling current node
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:59.933648       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:59.933667       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.429180    4352 command_runner.go:130] ! I0501 04:08:59.934094       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.429702    4352 command_runner.go:130] ! I0501 04:08:59.934127       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.429702    4352 command_runner.go:130] ! I0501 04:09:09.948569       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.429702    4352 command_runner.go:130] ! I0501 04:09:09.948615       1 main.go:227] handling current node
	I0501 04:17:00.429702    4352 command_runner.go:130] ! I0501 04:09:09.948629       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:09:09.948637       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:09:09.949057       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:09:09.949076       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:09:19.958099       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:09:19.958261       1 main.go:227] handling current node
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:09:19.958282       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:09:19.958294       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:09:19.958880       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:09:19.959055       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:09:29.975626       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:09:29.975765       1 main.go:227] handling current node
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:09:29.975790       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:09:29.975803       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:09:29.976360       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:09:29.976488       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:09:39.985296       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:09:39.985455       1 main.go:227] handling current node
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:09:39.985488       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:09:39.985497       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:09:39.986552       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:09:39.986590       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:09:49.995944       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:09:49.996021       1 main.go:227] handling current node
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:09:49.996036       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:09:49.996044       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:09:49.996649       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:09:49.996720       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:10:00.003190       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:10:00.003239       1 main.go:227] handling current node
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:10:00.003253       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:10:00.003261       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:10:00.003479       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:10:00.003516       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:10:10.023328       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:10:10.023430       1 main.go:227] handling current node
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:10:10.023445       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:10:10.023460       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:10:10.023613       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:10:10.023647       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:10:20.030526       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:10:20.030616       1 main.go:227] handling current node
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:10:20.030632       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.429743    4352 command_runner.go:130] ! I0501 04:10:20.030641       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.430273    4352 command_runner.go:130] ! I0501 04:10:20.030856       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.430273    4352 command_runner.go:130] ! I0501 04:10:20.030980       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.430273    4352 command_runner.go:130] ! I0501 04:10:30.038164       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.430273    4352 command_runner.go:130] ! I0501 04:10:30.038263       1 main.go:227] handling current node
	I0501 04:17:00.430273    4352 command_runner.go:130] ! I0501 04:10:30.038278       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.430273    4352 command_runner.go:130] ! I0501 04:10:30.038287       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.430273    4352 command_runner.go:130] ! I0501 04:10:30.038931       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.430386    4352 command_runner.go:130] ! I0501 04:10:30.039072       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.430386    4352 command_runner.go:130] ! I0501 04:10:40.053866       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.430428    4352 command_runner.go:130] ! I0501 04:10:40.053915       1 main.go:227] handling current node
	I0501 04:17:00.430428    4352 command_runner.go:130] ! I0501 04:10:40.053929       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.430446    4352 command_runner.go:130] ! I0501 04:10:40.053936       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.430481    4352 command_runner.go:130] ! I0501 04:10:40.054259       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.430481    4352 command_runner.go:130] ! I0501 04:10:40.054295       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.430481    4352 command_runner.go:130] ! I0501 04:10:50.066490       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.430514    4352 command_runner.go:130] ! I0501 04:10:50.066542       1 main.go:227] handling current node
	I0501 04:17:00.430514    4352 command_runner.go:130] ! I0501 04:10:50.066560       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.430565    4352 command_runner.go:130] ! I0501 04:10:50.066567       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.430565    4352 command_runner.go:130] ! I0501 04:10:50.067066       1 main.go:223] Handling node with IPs: map[172.28.217.21:{}]
	I0501 04:17:00.430598    4352 command_runner.go:130] ! I0501 04:10:50.067210       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.2.0/24] 
	I0501 04:17:00.430598    4352 command_runner.go:130] ! I0501 04:11:00.075901       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.430598    4352 command_runner.go:130] ! I0501 04:11:00.076052       1 main.go:227] handling current node
	I0501 04:17:00.430649    4352 command_runner.go:130] ! I0501 04:11:00.076069       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.430681    4352 command_runner.go:130] ! I0501 04:11:00.076078       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.430681    4352 command_runner.go:130] ! I0501 04:11:10.087907       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.430681    4352 command_runner.go:130] ! I0501 04:11:10.088124       1 main.go:227] handling current node
	I0501 04:17:00.430681    4352 command_runner.go:130] ! I0501 04:11:10.088140       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.430732    4352 command_runner.go:130] ! I0501 04:11:10.088148       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.430732    4352 command_runner.go:130] ! I0501 04:11:10.088875       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:17:00.430766    4352 command_runner.go:130] ! I0501 04:11:10.088954       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:17:00.430766    4352 command_runner.go:130] ! I0501 04:11:10.089178       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.28.223.145 Flags: [] Table: 0} 
	I0501 04:17:00.430766    4352 command_runner.go:130] ! I0501 04:11:20.103399       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.430817    4352 command_runner.go:130] ! I0501 04:11:20.103511       1 main.go:227] handling current node
	I0501 04:17:00.430817    4352 command_runner.go:130] ! I0501 04:11:20.103528       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.430849    4352 command_runner.go:130] ! I0501 04:11:20.103538       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.430849    4352 command_runner.go:130] ! I0501 04:11:20.103879       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:17:00.430896    4352 command_runner.go:130] ! I0501 04:11:20.103916       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:17:00.430896    4352 command_runner.go:130] ! I0501 04:11:30.114473       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.430896    4352 command_runner.go:130] ! I0501 04:11:30.115083       1 main.go:227] handling current node
	I0501 04:17:00.430926    4352 command_runner.go:130] ! I0501 04:11:30.115256       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.430974    4352 command_runner.go:130] ! I0501 04:11:30.115463       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.430974    4352 command_runner.go:130] ! I0501 04:11:30.116474       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:17:00.430974    4352 command_runner.go:130] ! I0501 04:11:30.116611       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:17:00.431007    4352 command_runner.go:130] ! I0501 04:11:40.124324       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.431007    4352 command_runner.go:130] ! I0501 04:11:40.124371       1 main.go:227] handling current node
	I0501 04:17:00.431057    4352 command_runner.go:130] ! I0501 04:11:40.124384       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.431057    4352 command_runner.go:130] ! I0501 04:11:40.124392       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.431090    4352 command_runner.go:130] ! I0501 04:11:40.124558       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:17:00.431090    4352 command_runner.go:130] ! I0501 04:11:40.124570       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:17:00.431090    4352 command_runner.go:130] ! I0501 04:11:50.138059       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.431137    4352 command_runner.go:130] ! I0501 04:11:50.138102       1 main.go:227] handling current node
	I0501 04:17:00.431137    4352 command_runner.go:130] ! I0501 04:11:50.138116       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.431137    4352 command_runner.go:130] ! I0501 04:11:50.138123       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:11:50.138826       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:11:50.138936       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:00.155704       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:00.155799       1 main.go:227] handling current node
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:00.155823       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:00.155832       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:00.156502       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:00.156549       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:10.164706       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:10.164754       1 main.go:227] handling current node
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:10.164767       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:10.164774       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:10.164887       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:10.165094       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:20.178957       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:20.179142       1 main.go:227] handling current node
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:20.179159       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:20.179178       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:20.179694       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:20.179871       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:30.195829       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:30.196251       1 main.go:227] handling current node
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:30.196390       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:30.196494       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:30.197097       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:30.197115       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:40.209828       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:40.210095       1 main.go:227] handling current node
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:40.210203       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:40.210235       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:40.210464       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:40.210571       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:50.223457       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:50.224132       1 main.go:227] handling current node
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:50.224156       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:50.224167       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:50.224602       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:12:50.224704       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:13:00.241709       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:13:00.241841       1 main.go:227] handling current node
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:13:00.242114       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:17:00.431170    4352 command_runner.go:130] ! I0501 04:13:00.242393       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:17:00.431757    4352 command_runner.go:130] ! I0501 04:13:00.242840       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:17:00.431757    4352 command_runner.go:130] ! I0501 04:13:00.242886       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
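	The kindnetd block above is its periodic reconcile loop: every ~10 seconds it walks the node list, logs "handling current node" for the node it runs on, and makes sure a route to each peer node's pod CIDR via that node's IP is in place. A route is only logged as added when a node's IP or CIDR actually changes, as at 04:11:10 when multinode-289800-m03 came back as 172.28.223.145 with 10.244.3.0/24. A minimal sketch of that step, assuming github.com/vishvananda/netlink (whose Route string format matches the logged "Adding route {...}") and a simplified stand-in node type, not kindnet's actual code:

```go
package main

import (
	"fmt"
	"net"

	"github.com/vishvananda/netlink" // Linux-only; runs inside the node, not on the Windows host
)

type node struct {
	name    string
	ip      net.IP // node InternalIP, e.g. 172.28.219.162
	podCIDR string // e.g. "10.244.1.0/24"
	current bool   // true for the node this agent runs on
}

// reconcile runs every ~10s: the local node needs no route; every peer gets
// a route to its pod CIDR via its node IP. RouteReplace is idempotent, which
// is why "Adding route" only appears when a node's IP/CIDR changes.
func reconcile(nodes []node) error {
	for _, n := range nodes {
		fmt.Printf("Handling node with IPs: map[%s:{}]\n", n.ip)
		if n.current {
			fmt.Println("handling current node")
			continue
		}
		_, dst, err := net.ParseCIDR(n.podCIDR)
		if err != nil {
			return err
		}
		fmt.Printf("Node %s has CIDR [%s]\n", n.name, n.podCIDR)
		if err := netlink.RouteReplace(&netlink.Route{Dst: dst, Gw: n.ip}); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	// stand-in data mirroring the logged cluster shape
	_ = reconcile([]node{
		{name: "multinode-289800", ip: net.ParseIP("172.28.209.152"), current: true},
		{name: "multinode-289800-m02", ip: net.ParseIP("172.28.219.162"), podCIDR: "10.244.1.0/24"},
	})
}
```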
	I0501 04:17:02.963180    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods
	I0501 04:17:02.963180    4352 round_trippers.go:469] Request Headers:
	I0501 04:17:02.963180    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:17:02.963180    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:17:02.969021    4352 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 04:17:02.969021    4352 round_trippers.go:577] Response Headers:
	I0501 04:17:02.969021    4352 round_trippers.go:580]     Audit-Id: c0612144-a145-4879-a876-258e0bbd60ed
	I0501 04:17:02.969021    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:17:02.969021    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:17:02.969021    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:17:02.969021    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:17:02.969021    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:17:02 GMT
	I0501 04:17:02.971183    4352 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1995"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1973","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 94403 chars]
	I0501 04:17:02.975122    4352 system_pods.go:59] 13 kube-system pods found
	I0501 04:17:02.975122    4352 system_pods.go:61] "coredns-7db6d8ff4d-8w9hq" [e3a349e9-97d8-4bba-8eac-deff1948600a] Running
	I0501 04:17:02.975122    4352 system_pods.go:61] "coredns-7db6d8ff4d-x9zrw" [0b91b14d-bed3-4889-b193-db53daccd395] Running
	I0501 04:17:02.975122    4352 system_pods.go:61] "etcd-multinode-289800" [aaf534b6-9f4c-445d-afb9-bd225e1a77fd] Running
	I0501 04:17:02.975122    4352 system_pods.go:61] "kindnet-4m5vg" [4d06e665-b4c1-40b9-bbb8-c35bfe35385e] Running
	I0501 04:17:02.975122    4352 system_pods.go:61] "kindnet-gzz7p" [576f33f3-f244-48f0-ae69-30c8f38ed871] Running
	I0501 04:17:02.975122    4352 system_pods.go:61] "kindnet-vcxkr" [72ef61d4-4437-40da-86e7-4d7eb386b6de] Running
	I0501 04:17:02.975122    4352 system_pods.go:61] "kube-apiserver-multinode-289800" [0ee77673-e4b3-4fba-a855-ef6876337257] Running
	I0501 04:17:02.975122    4352 system_pods.go:61] "kube-controller-manager-multinode-289800" [fd3e5c6f-55cb-47c8-b0bc-c9b0dbe3b318] Running
	I0501 04:17:02.975122    4352 system_pods.go:61] "kube-proxy-bp9zx" [aba82e50-b8f8-40b4-b08a-6d045314d6b6] Running
	I0501 04:17:02.975122    4352 system_pods.go:61] "kube-proxy-g8mbm" [ef0e1817-6682-4b8f-affa-c10021247006] Running
	I0501 04:17:02.975122    4352 system_pods.go:61] "kube-proxy-rlzp8" [b37d8d5d-a7cb-4848-a8a2-11d9761e08d6] Running
	I0501 04:17:02.975122    4352 system_pods.go:61] "kube-scheduler-multinode-289800" [c7518f03-993b-432f-b742-8805dd2167a7] Running
	I0501 04:17:02.975122    4352 system_pods.go:61] "storage-provisioner" [b8d2a827-d9a6-419a-a076-c7695a16a2b5] Running
	I0501 04:17:02.975122    4352 system_pods.go:74] duration metric: took 3.9379401s to wait for pod list to return data ...
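	The system_pods lines are minikube listing the kube-system pods and checking that each reports phase Running. The gist with client-go, as a sketch (the clientset would come from kubernetes.NewForConfig on the profile's kubeconfig; error handling is simplified):

```go
package sketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// systemPodsRunning does one GET /api/v1/namespaces/kube-system/pods and
// reports whether every pod is in phase Running, printing lines in the
// same shape as the system_pods output above.
func systemPodsRunning(ctx context.Context, cs kubernetes.Interface) (bool, error) {
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return false, err
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	allRunning := true
	for _, p := range pods.Items {
		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
		if p.Status.Phase != corev1.PodRunning {
			allRunning = false
		}
	}
	return allRunning, nil
}
```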
	I0501 04:17:02.975791    4352 default_sa.go:34] waiting for default service account to be created ...
	I0501 04:17:02.975791    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/default/serviceaccounts
	I0501 04:17:02.975791    4352 round_trippers.go:469] Request Headers:
	I0501 04:17:02.975791    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:17:02.975791    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:17:02.979648    4352 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0501 04:17:02.979648    4352 round_trippers.go:577] Response Headers:
	I0501 04:17:02.979648    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:17:02 GMT
	I0501 04:17:02.979648    4352 round_trippers.go:580]     Audit-Id: 6d591c6d-dd15-4103-bf92-e58d05b6d78b
	I0501 04:17:02.979648    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:17:02.979648    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:17:02.979648    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:17:02.979648    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:17:02.979648    4352 round_trippers.go:580]     Content-Length: 262
	I0501 04:17:02.979648    4352 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1995"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"b7dbf8d0-35c5-4373-a233-f0386cee7e97","resourceVersion":"307","creationTimestamp":"2024-05-01T03:52:28Z"}}]}
	I0501 04:17:02.980244    4352 default_sa.go:45] found service account: "default"
	I0501 04:17:02.980244    4352 default_sa.go:55] duration metric: took 4.4528ms for default service account to be created ...
	I0501 04:17:02.980244    4352 system_pods.go:116] waiting for k8s-apps to be running ...
	I0501 04:17:02.980244    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/namespaces/kube-system/pods
	I0501 04:17:02.980244    4352 round_trippers.go:469] Request Headers:
	I0501 04:17:02.980803    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:17:02.980803    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:17:02.986165    4352 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0501 04:17:02.986165    4352 round_trippers.go:577] Response Headers:
	I0501 04:17:02.986680    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:17:02 GMT
	I0501 04:17:02.986680    4352 round_trippers.go:580]     Audit-Id: 30e3890d-b5ac-488d-b5ea-eb7f08c28637
	I0501 04:17:02.986680    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:17:02.986680    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:17:02.986680    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:17:02.986680    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:17:02.988321    4352 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1995"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-8w9hq","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"e3a349e9-97d8-4bba-8eac-deff1948600a","resourceVersion":"1973","creationTimestamp":"2024-05-01T03:52:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"274d7347-a7ac-4976-95e5-2f1a95eac4c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-01T03:52:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"274d7347-a7ac-4976-95e5-2f1a95eac4c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 94403 chars]
	I0501 04:17:02.992887    4352 system_pods.go:86] 13 kube-system pods found
	I0501 04:17:02.992887    4352 system_pods.go:89] "coredns-7db6d8ff4d-8w9hq" [e3a349e9-97d8-4bba-8eac-deff1948600a] Running
	I0501 04:17:02.992887    4352 system_pods.go:89] "coredns-7db6d8ff4d-x9zrw" [0b91b14d-bed3-4889-b193-db53daccd395] Running
	I0501 04:17:02.992887    4352 system_pods.go:89] "etcd-multinode-289800" [aaf534b6-9f4c-445d-afb9-bd225e1a77fd] Running
	I0501 04:17:02.992887    4352 system_pods.go:89] "kindnet-4m5vg" [4d06e665-b4c1-40b9-bbb8-c35bfe35385e] Running
	I0501 04:17:02.992887    4352 system_pods.go:89] "kindnet-gzz7p" [576f33f3-f244-48f0-ae69-30c8f38ed871] Running
	I0501 04:17:02.992887    4352 system_pods.go:89] "kindnet-vcxkr" [72ef61d4-4437-40da-86e7-4d7eb386b6de] Running
	I0501 04:17:02.992887    4352 system_pods.go:89] "kube-apiserver-multinode-289800" [0ee77673-e4b3-4fba-a855-ef6876337257] Running
	I0501 04:17:02.992887    4352 system_pods.go:89] "kube-controller-manager-multinode-289800" [fd3e5c6f-55cb-47c8-b0bc-c9b0dbe3b318] Running
	I0501 04:17:02.992887    4352 system_pods.go:89] "kube-proxy-bp9zx" [aba82e50-b8f8-40b4-b08a-6d045314d6b6] Running
	I0501 04:17:02.992887    4352 system_pods.go:89] "kube-proxy-g8mbm" [ef0e1817-6682-4b8f-affa-c10021247006] Running
	I0501 04:17:02.992887    4352 system_pods.go:89] "kube-proxy-rlzp8" [b37d8d5d-a7cb-4848-a8a2-11d9761e08d6] Running
	I0501 04:17:02.992887    4352 system_pods.go:89] "kube-scheduler-multinode-289800" [c7518f03-993b-432f-b742-8805dd2167a7] Running
	I0501 04:17:02.992887    4352 system_pods.go:89] "storage-provisioner" [b8d2a827-d9a6-419a-a076-c7695a16a2b5] Running
	I0501 04:17:02.992887    4352 system_pods.go:126] duration metric: took 12.6433ms to wait for k8s-apps to be running ...
	I0501 04:17:02.992887    4352 system_svc.go:44] waiting for kubelet service to be running ....
	I0501 04:17:03.009569    4352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 04:17:03.035497    4352 system_svc.go:56] duration metric: took 42.6094ms WaitForService to wait for kubelet
	I0501 04:17:03.035563    4352 kubeadm.go:576] duration metric: took 1m15.1371889s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 04:17:03.035563    4352 node_conditions.go:102] verifying NodePressure condition ...
	I0501 04:17:03.035739    4352 round_trippers.go:463] GET https://172.28.209.199:8443/api/v1/nodes
	I0501 04:17:03.035739    4352 round_trippers.go:469] Request Headers:
	I0501 04:17:03.035739    4352 round_trippers.go:473]     Accept: application/json, */*
	I0501 04:17:03.035739    4352 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0501 04:17:03.043312    4352 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0501 04:17:03.043312    4352 round_trippers.go:577] Response Headers:
	I0501 04:17:03.043512    4352 round_trippers.go:580]     Date: Wed, 01 May 2024 04:17:03 GMT
	I0501 04:17:03.043512    4352 round_trippers.go:580]     Audit-Id: 3cd0513f-9a98-436f-b810-e8270a9db104
	I0501 04:17:03.043512    4352 round_trippers.go:580]     Cache-Control: no-cache, private
	I0501 04:17:03.043512    4352 round_trippers.go:580]     Content-Type: application/json
	I0501 04:17:03.043583    4352 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0efc65b5-b651-4f8a-960e-a2d7d21397d5
	I0501 04:17:03.043583    4352 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: bee1b29d-26be-4280-b85e-dad6f92447df
	I0501 04:17:03.044517    4352 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1995"},"items":[{"metadata":{"name":"multinode-289800","uid":"67633edf-4176-4c7e-b917-dd5653442344","resourceVersion":"1932","creationTimestamp":"2024-05-01T03:52:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-289800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"2c4eae41cda912e6a762d77f0d8868e00f97bb4e","minikube.k8s.io/name":"multinode-289800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_01T03_52_17_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16260 chars]
	I0501 04:17:03.045118    4352 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 04:17:03.045118    4352 node_conditions.go:123] node cpu capacity is 2
	I0501 04:17:03.045118    4352 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 04:17:03.045118    4352 node_conditions.go:123] node cpu capacity is 2
	I0501 04:17:03.045118    4352 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 04:17:03.045118    4352 node_conditions.go:123] node cpu capacity is 2
	I0501 04:17:03.045118    4352 node_conditions.go:105] duration metric: took 9.5544ms to run NodePressure ...
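	The NodePressure step reads each node's capacity straight off the NodeList it just fetched; the three Ki/cpu pairs above are the cluster's three nodes. A sketch of that read, using the same hypothetical clientset as the sketch above:

```go
package sketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeCapacities mirrors the node_conditions output above: one
// GET /api/v1/nodes, then capacity values off each node's status.
func nodeCapacities(ctx context.Context, cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node storage ephemeral capacity is %s\n", eph.String()) // e.g. 17734596Ki
		fmt.Printf("node cpu capacity is %s\n", cpu.String())               // e.g. 2
	}
	return nil
}
```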
	I0501 04:17:03.045118    4352 start.go:240] waiting for startup goroutines ...
	I0501 04:17:03.045118    4352 start.go:245] waiting for cluster config update ...
	I0501 04:17:03.045118    4352 start.go:254] writing updated cluster config ...
	I0501 04:17:03.048946    4352 out.go:177] 
	I0501 04:17:03.064002    4352 config.go:182] Loaded profile config "multinode-289800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 04:17:03.064992    4352 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\config.json ...
	I0501 04:17:03.069994    4352 out.go:177] * Starting "multinode-289800-m02" worker node in "multinode-289800" cluster
	I0501 04:17:03.072962    4352 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0501 04:17:03.072962    4352 cache.go:56] Caching tarball of preloaded images
	I0501 04:17:03.073959    4352 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0501 04:17:03.073959    4352 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0501 04:17:03.073959    4352 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\config.json ...
	I0501 04:17:03.075948    4352 start.go:360] acquireMachinesLock for multinode-289800-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 04:17:03.075948    4352 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-289800-m02"
	I0501 04:17:03.076949    4352 start.go:96] Skipping create...Using existing machine configuration
	I0501 04:17:03.076949    4352 fix.go:54] fixHost starting: m02
	I0501 04:17:03.076949    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 04:17:05.266538    4352 main.go:141] libmachine: [stdout =====>] : Off
	
	I0501 04:17:05.266538    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:17:05.266661    4352 fix.go:112] recreateIfNeeded on multinode-289800-m02: state=Stopped err=<nil>
	W0501 04:17:05.266661    4352 fix.go:138] unexpected machine state, will restart: <nil>
	I0501 04:17:05.272891    4352 out.go:177] * Restarting existing hyperv VM for "multinode-289800-m02" ...
	I0501 04:17:05.274991    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-289800-m02
	I0501 04:17:08.356496    4352 main.go:141] libmachine: [stdout =====>] : 
	I0501 04:17:08.356541    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:17:08.356541    4352 main.go:141] libmachine: Waiting for host to start...
	I0501 04:17:08.356772    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 04:17:10.621715    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:17:10.622481    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:17:10.622481    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 04:17:13.137660    4352 main.go:141] libmachine: [stdout =====>] : 
	I0501 04:17:13.137660    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:17:14.150865    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 04:17:16.373399    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:17:16.373521    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:17:16.373521    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 04:17:18.956162    4352 main.go:141] libmachine: [stdout =====>] : 
	I0501 04:17:18.956208    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:17:19.968527    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 04:17:22.203157    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:17:22.203443    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:17:22.203443    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 04:17:24.783022    4352 main.go:141] libmachine: [stdout =====>] : 
	I0501 04:17:24.783370    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:17:25.784603    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 04:17:27.977192    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:17:27.977192    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:17:27.977192    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 04:17:30.528434    4352 main.go:141] libmachine: [stdout =====>] : 
	I0501 04:17:30.528434    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:17:31.528947    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 04:17:33.692945    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:17:33.692945    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:17:33.693188    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 04:17:36.307423    4352 main.go:141] libmachine: [stdout =====>] : 172.28.222.62
	
	I0501 04:17:36.307423    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:17:36.310413    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 04:17:38.423733    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:17:38.424218    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:17:38.424218    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 04:17:41.027609    4352 main.go:141] libmachine: [stdout =====>] : 172.28.222.62
	
	I0501 04:17:41.027885    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:17:41.027885    4352 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-289800\config.json ...
	I0501 04:17:41.030463    4352 machine.go:94] provisionDockerMachine start ...
	I0501 04:17:41.030463    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 04:17:43.178682    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:17:43.178682    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:17:43.179526    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 04:17:45.755095    4352 main.go:141] libmachine: [stdout =====>] : 172.28.222.62
	
	I0501 04:17:45.755095    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:17:45.766496    4352 main.go:141] libmachine: Using SSH client type: native
	I0501 04:17:45.767074    4352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.222.62 22 <nil> <nil>}
	I0501 04:17:45.767074    4352 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 04:17:45.889237    4352 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0501 04:17:45.889237    4352 buildroot.go:166] provisioning hostname "multinode-289800-m02"
	I0501 04:17:45.889237    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 04:17:47.993353    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:17:47.994359    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:17:47.994359    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 04:17:50.632507    4352 main.go:141] libmachine: [stdout =====>] : 172.28.222.62
	
	I0501 04:17:50.632507    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:17:50.638929    4352 main.go:141] libmachine: Using SSH client type: native
	I0501 04:17:50.638929    4352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.222.62 22 <nil> <nil>}
	I0501 04:17:50.639461    4352 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-289800-m02 && echo "multinode-289800-m02" | sudo tee /etc/hostname
	I0501 04:17:50.804381    4352 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-289800-m02
	
	I0501 04:17:50.804455    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 04:17:52.999660    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:17:52.999660    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:17:52.999660    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 04:17:55.607366    4352 main.go:141] libmachine: [stdout =====>] : 172.28.222.62
	
	I0501 04:17:55.607366    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:17:55.614179    4352 main.go:141] libmachine: Using SSH client type: native
	I0501 04:17:55.614851    4352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.222.62 22 <nil> <nil>}
	I0501 04:17:55.614851    4352 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-289800-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-289800-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-289800-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
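The inlined script above is the provisioner's idempotent hostname pin: /etc/hosts is touched only when the node name is absent, either rewriting an existing 127.0.1.1 entry or appending one, so re-running it on the same machine is a no-op. The same pattern as a standalone sketch (NODE stands in for the node's hostname):

	#!/bin/sh
	# Idempotent 127.0.1.1 pinning, same shape as the snippet above.
	NODE="multinode-289800-m02"
	if ! grep -q "[[:space:]]${NODE}\$" /etc/hosts; then
	  if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
	    sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${NODE}/" /etc/hosts
	  else
	    echo "127.0.1.1 ${NODE}" | sudo tee -a /etc/hosts
	  fi
	fi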
	I0501 04:17:55.774126    4352 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 04:17:55.774217    4352 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0501 04:17:55.774289    4352 buildroot.go:174] setting up certificates
	I0501 04:17:55.774289    4352 provision.go:84] configureAuth start
	I0501 04:17:55.774289    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 04:17:57.918796    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:17:57.919487    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:17:57.919487    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 04:18:00.502473    4352 main.go:141] libmachine: [stdout =====>] : 172.28.222.62
	
	I0501 04:18:00.502829    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:18:00.502892    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 04:18:02.590366    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:18:02.591070    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:18:02.591070    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 04:18:05.148233    4352 main.go:141] libmachine: [stdout =====>] : 172.28.222.62
	
	I0501 04:18:05.148889    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:18:05.148889    4352 provision.go:143] copyHostCerts
	I0501 04:18:05.148975    4352 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0501 04:18:05.149285    4352 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0501 04:18:05.149285    4352 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0501 04:18:05.149285    4352 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0501 04:18:05.150759    4352 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0501 04:18:05.150843    4352 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0501 04:18:05.150843    4352 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0501 04:18:05.151871    4352 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0501 04:18:05.152603    4352 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0501 04:18:05.153167    4352 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0501 04:18:05.153167    4352 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0501 04:18:05.153457    4352 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0501 04:18:05.154280    4352 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-289800-m02 san=[127.0.0.1 172.28.222.62 localhost minikube multinode-289800-m02]
	I0501 04:18:05.311191    4352 provision.go:177] copyRemoteCerts
	I0501 04:18:05.326386    4352 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 04:18:05.326386    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 04:18:07.459388    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:18:07.459388    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:18:07.459610    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 04:18:10.046688    4352 main.go:141] libmachine: [stdout =====>] : 172.28.222.62
	
	I0501 04:18:10.046688    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:18:10.047450    4352 sshutil.go:53] new ssh client: &{IP:172.28.222.62 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-289800-m02\id_rsa Username:docker}
	I0501 04:18:10.153058    4352 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8266358s)
	I0501 04:18:10.153058    4352 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0501 04:18:10.153702    4352 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0501 04:18:10.208931    4352 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0501 04:18:10.209465    4352 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0501 04:18:10.262414    4352 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0501 04:18:10.262974    4352 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 04:18:10.313758    4352 provision.go:87] duration metric: took 14.5392904s to configureAuth
	I0501 04:18:10.313758    4352 buildroot.go:189] setting minikube options for container-runtime
	I0501 04:18:10.314419    4352 config.go:182] Loaded profile config "multinode-289800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 04:18:10.314419    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 04:18:12.463097    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:18:12.463386    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:18:12.463530    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 04:18:15.036197    4352 main.go:141] libmachine: [stdout =====>] : 172.28.222.62
	
	I0501 04:18:15.037213    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:18:15.044286    4352 main.go:141] libmachine: Using SSH client type: native
	I0501 04:18:15.045018    4352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.222.62 22 <nil> <nil>}
	I0501 04:18:15.045018    4352 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0501 04:18:15.170285    4352 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0501 04:18:15.170285    4352 buildroot.go:70] root file system type: tmpfs
	I0501 04:18:15.170829    4352 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0501 04:18:15.170960    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 04:18:17.282755    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:18:17.282755    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:18:17.282850    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 04:18:19.942009    4352 main.go:141] libmachine: [stdout =====>] : 172.28.222.62
	
	I0501 04:18:19.942009    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:18:19.948397    4352 main.go:141] libmachine: Using SSH client type: native
	I0501 04:18:19.948883    4352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.222.62 22 <nil> <nil>}
	I0501 04:18:19.949170    4352 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.28.209.199"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0501 04:18:20.116815    4352 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.28.209.199
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0501 04:18:20.116815    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 04:18:22.267186    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:18:22.267186    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:18:22.267458    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 04:18:24.848863    4352 main.go:141] libmachine: [stdout =====>] : 172.28.222.62
	
	I0501 04:18:24.849092    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:18:24.857212    4352 main.go:141] libmachine: Using SSH client type: native
	I0501 04:18:24.858312    4352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.222.62 22 <nil> <nil>}
	I0501 04:18:24.858312    4352 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0501 04:18:27.354933    4352 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
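The command above is a compare-then-swap: the freshly rendered unit replaces the on-disk one, and Docker is reloaded, enabled, and restarted, only when diff reports a difference. On this just-restarted m02 the unit did not exist yet, so diff fails, the install path runs, and enabling the service prints the symlink message. The same pattern in isolation, with illustrative paths:

	new=/lib/systemd/system/docker.service.new
	cur=/lib/systemd/system/docker.service
	# Install and bounce the service only if the rendered unit changed.
	if ! sudo diff -u "$cur" "$new"; then
	  sudo mv "$new" "$cur"
	  sudo systemctl daemon-reload
	  sudo systemctl enable docker
	  sudo systemctl restart docker
	fi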
	
	I0501 04:18:27.355111    4352 machine.go:97] duration metric: took 46.3243011s to provisionDockerMachine
	I0501 04:18:27.355111    4352 start.go:293] postStartSetup for "multinode-289800-m02" (driver="hyperv")
	I0501 04:18:27.355193    4352 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 04:18:27.369117    4352 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 04:18:27.369117    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 04:18:29.459227    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:18:29.459227    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:18:29.459227    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 04:18:32.049572    4352 main.go:141] libmachine: [stdout =====>] : 172.28.222.62
	
	I0501 04:18:32.049572    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:18:32.050853    4352 sshutil.go:53] new ssh client: &{IP:172.28.222.62 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-289800-m02\id_rsa Username:docker}
	I0501 04:18:32.164993    4352 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7958105s)
	I0501 04:18:32.180202    4352 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 04:18:32.189844    4352 command_runner.go:130] > NAME=Buildroot
	I0501 04:18:32.189844    4352 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0501 04:18:32.189844    4352 command_runner.go:130] > ID=buildroot
	I0501 04:18:32.189844    4352 command_runner.go:130] > VERSION_ID=2023.02.9
	I0501 04:18:32.189844    4352 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0501 04:18:32.189844    4352 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 04:18:32.189844    4352 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0501 04:18:32.190501    4352 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0501 04:18:32.191295    4352 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> 142882.pem in /etc/ssl/certs
	I0501 04:18:32.191433    4352 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> /etc/ssl/certs/142882.pem
	I0501 04:18:32.205615    4352 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 04:18:32.224717    4352 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /etc/ssl/certs/142882.pem (1708 bytes)
	I0501 04:18:32.279750    4352 start.go:296] duration metric: took 4.9246015s for postStartSetup
	I0501 04:18:32.279750    4352 fix.go:56] duration metric: took 1m29.2021323s for fixHost
	I0501 04:18:32.279750    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 04:18:34.339829    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:18:34.340734    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:18:34.340734    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 04:18:36.859295    4352 main.go:141] libmachine: [stdout =====>] : 172.28.222.62
	
	I0501 04:18:36.859526    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:18:36.867040    4352 main.go:141] libmachine: Using SSH client type: native
	I0501 04:18:36.867995    4352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.222.62 22 <nil> <nil>}
	I0501 04:18:36.867995    4352 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0501 04:18:37.005661    4352 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714537117.002132397
	
	I0501 04:18:37.005661    4352 fix.go:216] guest clock: 1714537117.002132397
	I0501 04:18:37.005661    4352 fix.go:229] Guest: 2024-05-01 04:18:37.002132397 +0000 UTC Remote: 2024-05-01 04:18:32.2797503 +0000 UTC m=+301.181982701 (delta=4.722382097s)
	I0501 04:18:37.005761    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 04:18:39.121420    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:18:39.121420    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:18:39.121420    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 04:18:41.677809    4352 main.go:141] libmachine: [stdout =====>] : 172.28.222.62
	
	I0501 04:18:41.677809    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:18:41.685063    4352 main.go:141] libmachine: Using SSH client type: native
	I0501 04:18:41.685802    4352 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.222.62 22 <nil> <nil>}
	I0501 04:18:41.686407    4352 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714537117
	I0501 04:18:41.824590    4352 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed May  1 04:18:37 UTC 2024
	
	I0501 04:18:41.824787    4352 fix.go:236] clock set: Wed May  1 04:18:37 UTC 2024
	 (err=<nil>)
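The clock fix above reads the guest clock with date +%s.%N over SSH, compares it against the controller's wall clock (a 4.7 s skew here, accumulated while the VM was off), and rewrites the guest clock with date -s @<epoch>. Measuring the same skew by hand; a sketch with an illustrative guest address:

	# Report guest-vs-local clock skew in seconds (sketch).
	GUEST=172.28.222.62
	guest_ts=$(ssh docker@"$GUEST" 'date +%s.%N')
	local_ts=$(date +%s.%N)
	awk -v g="$guest_ts" -v l="$local_ts" 'BEGIN { printf "skew: %.3fs\n", g - l }'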
	I0501 04:18:41.824787    4352 start.go:83] releasing machines lock for "multinode-289800-m02", held for 1m38.7480982s
	I0501 04:18:41.825001    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 04:18:43.920486    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:18:43.920486    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:18:43.920486    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 04:18:46.459348    4352 main.go:141] libmachine: [stdout =====>] : 172.28.222.62
	
	I0501 04:18:46.459592    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:18:46.462543    4352 out.go:177] * Found network options:
	I0501 04:18:46.465450    4352 out.go:177]   - NO_PROXY=172.28.209.199
	W0501 04:18:46.467850    4352 proxy.go:119] fail to check proxy env: Error ip not in block
	I0501 04:18:46.470364    4352 out.go:177]   - NO_PROXY=172.28.209.199
	W0501 04:18:46.472962    4352 proxy.go:119] fail to check proxy env: Error ip not in block
	W0501 04:18:46.474858    4352 proxy.go:119] fail to check proxy env: Error ip not in block
	I0501 04:18:46.477481    4352 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 04:18:46.477481    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 04:18:46.492157    4352 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0501 04:18:46.492157    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 04:18:48.655157    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:18:48.655436    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:18:48.655436    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 04:18:48.659774    4352 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:18:48.659774    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:18:48.659774    4352 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 04:18:51.367604    4352 main.go:141] libmachine: [stdout =====>] : 172.28.222.62
	
	I0501 04:18:51.367604    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:18:51.371211    4352 sshutil.go:53] new ssh client: &{IP:172.28.222.62 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-289800-m02\id_rsa Username:docker}
	I0501 04:18:51.411664    4352 main.go:141] libmachine: [stdout =====>] : 172.28.222.62
	
	I0501 04:18:51.411707    4352 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:18:51.411762    4352 sshutil.go:53] new ssh client: &{IP:172.28.222.62 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-289800-m02\id_rsa Username:docker}
	I0501 04:18:51.573801    4352 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0501 04:18:51.573801    4352 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0501 04:18:51.573801    4352 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0962825s)
	I0501 04:18:51.573801    4352 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.0816061s)
	W0501 04:18:51.573929    4352 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 04:18:51.593046    4352 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 04:18:51.625364    4352 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0501 04:18:51.626023    4352 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0501 04:18:51.626132    4352 start.go:494] detecting cgroup driver to use...
	I0501 04:18:51.626337    4352 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 04:18:51.675610    4352 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0501 04:18:51.693907    4352 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0501 04:18:51.730463    4352 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0501 04:18:51.750468    4352 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0501 04:18:51.765473    4352 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0501 04:18:51.803530    4352 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 04:18:51.840574    4352 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0501 04:18:51.882587    4352 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 04:18:51.924142    4352 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 04:18:51.964422    4352 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0501 04:18:52.005876    4352 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0501 04:18:52.049028    4352 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0501 04:18:52.091544    4352 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 04:18:52.114703    4352 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0501 04:18:52.129876    4352 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 04:18:52.167809    4352 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 04:18:52.396374    4352 ssh_runner.go:195] Run: sudo systemctl restart containerd
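The run of sed edits above rewrites /etc/containerd/config.toml in place: SystemdCgroup = false for the cgroupfs driver this cluster uses, the legacy io.containerd.runtime.v1.linux and runc.v1 runtimes mapped to io.containerd.runc.v2, the sandbox image pinned to registry.k8s.io/pause:3.9, and unprivileged ports enabled. The two sysctl steps confirm bridge-nf-call-iptables is already 1 and force ip_forward on, both prerequisites for kube-proxy. The echo into /proc is transient; a persistent variant would be, as a sketch:

	# Persist the two networking sysctls instead of echoing into /proc.
	cat <<-'EOF' | sudo tee /etc/sysctl.d/99-kubernetes.conf
	net.bridge.bridge-nf-call-iptables = 1
	net.ipv4.ip_forward = 1
	EOF
	sudo sysctl --system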
	I0501 04:18:52.435618    4352 start.go:494] detecting cgroup driver to use...
	I0501 04:18:52.449268    4352 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0501 04:18:52.473108    4352 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0501 04:18:52.473108    4352 command_runner.go:130] > [Unit]
	I0501 04:18:52.473226    4352 command_runner.go:130] > Description=Docker Application Container Engine
	I0501 04:18:52.473226    4352 command_runner.go:130] > Documentation=https://docs.docker.com
	I0501 04:18:52.473296    4352 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0501 04:18:52.473296    4352 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0501 04:18:52.473296    4352 command_runner.go:130] > StartLimitBurst=3
	I0501 04:18:52.473296    4352 command_runner.go:130] > StartLimitIntervalSec=60
	I0501 04:18:52.473296    4352 command_runner.go:130] > [Service]
	I0501 04:18:52.473296    4352 command_runner.go:130] > Type=notify
	I0501 04:18:52.473296    4352 command_runner.go:130] > Restart=on-failure
	I0501 04:18:52.473368    4352 command_runner.go:130] > Environment=NO_PROXY=172.28.209.199
	I0501 04:18:52.473368    4352 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0501 04:18:52.473398    4352 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0501 04:18:52.473447    4352 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0501 04:18:52.473473    4352 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0501 04:18:52.473473    4352 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0501 04:18:52.473540    4352 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0501 04:18:52.473540    4352 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0501 04:18:52.473608    4352 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0501 04:18:52.473638    4352 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0501 04:18:52.473638    4352 command_runner.go:130] > ExecStart=
	I0501 04:18:52.473638    4352 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0501 04:18:52.473694    4352 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0501 04:18:52.473720    4352 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0501 04:18:52.473720    4352 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0501 04:18:52.473720    4352 command_runner.go:130] > LimitNOFILE=infinity
	I0501 04:18:52.473720    4352 command_runner.go:130] > LimitNPROC=infinity
	I0501 04:18:52.473720    4352 command_runner.go:130] > LimitCORE=infinity
	I0501 04:18:52.473720    4352 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0501 04:18:52.473720    4352 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0501 04:18:52.473775    4352 command_runner.go:130] > TasksMax=infinity
	I0501 04:18:52.473775    4352 command_runner.go:130] > TimeoutStartSec=0
	I0501 04:18:52.473775    4352 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0501 04:18:52.473802    4352 command_runner.go:130] > Delegate=yes
	I0501 04:18:52.473802    4352 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0501 04:18:52.473802    4352 command_runner.go:130] > KillMode=process
	I0501 04:18:52.473802    4352 command_runner.go:130] > [Install]
	I0501 04:18:52.473802    4352 command_runner.go:130] > WantedBy=multi-user.target
	I0501 04:18:52.486804    4352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 04:18:52.523439    4352 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 04:18:52.573111    4352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 04:18:52.615120    4352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 04:18:52.660845    4352 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0501 04:18:52.723455    4352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 04:18:52.751408    4352 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 04:18:52.793011    4352 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0501 04:18:52.806591    4352 ssh_runner.go:195] Run: which cri-dockerd
	I0501 04:18:52.812592    4352 command_runner.go:130] > /usr/bin/cri-dockerd
	I0501 04:18:52.826322    4352 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0501 04:18:52.848919    4352 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0501 04:18:52.898955    4352 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0501 04:18:53.113927    4352 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0501 04:18:53.313445    4352 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0501 04:18:53.313510    4352 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0501 04:18:53.365106    4352 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 04:18:53.575107    4352 ssh_runner.go:195] Run: sudo systemctl restart docker
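The closing steps point crictl at cri-dockerd, drop a 10-cni.conf override for cri-docker.service, write a 130-byte /etc/docker/daemon.json selecting the cgroupfs driver, and restart Docker. The log records only the file's size, not its payload; a daemon.json of that shape might look like the following sketch (hypothetical contents):

	# Hypothetical shape of the daemon.json written above; the log does
	# not show the actual payload, only its 130-byte size.
	cat <<-'EOF' | sudo tee /etc/docker/daemon.json
	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"]
	}
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart docker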
	
	
	==> Docker <==
	May 01 04:16:55 multinode-289800 dockerd[1045]: 2024/05/01 04:16:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 04:16:55 multinode-289800 dockerd[1045]: 2024/05/01 04:16:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 04:16:55 multinode-289800 dockerd[1045]: 2024/05/01 04:16:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 04:16:55 multinode-289800 dockerd[1045]: 2024/05/01 04:16:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 04:16:55 multinode-289800 dockerd[1045]: 2024/05/01 04:16:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 04:16:55 multinode-289800 dockerd[1045]: 2024/05/01 04:16:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 04:16:55 multinode-289800 dockerd[1045]: 2024/05/01 04:16:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 04:16:56 multinode-289800 dockerd[1045]: 2024/05/01 04:16:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 04:16:56 multinode-289800 dockerd[1045]: 2024/05/01 04:16:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 04:16:56 multinode-289800 dockerd[1045]: 2024/05/01 04:16:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 04:16:56 multinode-289800 dockerd[1045]: 2024/05/01 04:16:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 04:16:59 multinode-289800 dockerd[1045]: 2024/05/01 04:16:59 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 04:16:59 multinode-289800 dockerd[1045]: 2024/05/01 04:16:59 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 04:16:59 multinode-289800 dockerd[1045]: 2024/05/01 04:16:59 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 04:16:59 multinode-289800 dockerd[1045]: 2024/05/01 04:16:59 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 04:16:59 multinode-289800 dockerd[1045]: 2024/05/01 04:16:59 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 04:16:59 multinode-289800 dockerd[1045]: 2024/05/01 04:16:59 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 04:16:59 multinode-289800 dockerd[1045]: 2024/05/01 04:16:59 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 04:16:59 multinode-289800 dockerd[1045]: 2024/05/01 04:16:59 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 04:16:59 multinode-289800 dockerd[1045]: 2024/05/01 04:16:59 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 04:17:00 multinode-289800 dockerd[1045]: 2024/05/01 04:17:00 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 04:17:00 multinode-289800 dockerd[1045]: 2024/05/01 04:17:00 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 04:17:00 multinode-289800 dockerd[1045]: 2024/05/01 04:17:00 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 04:17:00 multinode-289800 dockerd[1045]: 2024/05/01 04:17:00 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 01 04:17:00 multinode-289800 dockerd[1045]: 2024/05/01 04:17:00 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1efd236274eb6       8c811b4aec35f                                                                                         2 minutes ago       Running             busybox                   1                   b85f507755ab5       busybox-fc5497c4f-cc6mk
	b8a9b405d76be       cbb01a7bd410d                                                                                         2 minutes ago       Running             coredns                   1                   2c1e1e1d13f30       coredns-7db6d8ff4d-8w9hq
	8a0208aeafcfe       cbb01a7bd410d                                                                                         2 minutes ago       Running             coredns                   1                   ba9a40d190b00       coredns-7db6d8ff4d-x9zrw
	239a5dfd3ae52       6e38f40d628db                                                                                         3 minutes ago       Running             storage-provisioner       2                   9055d30512df3       storage-provisioner
	b7cae3f6b88bc       4950bb10b3f87                                                                                         3 minutes ago       Running             kindnet-cni               1                   f79e484da66a1       kindnet-vcxkr
	01deddefba52a       6e38f40d628db                                                                                         3 minutes ago       Exited              storage-provisioner       1                   9055d30512df3       storage-provisioner
	3efcc92f817ee       a0bf559e280cf                                                                                         3 minutes ago       Running             kube-proxy                1                   65bff4b6a8ae0       kube-proxy-bp9zx
	34892fdb68983       3861cfcd7c04c                                                                                         3 minutes ago       Running             etcd                      0                   6e076eed49263       etcd-multinode-289800
	18cd30f3ad28f       c42f13656d0b2                                                                                         3 minutes ago       Running             kube-apiserver            0                   51e331e75da77       kube-apiserver-multinode-289800
	66a1b89e6733f       c7aad43836fa5                                                                                         3 minutes ago       Running             kube-controller-manager   1                   3fd53aa8d8f5d       kube-controller-manager-multinode-289800
	eaf69fce5ee36       259c8277fcbbc                                                                                         3 minutes ago       Running             kube-scheduler            1                   a8e27176eab83       kube-scheduler-multinode-289800
	237d3dab2c4e1       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   23 minutes ago      Exited              busybox                   0                   79bf9ebb58e36       busybox-fc5497c4f-cc6mk
	15c4496e3a9f0       cbb01a7bd410d                                                                                         26 minutes ago      Exited              coredns                   0                   baf9e690eb533       coredns-7db6d8ff4d-x9zrw
	3e8d5ff9a9e4a       cbb01a7bd410d                                                                                         26 minutes ago      Exited              coredns                   0                   9d509d032dc60       coredns-7db6d8ff4d-8w9hq
	6d5f881ef3987       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              27 minutes ago      Exited              kindnet-cni               0                   4df6ba73bcf68       kindnet-vcxkr
	502684407b0cf       a0bf559e280cf                                                                                         27 minutes ago      Exited              kube-proxy                0                   79bb6a06ed527       kube-proxy-bp9zx
	4b62556f40bec       c7aad43836fa5                                                                                         27 minutes ago      Exited              kube-controller-manager   0                   f72a1c5b5cdd6       kube-controller-manager-multinode-289800
	06f1f84bfde17       259c8277fcbbc                                                                                         27 minutes ago      Exited              kube-scheduler            0                   479b3ec741bef       kube-scheduler-multinode-289800
	
	
	==> coredns [15c4496e3a9f] <==
	[INFO] plugin/reload: Running configuration SHA512 = 93f4fe825695a41d5751760d66496ca9593f0bf8b9ec4239a6fbc33c30b6f070cfe5a066c5977052ab0ea80abbafa6083758e385108cb741ae48dc4b780ff0f9
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:39552 - 50904 "HINFO IN 5304382971668517624.9064195615153089880. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.182051644s
	[INFO] 10.244.0.4:36718 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000271601s
	[INFO] 10.244.0.4:43708 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.179550625s
	[INFO] 10.244.1.2:58483 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144401s
	[INFO] 10.244.1.2:60628 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.0000807s
	[INFO] 10.244.0.4:37023 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.037009067s
	[INFO] 10.244.0.4:35134 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000257602s
	[INFO] 10.244.0.4:42831 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000330103s
	[INFO] 10.244.0.4:35030 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000223102s
	[INFO] 10.244.1.2:54088 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000207601s
	[INFO] 10.244.1.2:39978 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000230801s
	[INFO] 10.244.1.2:55944 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000162801s
	[INFO] 10.244.1.2:53350 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000088901s
	[INFO] 10.244.0.4:33705 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000251702s
	[INFO] 10.244.0.4:58457 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000202201s
	[INFO] 10.244.1.2:55547 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117201s
	[INFO] 10.244.1.2:52015 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000146501s
	[INFO] 10.244.0.4:59703 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000247901s
	[INFO] 10.244.0.4:43545 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000196701s
	[INFO] 10.244.1.2:36180 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000726s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [3e8d5ff9a9e4] <==
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:47823 - 12804 "HINFO IN 6026210510891441927.5093937837002421400. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.138242746s
	[INFO] 10.244.0.4:41822 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.208275106s
	[INFO] 10.244.0.4:42126 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.044846324s
	[INFO] 10.244.1.2:55497 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000133701s
	[INFO] 10.244.1.2:47095 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000068901s
	[INFO] 10.244.0.4:34122 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000644805s
	[INFO] 10.244.0.4:46878 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000252202s
	[INFO] 10.244.0.4:40098 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000136701s
	[INFO] 10.244.0.4:35873 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.03321874s
	[INFO] 10.244.1.2:36243 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.016690721s
	[INFO] 10.244.1.2:38582 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000648s
	[INFO] 10.244.1.2:43903 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000106801s
	[INFO] 10.244.1.2:34736 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000102201s
	[INFO] 10.244.0.4:54471 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000213002s
	[INFO] 10.244.0.4:34585 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000266702s
	[INFO] 10.244.1.2:55135 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142801s
	[INFO] 10.244.1.2:53626 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000732s
	[INFO] 10.244.0.4:57975 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000425703s
	[INFO] 10.244.0.4:51644 - 5 "PTR IN 1.208.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000121401s
	[INFO] 10.244.1.2:42930 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000246601s
	[INFO] 10.244.1.2:59495 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000199302s
	[INFO] 10.244.1.2:34672 - 5 "PTR IN 1.208.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000155401s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
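	
	Note: the query lines above appear to follow the CoreDNS log plugin's default layout: client addr:port, query ID, a quoted question section (type, class, name, protocol, request size, DO bit, UDP buffer size), then the response code, header flags, response size, and duration. A minimal Go sketch, assuming only that layout (it is not part of the test tooling), that splits one of the lines above into those fields:
	
	package main
	
	import (
		"fmt"
		"strings"
	)
	
	func main() {
		line := `[INFO] 10.244.0.4:42831 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000330103s`
	
		// The question section is the only quoted part, so split on the quotes.
		parts := strings.SplitN(line, `"`, 3)
		head := strings.Fields(parts[0])     // [INFO] client:port - id
		question := strings.Fields(parts[1]) // type class name proto size do bufsize
		tail := strings.Fields(parts[2])     // rcode flags rsize duration
	
		fmt.Println("client:", head[1], "id:", head[3])
		fmt.Println("qtype:", question[0], "name:", question[2], "proto:", question[3])
		fmt.Println("rcode:", tail[0], "flags:", tail[1], "duration:", tail[3])
	}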
	
	
	==> coredns [8a0208aeafcf] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 93f4fe825695a41d5751760d66496ca9593f0bf8b9ec4239a6fbc33c30b6f070cfe5a066c5977052ab0ea80abbafa6083758e385108cb741ae48dc4b780ff0f9
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:52159 - 35492 "HINFO IN 5750380281790413371.3552283498234348593. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.042351696s
	
	
	==> coredns [b8a9b405d76b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 93f4fe825695a41d5751760d66496ca9593f0bf8b9ec4239a6fbc33c30b6f070cfe5a066c5977052ab0ea80abbafa6083758e385108cb741ae48dc4b780ff0f9
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:40469 - 32708 "HINFO IN 1085250392681766432.1461243850492468212. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.135567722s
	
	
	==> describe nodes <==
	Name:               multinode-289800
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-289800
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	                    minikube.k8s.io/name=multinode-289800
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_01T03_52_17_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 May 2024 03:52:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-289800
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 May 2024 04:19:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 May 2024 04:16:16 +0000   Wed, 01 May 2024 03:52:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 May 2024 04:16:16 +0000   Wed, 01 May 2024 03:52:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 May 2024 04:16:16 +0000   Wed, 01 May 2024 03:52:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 May 2024 04:16:16 +0000   Wed, 01 May 2024 04:16:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.209.199
	  Hostname:    multinode-289800
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 f135d6c1a75448b6b1c169fdf59297ca
	  System UUID:                3951d3b5-ddd4-174a-8cfe-7f86ac2b780b
	  Boot ID:                    e7d6b770-0c88-4d74-8b75-d55dec0d45be
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-cc6mk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 coredns-7db6d8ff4d-8w9hq                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 coredns-7db6d8ff4d-x9zrw                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-multinode-289800                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m56s
	  kube-system                 kindnet-vcxkr                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-multinode-289800             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 kube-controller-manager-multinode-289800    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-bp9zx                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-multinode-289800             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 27m                  kube-proxy       
	  Normal  Starting                 3m52s                kube-proxy       
	  Normal  NodeHasSufficientMemory  27m                  kubelet          Node multinode-289800 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m                  kubelet          Node multinode-289800 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m                  kubelet          Node multinode-289800 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 27m                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           27m                  node-controller  Node multinode-289800 event: Registered Node multinode-289800 in Controller
	  Normal  NodeReady                26m                  kubelet          Node multinode-289800 status is now: NodeReady
	  Normal  Starting                 4m2s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m1s (x8 over 4m2s)  kubelet          Node multinode-289800 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m1s (x8 over 4m2s)  kubelet          Node multinode-289800 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m1s (x7 over 4m2s)  kubelet          Node multinode-289800 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m43s                node-controller  Node multinode-289800 event: Registered Node multinode-289800 in Controller
	
	
	Name:               multinode-289800-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-289800-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	                    minikube.k8s.io/name=multinode-289800
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_01T03_55_27_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 May 2024 03:55:27 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-289800-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 May 2024 04:12:29 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 01 May 2024 04:11:48 +0000   Wed, 01 May 2024 04:16:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 01 May 2024 04:11:48 +0000   Wed, 01 May 2024 04:16:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 01 May 2024 04:11:48 +0000   Wed, 01 May 2024 04:16:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 01 May 2024 04:11:48 +0000   Wed, 01 May 2024 04:16:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.28.219.162
	  Hostname:    multinode-289800-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 076f7b95819747b9b94c7306ec3a1144
	  System UUID:                a38b9d92-b32b-ca41-91ed-de4d374d0e70
	  Boot ID:                    c2ea27f4-2800-46b2-ab1f-c82bf0989c34
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-tbxxx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kindnet-gzz7p              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	  kube-system                 kube-proxy-rlzp8           0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23m                kube-proxy       
	  Normal  NodeHasSufficientMemory  24m (x2 over 24m)  kubelet          Node multinode-289800-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24m (x2 over 24m)  kubelet          Node multinode-289800-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24m (x2 over 24m)  kubelet          Node multinode-289800-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           24m                node-controller  Node multinode-289800-m02 event: Registered Node multinode-289800-m02 in Controller
	  Normal  NodeReady                23m                kubelet          Node multinode-289800-m02 status is now: NodeReady
	  Normal  RegisteredNode           3m43s              node-controller  Node multinode-289800-m02 event: Registered Node multinode-289800-m02 in Controller
	  Normal  NodeNotReady             3m3s               node-controller  Node multinode-289800-m02 status is now: NodeNotReady
	
	
	Name:               multinode-289800-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-289800-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e
	                    minikube.k8s.io/name=multinode-289800
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_01T04_11_04_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 May 2024 04:11:04 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-289800-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 May 2024 04:12:05 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 01 May 2024 04:11:11 +0000   Wed, 01 May 2024 04:12:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 01 May 2024 04:11:11 +0000   Wed, 01 May 2024 04:12:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 01 May 2024 04:11:11 +0000   Wed, 01 May 2024 04:12:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 01 May 2024 04:11:11 +0000   Wed, 01 May 2024 04:12:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.28.223.145
	  Hostname:    multinode-289800-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 7516764892cf41608a001e00e0cc7bb8
	  System UUID:                dc77ee49-027d-ec48-b8b1-154ba9e0a06a
	  Boot ID:                    bc9f9fd7-7b85-42f6-abac-952a5e1b37b8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-4m5vg       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-proxy-g8mbm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 19m                    kube-proxy       
	  Normal  Starting                 8m30s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  19m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19m (x2 over 19m)      kubelet          Node multinode-289800-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x2 over 19m)      kubelet          Node multinode-289800-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x2 over 19m)      kubelet          Node multinode-289800-m03 status is now: NodeHasSufficientPID
	  Normal  NodeReady                19m                    kubelet          Node multinode-289800-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  8m34s (x2 over 8m34s)  kubelet          Node multinode-289800-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m34s (x2 over 8m34s)  kubelet          Node multinode-289800-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m34s (x2 over 8m34s)  kubelet          Node multinode-289800-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8m29s                  node-controller  Node multinode-289800-m03 event: Registered Node multinode-289800-m03 in Controller
	  Normal  NodeReady                8m27s                  kubelet          Node multinode-289800-m03 status is now: NodeReady
	  Normal  NodeNotReady             6m49s                  node-controller  Node multinode-289800-m03 status is now: NodeNotReady
	  Normal  RegisteredNode           3m43s                  node-controller  Node multinode-289800-m03 event: Registered Node multinode-289800-m03 in Controller
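	
	Note: the Unknown conditions with reason NodeStatusUnknown on multinode-289800-m02 and -m03 mean the node-lifecycle controller marked those nodes unreachable after their kubelets stopped renewing leases, which is also why both carry node.kubernetes.io/unreachable taints. A minimal client-go sketch, assuming a kubeconfig at the default path points at this cluster (it is not part of the test tooling), that reproduces this condition table programmatically:
	
	package main
	
	import (
		"context"
		"fmt"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Load ~/.kube/config; selecting the multinode-289800 context is assumed.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
	
		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			for _, c := range n.Status.Conditions {
				// Ready=Unknown with reason NodeStatusUnknown is what
				// "describe nodes" rendered above for m02 and m03.
				fmt.Printf("%-25s %-16s %-8s %s\n", n.Name, c.Type, c.Status, c.Reason)
			}
		}
	}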
	
	
	==> dmesg <==
	[  +5.683380] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[May 1 04:14] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.282885] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +7.215175] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +49.815364] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.200985] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[May 1 04:15] systemd-fstab-generator[972]: Ignoring "noauto" option for root device
	[  +0.127967] kauditd_printk_skb: 73 callbacks suppressed
	[  +0.582263] systemd-fstab-generator[1012]: Ignoring "noauto" option for root device
	[  +0.225161] systemd-fstab-generator[1023]: Ignoring "noauto" option for root device
	[  +0.250911] systemd-fstab-generator[1037]: Ignoring "noauto" option for root device
	[  +3.012463] systemd-fstab-generator[1226]: Ignoring "noauto" option for root device
	[  +0.224116] systemd-fstab-generator[1238]: Ignoring "noauto" option for root device
	[  +0.208959] systemd-fstab-generator[1250]: Ignoring "noauto" option for root device
	[  +0.295566] systemd-fstab-generator[1265]: Ignoring "noauto" option for root device
	[  +0.942002] systemd-fstab-generator[1376]: Ignoring "noauto" option for root device
	[  +0.104482] kauditd_printk_skb: 205 callbacks suppressed
	[  +4.196160] systemd-fstab-generator[1518]: Ignoring "noauto" option for root device
	[  +1.305789] kauditd_printk_skb: 44 callbacks suppressed
	[  +5.930267] kauditd_printk_skb: 30 callbacks suppressed
	[  +4.234940] systemd-fstab-generator[2337]: Ignoring "noauto" option for root device
	[  +7.700271] kauditd_printk_skb: 70 callbacks suppressed
	[May 1 04:17] hrtimer: interrupt took 612617 ns
	
	
	==> etcd [34892fdb6898] <==
	{"level":"info","ts":"2024-05-01T04:15:39.166004Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-01T04:15:39.166021Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-01T04:15:39.169808Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 switched to configuration voters=(18322960513081266534)"}
	{"level":"info","ts":"2024-05-01T04:15:39.1699Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d720844a1e03b483","local-member-id":"fe483b81e7b7d166","added-peer-id":"fe483b81e7b7d166","added-peer-peer-urls":["https://172.28.209.152:2380"]}
	{"level":"info","ts":"2024-05-01T04:15:39.172064Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d720844a1e03b483","local-member-id":"fe483b81e7b7d166","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-01T04:15:39.172365Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-01T04:15:39.184058Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-01T04:15:39.184564Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"fe483b81e7b7d166","initial-advertise-peer-urls":["https://172.28.209.199:2380"],"listen-peer-urls":["https://172.28.209.199:2380"],"advertise-client-urls":["https://172.28.209.199:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.28.209.199:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-01T04:15:39.184741Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-01T04:15:39.185843Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.28.209.199:2380"}
	{"level":"info","ts":"2024-05-01T04:15:39.185973Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.28.209.199:2380"}
	{"level":"info","ts":"2024-05-01T04:15:40.708419Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-01T04:15:40.70848Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-01T04:15:40.708514Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 received MsgPreVoteResp from fe483b81e7b7d166 at term 2"}
	{"level":"info","ts":"2024-05-01T04:15:40.70853Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 became candidate at term 3"}
	{"level":"info","ts":"2024-05-01T04:15:40.708552Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 received MsgVoteResp from fe483b81e7b7d166 at term 3"}
	{"level":"info","ts":"2024-05-01T04:15:40.708562Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fe483b81e7b7d166 became leader at term 3"}
	{"level":"info","ts":"2024-05-01T04:15:40.708576Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fe483b81e7b7d166 elected leader fe483b81e7b7d166 at term 3"}
	{"level":"info","ts":"2024-05-01T04:15:40.716912Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"fe483b81e7b7d166","local-member-attributes":"{Name:multinode-289800 ClientURLs:[https://172.28.209.199:2379]}","request-path":"/0/members/fe483b81e7b7d166/attributes","cluster-id":"d720844a1e03b483","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-01T04:15:40.717064Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-01T04:15:40.724343Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-01T04:15:40.729592Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.28.209.199:2379"}
	{"level":"info","ts":"2024-05-01T04:15:40.730744Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-01T04:15:40.731057Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-01T04:15:40.732147Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
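	
	Note: the raft lines above show the single member fe483b81e7b7d166 moving from term 2 to term 3 and electing itself leader, the expected sequence when a one-node etcd restarts. A minimal sketch, assuming the official go.etcd.io/etcd/client/v3 package and the advertised client URL from the log (a real connection would also need Config.TLS built from the server.crt/ca.crt paths the log names):
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		clientv3 "go.etcd.io/etcd/client/v3"
	)
	
	func main() {
		// Endpoint taken from the "advertise-client-urls" log line above.
		endpoint := "https://172.28.209.199:2379"
		cli, err := clientv3.New(clientv3.Config{
			Endpoints:   []string{endpoint},
			DialTimeout: 5 * time.Second,
		})
		if err != nil {
			panic(err)
		}
		defer cli.Close()
	
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		st, err := cli.Status(ctx, endpoint)
		if err != nil {
			panic(err)
		}
		// Against the cluster logged above this should print
		// leader=fe483b81e7b7d166 raftTerm=3.
		fmt.Printf("leader=%x raftTerm=%d\n", st.Leader, st.RaftTerm)
	}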
	
	
	==> kernel <==
	 04:19:38 up 5 min,  0 users,  load average: 0.42, 0.36, 0.17
	Linux multinode-289800 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [6d5f881ef398] <==
	I0501 04:12:20.179871       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:12:30.195829       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:12:30.196251       1 main.go:227] handling current node
	I0501 04:12:30.196390       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:12:30.196494       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:12:30.197097       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:12:30.197115       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:12:40.209828       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:12:40.210095       1 main.go:227] handling current node
	I0501 04:12:40.210203       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:12:40.210235       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:12:40.210464       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:12:40.210571       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:12:50.223457       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:12:50.224132       1 main.go:227] handling current node
	I0501 04:12:50.224156       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:12:50.224167       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:12:50.224602       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:12:50.224704       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:13:00.241709       1 main.go:223] Handling node with IPs: map[172.28.209.152:{}]
	I0501 04:13:00.241841       1 main.go:227] handling current node
	I0501 04:13:00.242114       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:13:00.242393       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:13:00.242840       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:13:00.242886       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [b7cae3f6b88b] <==
	I0501 04:18:56.030666       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:19:06.038978       1 main.go:223] Handling node with IPs: map[172.28.209.199:{}]
	I0501 04:19:06.039059       1 main.go:227] handling current node
	I0501 04:19:06.039073       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:19:06.039328       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:19:06.039692       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:19:06.039721       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:19:16.049883       1 main.go:223] Handling node with IPs: map[172.28.209.199:{}]
	I0501 04:19:16.049933       1 main.go:227] handling current node
	I0501 04:19:16.049947       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:19:16.049955       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:19:16.050406       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:19:16.050503       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:19:26.066603       1 main.go:223] Handling node with IPs: map[172.28.209.199:{}]
	I0501 04:19:26.066666       1 main.go:227] handling current node
	I0501 04:19:26.066680       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:19:26.066687       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:19:26.067439       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:19:26.067543       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	I0501 04:19:36.081848       1 main.go:223] Handling node with IPs: map[172.28.209.199:{}]
	I0501 04:19:36.081893       1 main.go:227] handling current node
	I0501 04:19:36.081963       1 main.go:223] Handling node with IPs: map[172.28.219.162:{}]
	I0501 04:19:36.081978       1 main.go:250] Node multinode-289800-m02 has CIDR [10.244.1.0/24] 
	I0501 04:19:36.082271       1 main.go:223] Handling node with IPs: map[172.28.223.145:{}]
	I0501 04:19:36.082289       1 main.go:250] Node multinode-289800-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [18cd30f3ad28] <==
	I0501 04:15:42.496145       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0501 04:15:42.510644       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0501 04:15:42.510702       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0501 04:15:42.510859       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0501 04:15:42.518082       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0501 04:15:42.518718       1 aggregator.go:165] initial CRD sync complete...
	I0501 04:15:42.518822       1 autoregister_controller.go:141] Starting autoregister controller
	I0501 04:15:42.518833       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0501 04:15:42.518839       1 cache.go:39] Caches are synced for autoregister controller
	I0501 04:15:42.535654       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0501 04:15:42.538744       1 shared_informer.go:320] Caches are synced for configmaps
	I0501 04:15:42.553249       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0501 04:15:42.558886       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0501 04:15:42.560982       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0501 04:15:42.561020       1 policy_source.go:224] refreshing policies
	I0501 04:15:42.641630       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0501 04:15:43.354880       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0501 04:15:43.981051       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.28.209.199]
	I0501 04:15:43.982709       1 controller.go:615] quota admission added evaluator for: endpoints
	I0501 04:15:44.022518       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0501 04:15:45.344677       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0501 04:15:45.642753       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0501 04:15:45.672938       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0501 04:15:45.801984       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0501 04:15:45.823813       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [4b62556f40be] <==
	I0501 03:52:43.686562       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0501 03:55:27.159233       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-289800-m02\" does not exist"
	I0501 03:55:27.216693       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-289800-m02" podCIDRs=["10.244.1.0/24"]
	I0501 03:55:28.718620       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-289800-m02"
	I0501 03:55:50.611680       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 03:56:17.356814       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="84.46504ms"
	I0501 03:56:17.371366       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.143719ms"
	I0501 03:56:17.372124       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="142.3µs"
	I0501 03:56:17.379164       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.7µs"
	I0501 03:56:19.725403       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.097702ms"
	I0501 03:56:19.728196       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="2.611719ms"
	I0501 03:56:19.839218       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.233167ms"
	I0501 03:56:19.839355       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.1µs"
	I0501 04:00:13.644614       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-289800-m03\" does not exist"
	I0501 04:00:13.644755       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:00:13.661934       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-289800-m03" podCIDRs=["10.244.2.0/24"]
	I0501 04:00:13.802230       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-289800-m03"
	I0501 04:00:36.640421       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:08:13.948279       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:10:57.898286       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:11:04.117706       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:11:04.120427       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-289800-m03\" does not exist"
	I0501 04:11:04.128942       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-289800-m03" podCIDRs=["10.244.3.0/24"]
	I0501 04:11:11.358226       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:12:49.097072       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	
	
	==> kube-controller-manager [66a1b89e6733] <==
	I0501 04:15:55.562434       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0501 04:15:55.574228       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0501 04:15:55.576283       1 shared_informer.go:320] Caches are synced for disruption
	I0501 04:15:55.610948       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.488314ms"
	I0501 04:15:55.611568       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="71.799µs"
	I0501 04:15:55.619708       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="45.171745ms"
	I0501 04:15:55.620238       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="472.596µs"
	I0501 04:15:55.628824       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0501 04:15:55.650837       1 shared_informer.go:320] Caches are synced for resource quota
	I0501 04:15:55.657374       1 shared_informer.go:320] Caches are synced for endpoint
	I0501 04:15:55.685503       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0501 04:15:55.700006       1 shared_informer.go:320] Caches are synced for resource quota
	I0501 04:15:56.136638       1 shared_informer.go:320] Caches are synced for garbage collector
	I0501 04:15:56.136685       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0501 04:15:56.152886       1 shared_informer.go:320] Caches are synced for garbage collector
	I0501 04:16:16.638494       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-289800-m02"
	I0501 04:16:35.670965       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.004646ms"
	I0501 04:16:35.674472       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.702µs"
	I0501 04:16:49.079199       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="127.703µs"
	I0501 04:16:49.148697       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="24.735082ms"
	I0501 04:16:49.149307       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="110.503µs"
	I0501 04:16:49.187683       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.244247ms"
	I0501 04:16:49.188221       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.9µs"
	I0501 04:16:49.221273       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="25.255693ms"
	I0501 04:16:49.221694       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="88.902µs"
	
	
	==> kube-proxy [3efcc92f817e] <==
	I0501 04:15:45.132138       1 server_linux.go:69] "Using iptables proxy"
	I0501 04:15:45.231202       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.28.209.199"]
	I0501 04:15:45.502838       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0501 04:15:45.506945       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0501 04:15:45.506980       1 server_linux.go:165] "Using iptables Proxier"
	I0501 04:15:45.527138       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0501 04:15:45.530735       1 server.go:872] "Version info" version="v1.30.0"
	I0501 04:15:45.530796       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:15:45.533247       1 config.go:192] "Starting service config controller"
	I0501 04:15:45.547850       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0501 04:15:45.533551       1 config.go:101] "Starting endpoint slice config controller"
	I0501 04:15:45.549105       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0501 04:15:45.550003       1 config.go:319] "Starting node config controller"
	I0501 04:15:45.550016       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0501 04:15:45.650245       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0501 04:15:45.650488       1 shared_informer.go:320] Caches are synced for node config
	I0501 04:15:45.650691       1 shared_informer.go:320] Caches are synced for service config
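	
	Note: the route_localnet line above documents a real kernel toggle: kube-proxy sets net.ipv4.conf.all.route_localnet=1 so NodePort services answer on 127.0.0.1, and it names --iptables-localhost-nodeports=false or --nodeport-addresses as the ways to avoid that. A minimal sketch of the underlying sysctl write (Linux only, needs root; illustrative, not minikube code):
	
	package main
	
	import (
		"fmt"
		"os"
	)
	
	func main() {
		const key = "/proc/sys/net/ipv4/conf/all/route_localnet"
	
		// "1" lets the kernel route packets destined to 127.0.0.0/8,
		// which is what makes NodePorts reachable on localhost.
		if err := os.WriteFile(key, []byte("1\n"), 0o644); err != nil {
			fmt.Fprintln(os.Stderr, "sysctl write failed (needs root, Linux):", err)
			os.Exit(1)
		}
		cur, err := os.ReadFile(key)
		if err != nil {
			panic(err)
		}
		fmt.Printf("route_localnet = %s", cur)
	}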
	
	
	==> kube-proxy [502684407b0c] <==
	I0501 03:52:31.254714       1 server_linux.go:69] "Using iptables proxy"
	I0501 03:52:31.309383       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.28.209.152"]
	I0501 03:52:31.368810       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0501 03:52:31.368955       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0501 03:52:31.368982       1 server_linux.go:165] "Using iptables Proxier"
	I0501 03:52:31.375383       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0501 03:52:31.376367       1 server.go:872] "Version info" version="v1.30.0"
	I0501 03:52:31.376406       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 03:52:31.379637       1 config.go:192] "Starting service config controller"
	I0501 03:52:31.380342       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0501 03:52:31.380587       1 config.go:101] "Starting endpoint slice config controller"
	I0501 03:52:31.380650       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0501 03:52:31.383140       1 config.go:319] "Starting node config controller"
	I0501 03:52:31.383173       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0501 03:52:31.480698       1 shared_informer.go:320] Caches are synced for service config
	I0501 03:52:31.481316       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0501 03:52:31.483428       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [06f1f84bfde1] <==
	E0501 03:52:13.194526       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0501 03:52:13.234721       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0501 03:52:13.235310       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0501 03:52:13.292208       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0501 03:52:13.292830       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0501 03:52:13.389881       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0501 03:52:13.390057       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0501 03:52:13.433548       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0501 03:52:13.433622       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0501 03:52:13.511617       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0501 03:52:13.511761       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0501 03:52:13.522760       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0501 03:52:13.522812       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0501 03:52:13.723200       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0501 03:52:13.723365       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0501 03:52:13.767195       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0501 03:52:13.767262       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0501 03:52:13.799936       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0501 03:52:13.799967       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0501 03:52:13.840187       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0501 03:52:13.840304       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0501 03:52:13.853401       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0501 03:52:13.853454       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0501 03:52:16.553388       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0501 04:13:09.401188       1 run.go:74] "command failed" err="finished without leader elect"
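(The burst of list/watch "forbidden" errors above is the scheduler's informers racing the API server's RBAC bootstrap after the restart; once the default roles are re-created, the cache sync at 03:52:16 succeeds. If they persisted, the scheduler's effective permissions could be probed with kubectl's standard impersonation support -- a hedged spot-check, with the context name taken from this test:

	kubectl --context multinode-289800 auth can-i list pods --as=system:kube-scheduler
	kubectl --context multinode-289800 auth can-i watch csinodes.storage.k8s.io --as=system:kube-scheduler

Both commands print yes/no and exit accordingly, so they also work in scripts.)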
	
	
	==> kube-scheduler [eaf69fce5ee3] <==
	I0501 04:15:39.300694       1 serving.go:380] Generated self-signed cert in-memory
	W0501 04:15:42.419811       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0501 04:15:42.419988       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0501 04:15:42.420417       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0501 04:15:42.420580       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0501 04:15:42.513199       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0501 04:15:42.513509       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 04:15:42.517575       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0501 04:15:42.517756       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0501 04:15:42.519360       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0501 04:15:42.519606       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0501 04:15:42.619527       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
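(The requestheader_controller warning at 04:15:42 prints a template fix. Instantiated for the scheduler it would look like the sketch below; the binding name is illustrative, and because kube-scheduler authenticates as the user system:kube-scheduler rather than a service account, --user stands in for the template's --serviceaccount flag. The warning is transient here and clears on its own, so this is shown only to make the template concrete:

	kubectl -n kube-system create rolebinding scheduler-authn-reader \
	  --role=extension-apiserver-authentication-reader \
	  --user=system:kube-scheduler
)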
	
	
	==> kubelet <==
	May 01 04:16:37 multinode-289800 kubelet[1525]: E0501 04:16:37.057362    1525 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 04:16:37 multinode-289800 kubelet[1525]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 04:16:37 multinode-289800 kubelet[1525]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 04:16:37 multinode-289800 kubelet[1525]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 04:16:37 multinode-289800 kubelet[1525]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 04:16:37 multinode-289800 kubelet[1525]: I0501 04:16:37.089866    1525 scope.go:117] "RemoveContainer" containerID="bbbe9bf276852c1e75b7b472a87e95dcf9a0871f6273a4c312d445eb91dfe06d"
	May 01 04:16:37 multinode-289800 kubelet[1525]: E0501 04:16:37.204127    1525 kuberuntime_manager.go:1450] "PodSandboxStatus of sandbox for pod" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 976a9ff433ccbe7ec8d65d4f2797a7885430504d883ab65524674ad5ab989737" podSandboxID="976a9ff433ccbe7ec8d65d4f2797a7885430504d883ab65524674ad5ab989737" pod="kube-system/kube-apiserver-multinode-289800"
	May 01 04:16:37 multinode-289800 kubelet[1525]: E0501 04:16:37.204257    1525 generic.go:453] "PLEG: Write status" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 976a9ff433ccbe7ec8d65d4f2797a7885430504d883ab65524674ad5ab989737" pod="kube-system/kube-apiserver-multinode-289800"
	May 01 04:16:47 multinode-289800 kubelet[1525]: I0501 04:16:47.967198    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c1e1e1d13f303dcd2ce93f0a883ff4415e684c864a3974a393b2aaba3328348"
	May 01 04:16:48 multinode-289800 kubelet[1525]: I0501 04:16:48.001452    1525 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba9a40d190b009b916e22db66996ed829a6cc973db25f55dae89d747629a546b"
	May 01 04:17:37 multinode-289800 kubelet[1525]: E0501 04:17:37.061630    1525 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 04:17:37 multinode-289800 kubelet[1525]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 04:17:37 multinode-289800 kubelet[1525]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 04:17:37 multinode-289800 kubelet[1525]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 04:17:37 multinode-289800 kubelet[1525]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 04:18:37 multinode-289800 kubelet[1525]: E0501 04:18:37.055536    1525 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 04:18:37 multinode-289800 kubelet[1525]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 04:18:37 multinode-289800 kubelet[1525]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 04:18:37 multinode-289800 kubelet[1525]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 04:18:37 multinode-289800 kubelet[1525]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 01 04:19:37 multinode-289800 kubelet[1525]: E0501 04:19:37.055373    1525 iptables.go:577] "Could not set up iptables canary" err=<
	May 01 04:19:37 multinode-289800 kubelet[1525]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 01 04:19:37 multinode-289800 kubelet[1525]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 01 04:19:37 multinode-289800 kubelet[1525]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 01 04:19:37 multinode-289800 kubelet[1525]:  > table="nat" chain="KUBE-KUBELET-CANARY"
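(The hourly iptables-canary failures above all reduce to the same cause reported by ip6tables: no IPv6 nat table is available in the guest kernel, and the "insmod" hint points at the usual suspect, the ip6table_nat module. A hedged way to confirm from the host, using this test's profile -- minikube ssh and the guest commands are standard, but the module name is an assumption, not something the log verifies:

	out/minikube-windows-amd64.exe -p multinode-289800 ssh -- "lsmod | grep ip6table; sudo ip6tables -t nat -L -n"

The kubelet logs the canary failure as an error but keeps running, consistent with the rest of this log.)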
	

-- /stdout --
** stderr ** 
	W0501 04:19:27.424911   13388 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-289800 -n multinode-289800
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-289800 -n multinode-289800: (12.352167s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-289800 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (488.48s)

TestPreload (578.63s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-597400 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
E0501 04:23:38.032894   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\client.crt: The system cannot find the path specified.
E0501 04:26:34.997286   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt: The system cannot find the path specified.
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-597400 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4: (4m32.1932537s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-597400 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-597400 image pull gcr.io/k8s-minikube/busybox: (8.5630668s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-597400
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-597400: (39.3685495s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-597400 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv
E0501 04:28:21.286659   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\client.crt: The system cannot find the path specified.
E0501 04:28:38.037617   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\client.crt: The system cannot find the path specified.
preload_test.go:66: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p test-preload-597400 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv: exit status 90 (3m4.5304864s)
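(The failing sequence can be replayed by hand; every command below is taken verbatim from the steps above, minus the --alsologtostderr/-v logging flags. The first start forces real image pulls on v1.24.4 via --preload=false; the final start is expected to consume the v1.24.4 preload tarball instead, and is the step that exited with status 90 here:

	out/minikube-windows-amd64.exe start -p test-preload-597400 --memory=2200 --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
	out/minikube-windows-amd64.exe -p test-preload-597400 image pull gcr.io/k8s-minikube/busybox
	out/minikube-windows-amd64.exe stop -p test-preload-597400
	out/minikube-windows-amd64.exe start -p test-preload-597400 --memory=2200 --wait=true --driver=hyperv
)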

-- stdout --
	* [test-preload-597400] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18779
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	* Using the hyperv driver based on existing profile
	* Starting "test-preload-597400" primary control-plane node in "test-preload-597400" cluster
	* Downloading Kubernetes v1.24.4 preload ...
	* Restarting existing hyperv VM for "test-preload-597400" ...
	
	

-- /stdout --
** stderr ** 
	W0501 04:27:43.404640    8504 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0501 04:27:43.485837    8504 out.go:291] Setting OutFile to fd 984 ...
	I0501 04:27:43.486520    8504 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 04:27:43.486520    8504 out.go:304] Setting ErrFile to fd 788...
	I0501 04:27:43.486520    8504 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 04:27:43.509163    8504 out.go:298] Setting JSON to false
	I0501 04:27:43.513416    8504 start.go:129] hostinfo: {"hostname":"minikube6","uptime":110717,"bootTime":1714426945,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0501 04:27:43.513416    8504 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0501 04:27:43.597022    8504 out.go:177] * [test-preload-597400] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0501 04:27:43.748480    8504 notify.go:220] Checking for updates...
	I0501 04:27:43.797578    8504 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0501 04:27:43.947160    8504 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0501 04:27:44.094388    8504 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0501 04:27:44.210360    8504 out.go:177]   - MINIKUBE_LOCATION=18779
	I0501 04:27:44.341766    8504 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0501 04:27:44.459178    8504 config.go:182] Loaded profile config "test-preload-597400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.24.4
	I0501 04:27:44.556405    8504 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0501 04:27:44.638214    8504 driver.go:392] Setting default libvirt URI to qemu:///system
	I0501 04:27:50.243333    8504 out.go:177] * Using the hyperv driver based on existing profile
	I0501 04:27:50.401862    8504 start.go:297] selected driver: hyperv
	I0501 04:27:50.401862    8504 start.go:901] validating driver "hyperv" against &{Name:test-preload-597400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-597400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.212.230 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 04:27:50.402183    8504 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0501 04:27:50.461199    8504 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 04:27:50.461199    8504 cni.go:84] Creating CNI manager for ""
	I0501 04:27:50.461199    8504 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0501 04:27:50.461199    8504 start.go:340] cluster config:
	{Name:test-preload-597400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-597400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.212.230 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 04:27:50.461199    8504 iso.go:125] acquiring lock: {Name:mkc5178610d1c169635b8b232f2713c359020679 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 04:27:50.597365    8504 out.go:177] * Starting "test-preload-597400" primary control-plane node in "test-preload-597400" cluster
	I0501 04:27:50.748615    8504 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0501 04:27:50.792648    8504 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-docker-overlay2-amd64.tar.lz4
	I0501 04:27:50.792749    8504 cache.go:56] Caching tarball of preloaded images
	I0501 04:27:50.792947    8504 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0501 04:27:50.814975    8504 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0501 04:27:50.856378    8504 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.24.4-docker-overlay2-amd64.tar.lz4 ...
	I0501 04:27:50.929530    8504 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-docker-overlay2-amd64.tar.lz4?checksum=md5:20cbd62a1b5d1968f21881a4a0f4f59e -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.4-docker-overlay2-amd64.tar.lz4
	I0501 04:27:55.891206    8504 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.24.4-docker-overlay2-amd64.tar.lz4 ...
	I0501 04:27:55.892192    8504 preload.go:255] verifying checksum of C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.4-docker-overlay2-amd64.tar.lz4 ...
	I0501 04:27:57.028052    8504 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on docker
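(The download at 04:27:50 carries a ?checksum=md5:... query that the downloader verifies before the tarball is cached -- that is the saving/verifying pair at 04:27:55-57 above. The same check can be reproduced with standard tools; the URL and digest are taken from the log, the local filename is illustrative, and md5sum -c expects two spaces between digest and name:

	curl -fLo preload.tar.lz4 "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-docker-overlay2-amd64.tar.lz4"
	echo "20cbd62a1b5d1968f21881a4a0f4f59e  preload.tar.lz4" | md5sum -c -
)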
	I0501 04:27:57.028281    8504 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\test-preload-597400\config.json ...
	I0501 04:27:57.030149    8504 start.go:360] acquireMachinesLock for test-preload-597400: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 04:27:57.031063    8504 start.go:364] duration metric: took 913.7µs to acquireMachinesLock for "test-preload-597400"
	I0501 04:27:57.031063    8504 start.go:96] Skipping create...Using existing machine configuration
	I0501 04:27:57.031063    8504 fix.go:54] fixHost starting: 
	I0501 04:27:57.031063    8504 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-597400 ).state
	I0501 04:27:59.693591    8504 main.go:141] libmachine: [stdout =====>] : Off
	
	I0501 04:27:59.693664    8504 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:27:59.693664    8504 fix.go:112] recreateIfNeeded on test-preload-597400: state=Stopped err=<nil>
	W0501 04:27:59.693664    8504 fix.go:138] unexpected machine state, will restart: <nil>
	I0501 04:27:59.713458    8504 out.go:177] * Restarting existing hyperv VM for "test-preload-597400" ...
	I0501 04:27:59.797912    8504 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM test-preload-597400
	I0501 04:28:03.008336    8504 main.go:141] libmachine: [stdout =====>] : 
	I0501 04:28:03.008336    8504 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:28:03.008336    8504 main.go:141] libmachine: Waiting for host to start...
	I0501 04:28:03.008336    8504 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-597400 ).state
	I0501 04:28:05.274455    8504 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:28:05.275393    8504 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:28:05.275497    8504 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-597400 ).networkadapters[0]).ipaddresses[0]
	I0501 04:28:07.819127    8504 main.go:141] libmachine: [stdout =====>] : 
	I0501 04:28:07.819185    8504 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:28:08.823505    8504 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-597400 ).state
	I0501 04:28:10.961146    8504 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:28:10.961146    8504 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:28:10.961146    8504 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-597400 ).networkadapters[0]).ipaddresses[0]
	I0501 04:28:13.503956    8504 main.go:141] libmachine: [stdout =====>] : 
	I0501 04:28:13.503956    8504 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:28:14.509143    8504 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-597400 ).state
	I0501 04:28:16.680837    8504 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:28:16.680837    8504 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:28:16.680945    8504 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-597400 ).networkadapters[0]).ipaddresses[0]
	I0501 04:28:19.205143    8504 main.go:141] libmachine: [stdout =====>] : 
	I0501 04:28:19.205340    8504 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:28:20.220419    8504 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-597400 ).state
	I0501 04:28:22.411480    8504 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:28:22.411626    8504 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:28:22.411687    8504 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-597400 ).networkadapters[0]).ipaddresses[0]
	I0501 04:28:24.901372    8504 main.go:141] libmachine: [stdout =====>] : 
	I0501 04:28:24.901576    8504 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:28:25.913834    8504 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-597400 ).state
	I0501 04:28:28.036785    8504 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:28:28.036785    8504 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:28:28.036858    8504 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-597400 ).networkadapters[0]).ipaddresses[0]
	I0501 04:28:30.576179    8504 main.go:141] libmachine: [stdout =====>] : 172.28.215.255
	
	I0501 04:28:30.576179    8504 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:28:30.580254    8504 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-597400 ).state
	I0501 04:28:32.671495    8504 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:28:32.671495    8504 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:28:32.671495    8504 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-597400 ).networkadapters[0]).ipaddresses[0]
	I0501 04:28:35.202030    8504 main.go:141] libmachine: [stdout =====>] : 172.28.215.255
	
	I0501 04:28:35.202803    8504 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:28:35.203050    8504 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\test-preload-597400\config.json ...
	I0501 04:28:35.205851    8504 machine.go:94] provisionDockerMachine start ...
	I0501 04:28:35.206190    8504 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-597400 ).state
	I0501 04:28:37.313232    8504 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:28:37.313232    8504 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:28:37.313232    8504 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-597400 ).networkadapters[0]).ipaddresses[0]
	I0501 04:28:39.885595    8504 main.go:141] libmachine: [stdout =====>] : 172.28.215.255
	
	I0501 04:28:39.886331    8504 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:28:39.895922    8504 main.go:141] libmachine: Using SSH client type: native
	I0501 04:28:39.896075    8504 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.215.255 22 <nil> <nil>}
	I0501 04:28:39.896075    8504 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 04:28:40.020176    8504 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0501 04:28:40.020376    8504 buildroot.go:166] provisioning hostname "test-preload-597400"
	I0501 04:28:40.020501    8504 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-597400 ).state
	I0501 04:28:42.076110    8504 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:28:42.076988    8504 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:28:42.076988    8504 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-597400 ).networkadapters[0]).ipaddresses[0]
	I0501 04:28:44.564137    8504 main.go:141] libmachine: [stdout =====>] : 172.28.215.255
	
	I0501 04:28:44.564137    8504 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:28:44.571808    8504 main.go:141] libmachine: Using SSH client type: native
	I0501 04:28:44.571995    8504 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.215.255 22 <nil> <nil>}
	I0501 04:28:44.571995    8504 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-597400 && echo "test-preload-597400" | sudo tee /etc/hostname
	I0501 04:28:44.724193    8504 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-597400
	
	I0501 04:28:44.724193    8504 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-597400 ).state
	I0501 04:28:46.795130    8504 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:28:46.795190    8504 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:28:46.795190    8504 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-597400 ).networkadapters[0]).ipaddresses[0]
	I0501 04:28:49.345517    8504 main.go:141] libmachine: [stdout =====>] : 172.28.215.255
	
	I0501 04:28:49.345576    8504 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:28:49.350598    8504 main.go:141] libmachine: Using SSH client type: native
	I0501 04:28:49.351325    8504 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.215.255 22 <nil> <nil>}
	I0501 04:28:49.351325    8504 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-597400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-597400/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-597400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 04:28:49.503194    8504 main.go:141] libmachine: SSH cmd err, output: <nil>: 
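(The SSH script above is an idempotent /etc/hosts edit: the outer grep -xq checks whether some line already ends in the hostname, and only if not does the script either rewrite an existing 127.0.1.1 entry in place or append a new one, so re-running it never duplicates the mapping. A hedged spot-check from the host, with the profile name taken from this log:

	out/minikube-windows-amd64.exe -p test-preload-597400 ssh -- "grep -w test-preload-597400 /etc/hosts"
)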
	I0501 04:28:49.503194    8504 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0501 04:28:49.503194    8504 buildroot.go:174] setting up certificates
	I0501 04:28:49.503194    8504 provision.go:84] configureAuth start
	I0501 04:28:49.503194    8504 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-597400 ).state
	I0501 04:28:51.623013    8504 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:28:51.624072    8504 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:28:51.624151    8504 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-597400 ).networkadapters[0]).ipaddresses[0]
	I0501 04:28:54.222177    8504 main.go:141] libmachine: [stdout =====>] : 172.28.215.255
	
	I0501 04:28:54.222345    8504 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:28:54.222404    8504 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-597400 ).state
	I0501 04:28:56.260712    8504 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:28:56.260712    8504 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:28:56.260712    8504 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-597400 ).networkadapters[0]).ipaddresses[0]
	I0501 04:28:58.727949    8504 main.go:141] libmachine: [stdout =====>] : 172.28.215.255
	
	I0501 04:28:58.727949    8504 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:28:58.727949    8504 provision.go:143] copyHostCerts
	I0501 04:28:58.727949    8504 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0501 04:28:58.727949    8504 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0501 04:28:58.728796    8504 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0501 04:28:58.730338    8504 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0501 04:28:58.730338    8504 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0501 04:28:58.730747    8504 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0501 04:28:58.732085    8504 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0501 04:28:58.732085    8504 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0501 04:28:58.732389    8504 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0501 04:28:58.733379    8504 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.test-preload-597400 san=[127.0.0.1 172.28.215.255 localhost minikube test-preload-597400]
	I0501 04:28:58.921394    8504 provision.go:177] copyRemoteCerts
	I0501 04:28:58.933463    8504 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 04:28:58.933463    8504 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-597400 ).state
	I0501 04:29:01.025372    8504 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:29:01.025457    8504 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:29:01.025536    8504 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-597400 ).networkadapters[0]).ipaddresses[0]
	I0501 04:29:03.562696    8504 main.go:141] libmachine: [stdout =====>] : 172.28.215.255
	
	I0501 04:29:03.562696    8504 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:29:03.563701    8504 sshutil.go:53] new ssh client: &{IP:172.28.215.255 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\test-preload-597400\id_rsa Username:docker}
	I0501 04:29:03.666911    8504 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7334118s)
	I0501 04:29:03.667659    8504 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 04:29:03.717907    8504 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1224 bytes)
	I0501 04:29:03.772638    8504 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0501 04:29:03.823896    8504 provision.go:87] duration metric: took 14.3205928s to configureAuth
	I0501 04:29:03.823896    8504 buildroot.go:189] setting minikube options for container-runtime
	I0501 04:29:03.824631    8504 config.go:182] Loaded profile config "test-preload-597400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.24.4
	I0501 04:29:03.824631    8504 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-597400 ).state
	I0501 04:29:05.894020    8504 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:29:05.894424    8504 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:29:05.894424    8504 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-597400 ).networkadapters[0]).ipaddresses[0]
	I0501 04:29:08.441338    8504 main.go:141] libmachine: [stdout =====>] : 172.28.215.255
	
	I0501 04:29:08.441518    8504 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:29:08.448060    8504 main.go:141] libmachine: Using SSH client type: native
	I0501 04:29:08.448060    8504 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.215.255 22 <nil> <nil>}
	I0501 04:29:08.448639    8504 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0501 04:29:08.588313    8504 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0501 04:29:08.588475    8504 buildroot.go:70] root file system type: tmpfs
	I0501 04:29:08.588683    8504 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0501 04:29:08.588683    8504 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-597400 ).state
	I0501 04:29:10.691384    8504 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:29:10.691384    8504 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:29:10.691501    8504 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-597400 ).networkadapters[0]).ipaddresses[0]
	I0501 04:29:13.252172    8504 main.go:141] libmachine: [stdout =====>] : 172.28.215.255
	
	I0501 04:29:13.252172    8504 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:29:13.259256    8504 main.go:141] libmachine: Using SSH client type: native
	I0501 04:29:13.259842    8504 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.215.255 22 <nil> <nil>}
	I0501 04:29:13.259992    8504 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0501 04:29:13.416955    8504 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
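(The paired ExecStart= lines, explained by the unit's own comments above, are a general systemd override pattern rather than anything minikube-specific: an empty ExecStart= clears the command inherited from the base unit so that the following ExecStart= becomes the only one. A minimal sketch for an arbitrary service, with the unit name and path purely illustrative:

	# drop-in created with: sudo systemctl edit example.service
	[Service]
	ExecStart=
	ExecStart=/usr/bin/example --flag
)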
	
	I0501 04:29:13.416955    8504 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-597400 ).state
	I0501 04:29:15.460190    8504 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:29:15.460190    8504 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:29:15.460594    8504 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-597400 ).networkadapters[0]).ipaddresses[0]
	I0501 04:29:18.012597    8504 main.go:141] libmachine: [stdout =====>] : 172.28.215.255
	
	I0501 04:29:18.012597    8504 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:29:18.019483    8504 main.go:141] libmachine: Using SSH client type: native
	I0501 04:29:18.019483    8504 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.215.255 22 <nil> <nil>}
	I0501 04:29:18.020063    8504 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0501 04:29:20.584791    8504 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
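(The diff || { ... } one-liner above is a write-if-changed idiom: diff -u exits non-zero when the files differ -- or, as here, when the target does not exist yet and cannot be stat'ed -- so the move-and-restart branch runs exactly when an update is needed. Generalized, with paths and service name illustrative:

	sudo diff -u /etc/example.conf /etc/example.conf.new || {
	  sudo mv /etc/example.conf.new /etc/example.conf
	  sudo systemctl daemon-reload && sudo systemctl restart example
	}
)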
	
	I0501 04:29:20.584976    8504 machine.go:97] duration metric: took 45.3785789s to provisionDockerMachine
	I0501 04:29:20.584976    8504 start.go:293] postStartSetup for "test-preload-597400" (driver="hyperv")
	I0501 04:29:20.585079    8504 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 04:29:20.598769    8504 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 04:29:20.598769    8504 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-597400 ).state
	I0501 04:29:22.705897    8504 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:29:22.706100    8504 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:29:22.706186    8504 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-597400 ).networkadapters[0]).ipaddresses[0]
	I0501 04:29:25.243023    8504 main.go:141] libmachine: [stdout =====>] : 172.28.215.255
	
	I0501 04:29:25.243461    8504 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:29:25.243548    8504 sshutil.go:53] new ssh client: &{IP:172.28.215.255 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\test-preload-597400\id_rsa Username:docker}
	I0501 04:29:25.349580    8504 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7507758s)
	I0501 04:29:25.366468    8504 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 04:29:25.374229    8504 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 04:29:25.374353    8504 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0501 04:29:25.375068    8504 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0501 04:29:25.377135    8504 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> 142882.pem in /etc/ssl/certs
	I0501 04:29:25.393090    8504 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 04:29:25.414088    8504 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /etc/ssl/certs/142882.pem (1708 bytes)
	I0501 04:29:25.464174    8504 start.go:296] duration metric: took 4.8791606s for postStartSetup
	I0501 04:29:25.464174    8504 fix.go:56] duration metric: took 1m28.4324387s for fixHost
	I0501 04:29:25.464174    8504 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-597400 ).state
	I0501 04:29:27.549926    8504 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:29:27.550839    8504 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:29:27.550902    8504 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-597400 ).networkadapters[0]).ipaddresses[0]
	I0501 04:29:30.060162    8504 main.go:141] libmachine: [stdout =====>] : 172.28.215.255
	
	I0501 04:29:30.060316    8504 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:29:30.067002    8504 main.go:141] libmachine: Using SSH client type: native
	I0501 04:29:30.067689    8504 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.215.255 22 <nil> <nil>}
	I0501 04:29:30.067689    8504 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0501 04:29:30.210130    8504 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714537770.213720702
	
	I0501 04:29:30.210130    8504 fix.go:216] guest clock: 1714537770.213720702
	I0501 04:29:30.210130    8504 fix.go:229] Guest: 2024-05-01 04:29:30.213720702 +0000 UTC Remote: 2024-05-01 04:29:25.4641741 +0000 UTC m=+102.159951101 (delta=4.749546602s)
	I0501 04:29:30.210130    8504 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-597400 ).state
	I0501 04:29:32.272005    8504 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:29:32.272005    8504 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:29:32.272005    8504 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-597400 ).networkadapters[0]).ipaddresses[0]
	I0501 04:29:34.768038    8504 main.go:141] libmachine: [stdout =====>] : 172.28.215.255
	
	I0501 04:29:34.769142    8504 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:29:34.777033    8504 main.go:141] libmachine: Using SSH client type: native
	I0501 04:29:34.777596    8504 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.215.255 22 <nil> <nil>}
	I0501 04:29:34.777596    8504 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714537770
	I0501 04:29:34.918037    8504 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed May  1 04:29:30 UTC 2024
	
	I0501 04:29:34.918121    8504 fix.go:236] clock set: Wed May  1 04:29:30 UTC 2024
	 (err=<nil>)
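(The fix.go lines above implement a simple guest-clock resync: read date +%s.%N over SSH, diff it against host time -- a 4.749s delta in this run -- and write the host time back with date -s. The same pattern in plain shell, as a hedged sketch; the address and user are the ones this log connects with, and the drift threshold is illustrative:

	host_now=$(date +%s)
	guest_now=$(ssh docker@172.28.215.255 'date +%s')
	delta=$(( guest_now - host_now ))
	# ${delta#-} strips a leading minus sign, i.e. takes the absolute value
	if [ "${delta#-}" -gt 2 ]; then
	  ssh docker@172.28.215.255 "sudo date -s @${host_now}"
	fi
)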
	I0501 04:29:34.918121    8504 start.go:83] releasing machines lock for "test-preload-597400", held for 1m37.8863141s
	I0501 04:29:34.918353    8504 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-597400 ).state
	I0501 04:29:36.962510    8504 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:29:36.962510    8504 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:29:36.963227    8504 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-597400 ).networkadapters[0]).ipaddresses[0]
	I0501 04:29:39.458969    8504 main.go:141] libmachine: [stdout =====>] : 172.28.215.255
	
	I0501 04:29:39.458969    8504 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:29:39.464883    8504 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 04:29:39.464883    8504 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-597400 ).state
	I0501 04:29:39.476616    8504 ssh_runner.go:195] Run: cat /version.json
	I0501 04:29:39.477613    8504 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM test-preload-597400 ).state
	I0501 04:29:41.595448    8504 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:29:41.595448    8504 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:29:41.595539    8504 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-597400 ).networkadapters[0]).ipaddresses[0]
	I0501 04:29:41.596488    8504 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:29:41.596672    8504 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:29:41.596759    8504 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM test-preload-597400 ).networkadapters[0]).ipaddresses[0]
	I0501 04:29:44.233646    8504 main.go:141] libmachine: [stdout =====>] : 172.28.215.255
	
	I0501 04:29:44.233646    8504 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:29:44.234208    8504 sshutil.go:53] new ssh client: &{IP:172.28.215.255 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\test-preload-597400\id_rsa Username:docker}
	I0501 04:29:44.259693    8504 main.go:141] libmachine: [stdout =====>] : 172.28.215.255
	
	I0501 04:29:44.260395    8504 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:29:44.260788    8504 sshutil.go:53] new ssh client: &{IP:172.28.215.255 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\test-preload-597400\id_rsa Username:docker}
	I0501 04:29:44.429848    8504 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.9649271s)
	I0501 04:29:44.429848    8504 ssh_runner.go:235] Completed: cat /version.json: (4.9531938s)
	I0501 04:29:44.444470    8504 ssh_runner.go:195] Run: systemctl --version
	I0501 04:29:44.469490    8504 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 04:29:44.477812    8504 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 04:29:44.491210    8504 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 04:29:44.520375    8504 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0501 04:29:44.520375    8504 start.go:494] detecting cgroup driver to use...
	I0501 04:29:44.520772    8504 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 04:29:44.569092    8504 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0501 04:29:44.607514    8504 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0501 04:29:44.631647    8504 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0501 04:29:44.645322    8504 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0501 04:29:44.680328    8504 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 04:29:44.717901    8504 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0501 04:29:44.754245    8504 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 04:29:44.790169    8504 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 04:29:44.827972    8504 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0501 04:29:44.861175    8504 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0501 04:29:44.896787    8504 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
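
Note: the sed commands above rewrite /etc/containerd/config.toml in place (sandbox image, cgroupfs driver, runc v2 shim, CNI conf dir, unprivileged ports). A quick, purely illustrative way to confirm the result from inside the VM before the runtime restart:

	grep -nE 'SystemdCgroup|sandbox_image|enable_unprivileged_ports|conf_dir' /etc/containerd/config.toml
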
	I0501 04:29:44.932486    8504 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 04:29:44.967899    8504 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 04:29:45.003893    8504 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 04:29:45.243007    8504 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0501 04:29:45.282007    8504 start.go:494] detecting cgroup driver to use...
	I0501 04:29:45.296861    8504 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0501 04:29:45.337611    8504 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 04:29:45.378023    8504 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 04:29:45.430511    8504 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 04:29:45.475968    8504 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 04:29:45.518233    8504 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0501 04:29:45.593424    8504 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 04:29:45.630298    8504 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 04:29:45.687610    8504 ssh_runner.go:195] Run: which cri-dockerd
	I0501 04:29:45.708486    8504 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0501 04:29:45.728845    8504 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0501 04:29:45.785154    8504 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0501 04:29:46.006989    8504 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0501 04:29:46.224450    8504 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0501 04:29:46.224725    8504 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
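
Note: here minikube writes a 130-byte daemon.json to move Docker onto the cgroupfs cgroup driver; the report records only the file's size, not its contents. A minimal check from inside the VM, assuming a standard Docker install (illustrative commands, not part of the test run):

	cat /etc/docker/daemon.json                    # inspect what minikube wrote
	sudo docker info --format '{{.CgroupDriver}}'  # would report "cgroupfs" had the restart succeeded
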
	I0501 04:29:46.280173    8504 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 04:29:46.524810    8504 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0501 04:30:47.671926    8504 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1466508s)
	I0501 04:30:47.686455    8504 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0501 04:30:47.722936    8504 out.go:177] 
	W0501 04:30:47.724999    8504 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	May 01 04:29:18 test-preload-597400 systemd[1]: Starting Docker Application Container Engine...
	May 01 04:29:18 test-preload-597400 dockerd[662]: time="2024-05-01T04:29:18.709600651Z" level=info msg="Starting up"
	May 01 04:29:18 test-preload-597400 dockerd[662]: time="2024-05-01T04:29:18.711129973Z" level=info msg="containerd not running, starting managed containerd"
	May 01 04:29:18 test-preload-597400 dockerd[662]: time="2024-05-01T04:29:18.712738597Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=669
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.757156151Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.791703259Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.791875461Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.792057564Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.792145165Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.793446584Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.793526586Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.793876791Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.794022093Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.794104494Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.794126494Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.794888706Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.795792219Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.799497473Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.799648076Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.799846079Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.799890679Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.800450887Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.800651790Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.800751792Z" level=info msg="metadata content store policy set" policy=shared
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.821037290Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.821174092Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.821199793Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.821231493Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.821250393Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.821378795Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.821835002Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.822008005Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.822239908Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.822263608Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.822279209Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.822429411Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.822688015Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.822719615Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.822738315Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.822760116Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.822779416Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.822793316Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.822817417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.822833317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.822853717Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.822869617Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.822883218Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.822897018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.822909618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.822923018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.822936618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.822959219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.822982819Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.822997819Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.823011719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.823028520Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.823050120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.823063520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.823082020Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.823132721Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.823156022Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.823168222Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.823178922Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.823270323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.823338924Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.823407025Z" level=info msg="NRI interface is disabled by configuration."
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.823765430Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.823955233Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.824076635Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	May 01 04:29:18 test-preload-597400 dockerd[669]: time="2024-05-01T04:29:18.824118436Z" level=info msg="containerd successfully booted in 0.069965s"
	May 01 04:29:19 test-preload-597400 dockerd[662]: time="2024-05-01T04:29:19.771694969Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	May 01 04:29:19 test-preload-597400 dockerd[662]: time="2024-05-01T04:29:19.908939562Z" level=info msg="Loading containers: start."
	May 01 04:29:20 test-preload-597400 dockerd[662]: time="2024-05-01T04:29:20.406277673Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	May 01 04:29:20 test-preload-597400 dockerd[662]: time="2024-05-01T04:29:20.501162838Z" level=info msg="Loading containers: done."
	May 01 04:29:20 test-preload-597400 dockerd[662]: time="2024-05-01T04:29:20.528353972Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	May 01 04:29:20 test-preload-597400 dockerd[662]: time="2024-05-01T04:29:20.529138082Z" level=info msg="Daemon has completed initialization"
	May 01 04:29:20 test-preload-597400 dockerd[662]: time="2024-05-01T04:29:20.586404086Z" level=info msg="API listen on /var/run/docker.sock"
	May 01 04:29:20 test-preload-597400 systemd[1]: Started Docker Application Container Engine.
	May 01 04:29:20 test-preload-597400 dockerd[662]: time="2024-05-01T04:29:20.588896416Z" level=info msg="API listen on [::]:2376"
	May 01 04:29:46 test-preload-597400 dockerd[662]: time="2024-05-01T04:29:46.553375823Z" level=info msg="Processing signal 'terminated'"
	May 01 04:29:46 test-preload-597400 systemd[1]: Stopping Docker Application Container Engine...
	May 01 04:29:46 test-preload-597400 dockerd[662]: time="2024-05-01T04:29:46.555926222Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	May 01 04:29:46 test-preload-597400 dockerd[662]: time="2024-05-01T04:29:46.556465921Z" level=info msg="Daemon shutdown complete"
	May 01 04:29:46 test-preload-597400 dockerd[662]: time="2024-05-01T04:29:46.556663321Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	May 01 04:29:46 test-preload-597400 dockerd[662]: time="2024-05-01T04:29:46.556977121Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	May 01 04:29:47 test-preload-597400 systemd[1]: docker.service: Deactivated successfully.
	May 01 04:29:47 test-preload-597400 systemd[1]: Stopped Docker Application Container Engine.
	May 01 04:29:47 test-preload-597400 systemd[1]: Starting Docker Application Container Engine...
	May 01 04:29:47 test-preload-597400 dockerd[1050]: time="2024-05-01T04:29:47.638060722Z" level=info msg="Starting up"
	May 01 04:30:47 test-preload-597400 dockerd[1050]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	May 01 04:30:47 test-preload-597400 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	May 01 04:30:47 test-preload-597400 systemd[1]: docker.service: Failed with result 'exit-code'.
	May 01 04:30:47 test-preload-597400 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
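
Note: the journal above pinpoints the failure. The first dockerd (pid 662) launched its own managed containerd and came up cleanly, but after minikube ran `sudo systemctl stop -f containerd` and then `systemctl restart docker`, the new dockerd (pid 1050) spent the full 60 s trying to dial /run/containerd/containerd.sock and exited, which is consistent with the new daemon configuration pointing at the system containerd socket. A triage sketch to run inside the VM (standard commands; whether the system containerd unit was meant to be running at this point is an assumption, since its journal is not in the report):

	sudo systemctl status containerd --no-pager                  # did the containerd unit come back?
	sudo journalctl -u containerd --no-pager -n 50               # its last lines around the restart
	ls -l /run/containerd/containerd.sock                        # the socket dockerd failed to dial
	sudo ctr --address /run/containerd/containerd.sock version   # does anything answer on it?
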
	W0501 04:30:47.725998    8504 out.go:239] * 
	W0501 04:30:47.727753    8504 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0501 04:30:47.732039    8504 out.go:177] 

                                                
                                                
** /stderr **
preload_test.go:68: out/minikube-windows-amd64.exe start -p test-preload-597400 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv failed: exit status 90
panic.go:626: *** TestPreload FAILED at 2024-05-01 04:30:47.9423614 +0000 UTC m=+8512.298174501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p test-preload-597400 -n test-preload-597400
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p test-preload-597400 -n test-preload-597400: exit status 6 (12.0860104s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	W0501 04:30:48.096771    3272 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0501 04:30:59.957621    3272 status.go:417] kubeconfig endpoint: get endpoint: "test-preload-597400" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
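
Note: the recurring `Unable to resolve the current Docker CLI context "default"` warning comes from the Docker CLI on the Windows host and is independent of the test failures themselves. A minimal host-side check, assuming a standard Docker CLI install:

	docker context ls            # list contexts and the one currently selected
	docker context use default   # reselect the default context if its metadata is stale
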
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "test-preload-597400" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "test-preload-597400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-597400
E0501 04:31:35.003266   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-597400: (1m1.7570529s)
--- FAIL: TestPreload (578.63s)

                                                
                                    
TestKubernetesUpgrade (1600.84s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-195400 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:222: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-195400 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv: (5m50.9107258s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-195400
E0501 04:43:38.046321   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\client.crt: The system cannot find the path specified.
version_upgrade_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-195400: (40.7548133s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-195400 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-195400 status --format={{.Host}}: exit status 7 (2.4793138s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0501 04:44:04.974142     800 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-195400 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=hyperv
E0501 04:45:01.302753   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\client.crt: The system cannot find the path specified.
version_upgrade_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-195400 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=hyperv: (7m44.0197773s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-195400 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-195400 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperv
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-195400 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperv: exit status 106 (325.5566ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-195400] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18779
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0501 04:51:51.672317   14324 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-195400
	    minikube start -p kubernetes-upgrade-195400 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1954002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0, by running:
	    
	    minikube start -p kubernetes-upgrade-195400 --kubernetes-version=v1.30.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-195400 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=hyperv
E0501 04:53:38.049856   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\client.crt: The system cannot find the path specified.
version_upgrade_test.go:275: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-195400 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=hyperv: exit status 90 (8m4.2891184s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-195400] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18779
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "kubernetes-upgrade-195400" primary control-plane node in "kubernetes-upgrade-195400" cluster
	* Updating the running hyperv "kubernetes-upgrade-195400" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0501 04:51:52.005420    6468 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0501 04:51:52.099380    6468 out.go:291] Setting OutFile to fd 1900 ...
	I0501 04:51:52.100471    6468 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 04:51:52.100542    6468 out.go:304] Setting ErrFile to fd 1904...
	I0501 04:51:52.100542    6468 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 04:51:52.125741    6468 out.go:298] Setting JSON to false
	I0501 04:51:52.130498    6468 start.go:129] hostinfo: {"hostname":"minikube6","uptime":112166,"bootTime":1714426945,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0501 04:51:52.130498    6468 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0501 04:51:52.134134    6468 out.go:177] * [kubernetes-upgrade-195400] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0501 04:51:52.136975    6468 notify.go:220] Checking for updates...
	I0501 04:51:52.141296    6468 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0501 04:51:52.147958    6468 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0501 04:51:52.155049    6468 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0501 04:51:52.157974    6468 out.go:177]   - MINIKUBE_LOCATION=18779
	I0501 04:51:52.160357    6468 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0501 04:51:52.163864    6468 config.go:182] Loaded profile config "kubernetes-upgrade-195400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 04:51:52.164539    6468 driver.go:392] Setting default libvirt URI to qemu:///system
	I0501 04:51:57.741152    6468 out.go:177] * Using the hyperv driver based on existing profile
	I0501 04:51:57.744417    6468 start.go:297] selected driver: hyperv
	I0501 04:51:57.744417    6468 start.go:901] validating driver "hyperv" against &{Name:kubernetes-upgrade-195400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kubernetes-upgrade-195400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.213.192 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 04:51:57.744417    6468 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0501 04:51:57.797386    6468 cni.go:84] Creating CNI manager for ""
	I0501 04:51:57.797386    6468 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0501 04:51:57.797963    6468 start.go:340] cluster config:
	{Name:kubernetes-upgrade-195400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kubernetes-upgrade-195400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.213.192 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 04:51:57.798222    6468 iso.go:125] acquiring lock: {Name:mkc5178610d1c169635b8b232f2713c359020679 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 04:51:57.801711    6468 out.go:177] * Starting "kubernetes-upgrade-195400" primary control-plane node in "kubernetes-upgrade-195400" cluster
	I0501 04:51:57.806403    6468 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0501 04:51:57.806403    6468 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0501 04:51:57.806403    6468 cache.go:56] Caching tarball of preloaded images
	I0501 04:51:57.806403    6468 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0501 04:51:57.807076    6468 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0501 04:51:57.807202    6468 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\kubernetes-upgrade-195400\config.json ...
	I0501 04:51:57.809033    6468 start.go:360] acquireMachinesLock for kubernetes-upgrade-195400: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 04:57:26.148201    6468 start.go:364] duration metric: took 5m28.3366212s to acquireMachinesLock for "kubernetes-upgrade-195400"
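The 5m28s spent in acquireMachinesLock above means another profile held minikube's machine lock for that long before this start could proceed (note the spec: Delay:500ms Timeout:13m0s). The real lock is a cross-process file lock, so the following Go sketch is illustrative only; it shows the same acquire-with-timeout pattern in-process.

// Illustrative sketch of acquire-with-timeout; not minikube's actual
// cross-process lock implementation.
package main

import (
	"fmt"
	"time"
)

type timedLock struct{ ch chan struct{} }

func newTimedLock() *timedLock {
	l := &timedLock{ch: make(chan struct{}, 1)}
	l.ch <- struct{}{} // lock starts free
	return l
}

// acquire blocks until the lock is free or the timeout elapses,
// mirroring the Timeout:13m0s spec shown in the log.
func (l *timedLock) acquire(timeout time.Duration) error {
	select {
	case <-l.ch:
		return nil
	case <-time.After(timeout):
		return fmt.Errorf("timed out after %s", timeout)
	}
}

func (l *timedLock) release() { l.ch <- struct{}{} }

func main() {
	lock := newTimedLock()
	start := time.Now()
	if err := lock.acquire(13 * time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	defer lock.release()
	fmt.Printf("took %s to acquire lock\n", time.Since(start))
}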
	I0501 04:57:26.148530    6468 start.go:96] Skipping create...Using existing machine configuration
	I0501 04:57:26.148530    6468 fix.go:54] fixHost starting: 
	I0501 04:57:26.149467    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-195400 ).state
	I0501 04:57:28.392218    6468 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:57:28.392218    6468 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:57:28.392346    6468 fix.go:112] recreateIfNeeded on kubernetes-upgrade-195400: state=Running err=<nil>
	W0501 04:57:28.392346    6468 fix.go:138] unexpected machine state, will restart: <nil>
	I0501 04:57:28.396376    6468 out.go:177] * Updating the running hyperv "kubernetes-upgrade-195400" VM ...
	I0501 04:57:28.401682    6468 machine.go:94] provisionDockerMachine start ...
	I0501 04:57:28.401682    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-195400 ).state
	I0501 04:57:30.652662    6468 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:57:30.653147    6468 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:57:30.653395    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-195400 ).networkadapters[0]).ipaddresses[0]
	I0501 04:57:33.389217    6468 main.go:141] libmachine: [stdout =====>] : 172.28.213.192
	
	I0501 04:57:33.389316    6468 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:57:33.398437    6468 main.go:141] libmachine: Using SSH client type: native
	I0501 04:57:33.399212    6468 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.192 22 <nil> <nil>}
	I0501 04:57:33.399212    6468 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 04:57:33.541839    6468 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-195400
	
	I0501 04:57:33.541926    6468 buildroot.go:166] provisioning hostname "kubernetes-upgrade-195400"
	I0501 04:57:33.542435    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-195400 ).state
	I0501 04:57:35.787837    6468 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:57:35.787837    6468 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:57:35.788524    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-195400 ).networkadapters[0]).ipaddresses[0]
	I0501 04:57:38.321904    6468 main.go:141] libmachine: [stdout =====>] : 172.28.213.192
	
	I0501 04:57:38.321904    6468 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:57:38.329489    6468 main.go:141] libmachine: Using SSH client type: native
	I0501 04:57:38.330254    6468 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.192 22 <nil> <nil>}
	I0501 04:57:38.330254    6468 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-195400 && echo "kubernetes-upgrade-195400" | sudo tee /etc/hostname
	I0501 04:57:38.495791    6468 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-195400
	
	I0501 04:57:38.495907    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-195400 ).state
	I0501 04:57:40.579123    6468 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:57:40.579123    6468 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:57:40.579123    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-195400 ).networkadapters[0]).ipaddresses[0]
	I0501 04:57:43.271234    6468 main.go:141] libmachine: [stdout =====>] : 172.28.213.192
	
	I0501 04:57:43.271234    6468 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:57:43.277233    6468 main.go:141] libmachine: Using SSH client type: native
	I0501 04:57:43.277233    6468 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.192 22 <nil> <nil>}
	I0501 04:57:43.278220    6468 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-195400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-195400/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-195400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 04:57:43.419452    6468 main.go:141] libmachine: SSH cmd err, output: <nil>: 
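The shell payload above makes the VM's own hostname resolve locally: if no /etc/hosts line already maps it, the 127.0.1.1 entry is rewritten, or appended if absent. A minimal Go sketch of the same fix-up (path and hostname are taken from the log; the helper name is illustrative):

// Sketch of the /etc/hosts fix-up performed over SSH above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostname(hostsPath, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	for _, l := range lines {
		f := strings.Fields(l)
		if len(f) >= 2 && f[len(f)-1] == name {
			return nil // hostname already mapped, nothing to do
		}
	}
	replaced := false
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name // rewrite existing entry
			replaced = true
			break
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+name) // or append a new one
	}
	return os.WriteFile(hostsPath, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	if err := ensureHostname("/etc/hosts", "kubernetes-upgrade-195400"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}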
	I0501 04:57:43.419551    6468 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0501 04:57:43.419690    6468 buildroot.go:174] setting up certificates
	I0501 04:57:43.419943    6468 provision.go:84] configureAuth start
	I0501 04:57:43.420010    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-195400 ).state
	I0501 04:57:45.755969    6468 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:57:45.756179    6468 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:57:45.756179    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-195400 ).networkadapters[0]).ipaddresses[0]
	I0501 04:57:48.387468    6468 main.go:141] libmachine: [stdout =====>] : 172.28.213.192
	
	I0501 04:57:48.387468    6468 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:57:48.387587    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-195400 ).state
	I0501 04:57:50.593239    6468 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:57:50.593239    6468 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:57:50.593352    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-195400 ).networkadapters[0]).ipaddresses[0]
	I0501 04:57:53.291824    6468 main.go:141] libmachine: [stdout =====>] : 172.28.213.192
	
	I0501 04:57:53.292666    6468 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:57:53.292666    6468 provision.go:143] copyHostCerts
	I0501 04:57:53.292774    6468 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0501 04:57:53.292774    6468 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0501 04:57:53.293410    6468 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0501 04:57:53.294938    6468 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0501 04:57:53.294998    6468 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0501 04:57:53.295107    6468 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0501 04:57:53.296767    6468 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0501 04:57:53.296767    6468 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0501 04:57:53.296767    6468 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0501 04:57:53.298136    6468 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kubernetes-upgrade-195400 san=[127.0.0.1 172.28.213.192 kubernetes-upgrade-195400 localhost minikube]
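The provisioner generates a server certificate carrying the SAN list logged above (127.0.0.1, the VM IP, the hostname, localhost, minikube) so the Docker TLS endpoint is valid under any of those names. A compact Go sketch of issuing such a certificate; the real flow signs with the minikube CA (ca.pem/ca-key.pem), while for brevity this example self-signs:

// Generate a server cert with the SANs from the log line above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-195400"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the san=[...] list in the log.
		DNSNames:    []string{"kubernetes-upgrade-195400", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.28.213.192")},
	}
	// Self-signed here (template doubles as parent); minikube signs with its CA.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}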
	I0501 04:57:53.499938    6468 provision.go:177] copyRemoteCerts
	I0501 04:57:53.516304    6468 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 04:57:53.516845    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-195400 ).state
	I0501 04:57:55.666445    6468 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:57:55.666445    6468 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:57:55.666720    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-195400 ).networkadapters[0]).ipaddresses[0]
	I0501 04:57:58.295808    6468 main.go:141] libmachine: [stdout =====>] : 172.28.213.192
	
	I0501 04:57:58.295808    6468 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:57:58.295808    6468 sshutil.go:53] new ssh client: &{IP:172.28.213.192 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\kubernetes-upgrade-195400\id_rsa Username:docker}
	I0501 04:57:58.410801    6468 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.894461s)
	I0501 04:57:58.411178    6468 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 04:57:58.471829    6468 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1241 bytes)
	I0501 04:57:58.534507    6468 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0501 04:57:58.587184    6468 provision.go:87] duration metric: took 15.1671281s to configureAuth
	I0501 04:57:58.587184    6468 buildroot.go:189] setting minikube options for container-runtime
	I0501 04:57:58.588146    6468 config.go:182] Loaded profile config "kubernetes-upgrade-195400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 04:57:58.588146    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-195400 ).state
	I0501 04:58:00.825209    6468 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:58:00.825209    6468 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:58:00.825325    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-195400 ).networkadapters[0]).ipaddresses[0]
	I0501 04:58:03.409770    6468 main.go:141] libmachine: [stdout =====>] : 172.28.213.192
	
	I0501 04:58:03.410203    6468 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:58:03.417981    6468 main.go:141] libmachine: Using SSH client type: native
	I0501 04:58:03.418634    6468 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.192 22 <nil> <nil>}
	I0501 04:58:03.418634    6468 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0501 04:58:03.561035    6468 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0501 04:58:03.561035    6468 buildroot.go:70] root file system type: tmpfs
	I0501 04:58:03.561035    6468 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0501 04:58:03.561612    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-195400 ).state
	I0501 04:58:05.775918    6468 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:58:05.775918    6468 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:58:05.775918    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-195400 ).networkadapters[0]).ipaddresses[0]
	I0501 04:58:08.551455    6468 main.go:141] libmachine: [stdout =====>] : 172.28.213.192
	
	I0501 04:58:08.552415    6468 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:58:08.558331    6468 main.go:141] libmachine: Using SSH client type: native
	I0501 04:58:08.559694    6468 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.192 22 <nil> <nil>}
	I0501 04:58:08.559694    6468 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0501 04:58:08.726548    6468 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0501 04:58:08.726548    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-195400 ).state
	I0501 04:58:10.914475    6468 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:58:10.914475    6468 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:58:10.915549    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-195400 ).networkadapters[0]).ipaddresses[0]
	I0501 04:58:13.588944    6468 main.go:141] libmachine: [stdout =====>] : 172.28.213.192
	
	I0501 04:58:13.589007    6468 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:58:13.594841    6468 main.go:141] libmachine: Using SSH client type: native
	I0501 04:58:13.595388    6468 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.192 22 <nil> <nil>}
	I0501 04:58:13.595388    6468 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0501 04:58:13.751179    6468 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 04:58:13.751179    6468 machine.go:97] duration metric: took 45.3491611s to provisionDockerMachine
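The diff/mv one-liner run just above is an idempotence guard: the freshly rendered unit only replaces /lib/systemd/system/docker.service, and docker is only reloaded and restarted, when the content actually changed. A Go sketch of the same "replace only if changed" idiom (the systemctl invocations mirror the shell command; the function name is illustrative):

// Replace the installed unit and restart the service only on change.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func updateUnit(current, proposed string) (changed bool, err error) {
	a, _ := os.ReadFile(current) // missing file reads as empty => treated as changed
	b, err := os.ReadFile(proposed)
	if err != nil {
		return false, err
	}
	if bytes.Equal(a, b) {
		return false, os.Remove(proposed) // identical: discard the .new file
	}
	if err := os.Rename(proposed, current); err != nil {
		return false, err
	}
	for _, args := range [][]string{{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"}} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return true, fmt.Errorf("systemctl %v: %v: %s", args, err, out)
		}
	}
	return true, nil
}

func main() {
	changed, err := updateUnit("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new")
	fmt.Println("changed:", changed, "err:", err)
}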
	I0501 04:58:13.751179    6468 start.go:293] postStartSetup for "kubernetes-upgrade-195400" (driver="hyperv")
	I0501 04:58:13.751179    6468 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 04:58:13.770561    6468 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 04:58:13.770561    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-195400 ).state
	I0501 04:58:15.989994    6468 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:58:15.989994    6468 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:58:15.990664    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-195400 ).networkadapters[0]).ipaddresses[0]
	I0501 04:58:18.753023    6468 main.go:141] libmachine: [stdout =====>] : 172.28.213.192
	
	I0501 04:58:18.753299    6468 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:58:18.753299    6468 sshutil.go:53] new ssh client: &{IP:172.28.213.192 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\kubernetes-upgrade-195400\id_rsa Username:docker}
	I0501 04:58:18.875315    6468 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.1046863s)
	I0501 04:58:18.893481    6468 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 04:58:18.901035    6468 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 04:58:18.901035    6468 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0501 04:58:18.901035    6468 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0501 04:58:18.901975    6468 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> 142882.pem in /etc/ssl/certs
	I0501 04:58:18.916856    6468 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 04:58:18.936576    6468 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /etc/ssl/certs/142882.pem (1708 bytes)
	I0501 04:58:18.997168    6468 start.go:296] duration metric: took 5.2459505s for postStartSetup
	I0501 04:58:18.997709    6468 fix.go:56] duration metric: took 52.848247s for fixHost
	I0501 04:58:18.997884    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-195400 ).state
	I0501 04:58:21.300038    6468 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:58:21.300903    6468 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:58:21.301180    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-195400 ).networkadapters[0]).ipaddresses[0]
	I0501 04:58:24.082610    6468 main.go:141] libmachine: [stdout =====>] : 172.28.213.192
	
	I0501 04:58:24.082610    6468 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:58:24.089466    6468 main.go:141] libmachine: Using SSH client type: native
	I0501 04:58:24.089630    6468 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.192 22 <nil> <nil>}
	I0501 04:58:24.089630    6468 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0501 04:58:24.226744    6468 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714539504.227006498
	
	I0501 04:58:24.227459    6468 fix.go:216] guest clock: 1714539504.227006498
	I0501 04:58:24.227459    6468 fix.go:229] Guest: 2024-05-01 04:58:24.227006498 +0000 UTC Remote: 2024-05-01 04:58:18.9978528 +0000 UTC m=+387.104099801 (delta=5.229153698s)
	I0501 04:58:24.227459    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-195400 ).state
	I0501 04:58:26.729368    6468 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:58:26.729673    6468 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:58:26.729673    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-195400 ).networkadapters[0]).ipaddresses[0]
	I0501 04:58:29.519104    6468 main.go:141] libmachine: [stdout =====>] : 172.28.213.192
	
	I0501 04:58:29.520081    6468 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:58:29.526050    6468 main.go:141] libmachine: Using SSH client type: native
	I0501 04:58:29.526502    6468 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.192 22 <nil> <nil>}
	I0501 04:58:29.526617    6468 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714539504
	I0501 04:58:29.689275    6468 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed May  1 04:58:24 UTC 2024
	
	I0501 04:58:29.689361    6468 fix.go:236] clock set: Wed May  1 04:58:24 UTC 2024
	 (err=<nil>)
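The clock-fix step above reads the guest clock over SSH (`date +%s.%N`), diffs it against the host-observed time, and resets it when the skew is too large: 1714539504.227 (guest, 04:58:24) minus 04:58:18.998 (remote) gives the logged delta of 5.229153698s, so `sudo date -s @1714539504` is issued. A worked Go version of that arithmetic (the tolerance constant is an assumption, not minikube's exact threshold):

// Reproduce the delta computation from the fix.go lines above.
package main

import (
	"fmt"
	"time"
)

func main() {
	guest := time.Unix(1714539504, 227006498)                       // from `date +%s.%N`
	remote := time.Date(2024, 5, 1, 4, 58, 18, 997852800, time.UTC) // host-side timestamp
	delta := guest.Sub(remote)
	fmt.Printf("delta=%s\n", delta) // ~5.229153698s, matching the log
	const tolerance = time.Second   // assumed threshold for illustration
	if delta > tolerance || delta < -tolerance {
		fmt.Printf("would run: sudo date -s @%d\n", guest.Unix())
	}
}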
	I0501 04:58:29.689361    6468 start.go:83] releasing machines lock for "kubernetes-upgrade-195400", held for 1m3.5405964s
	I0501 04:58:29.689751    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-195400 ).state
	I0501 04:58:32.124928    6468 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:58:32.124928    6468 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:58:32.125096    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-195400 ).networkadapters[0]).ipaddresses[0]
	I0501 04:58:34.860967    6468 main.go:141] libmachine: [stdout =====>] : 172.28.213.192
	
	I0501 04:58:34.860967    6468 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:58:34.867251    6468 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 04:58:34.867398    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-195400 ).state
	I0501 04:58:34.884178    6468 ssh_runner.go:195] Run: cat /version.json
	I0501 04:58:34.884178    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-195400 ).state
	I0501 04:58:37.278248    6468 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:58:37.279156    6468 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:58:37.278248    6468 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:58:37.279257    6468 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:58:37.279257    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-195400 ).networkadapters[0]).ipaddresses[0]
	I0501 04:58:37.279317    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-195400 ).networkadapters[0]).ipaddresses[0]
	I0501 04:58:40.088047    6468 main.go:141] libmachine: [stdout =====>] : 172.28.213.192
	
	I0501 04:58:40.088047    6468 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:58:40.089044    6468 sshutil.go:53] new ssh client: &{IP:172.28.213.192 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\kubernetes-upgrade-195400\id_rsa Username:docker}
	I0501 04:58:40.126031    6468 main.go:141] libmachine: [stdout =====>] : 172.28.213.192
	
	I0501 04:58:40.126031    6468 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:58:40.127045    6468 sshutil.go:53] new ssh client: &{IP:172.28.213.192 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\kubernetes-upgrade-195400\id_rsa Username:docker}
	I0501 04:58:42.195910    6468 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (7.3286047s)
	I0501 04:58:42.195910    6468 ssh_runner.go:235] Completed: cat /version.json: (7.3116786s)
	W0501 04:58:42.252541    6468 start.go:860] [curl -sS -m 2 https://registry.k8s.io/] failed: curl -sS -m 2 https://registry.k8s.io/: Process exited with status 28
	stdout:
	
	stderr:
	curl: (28) Resolving timed out after 2001 milliseconds
	W0501 04:58:42.252671    6468 out.go:239] ! This VM is having trouble accessing https://registry.k8s.io
	! This VM is having trouble accessing https://registry.k8s.io
	W0501 04:58:42.252671    6468 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
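The curl probe above failed with exit status 28 (curl's operation-timed-out code): DNS resolution for registry.k8s.io did not complete within the 2-second budget, which is why minikube warns about proxy configuration. An equivalent connectivity probe in Go with the same overall timeout:

// Probe registry.k8s.io with a 2s budget, like `curl -sS -m 2`.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get("https://registry.k8s.io/")
	if err != nil {
		fmt.Println("probe failed (proxy likely required):", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}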
	I0501 04:58:42.266782    6468 ssh_runner.go:195] Run: systemctl --version
	I0501 04:58:42.299826    6468 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 04:58:42.311066    6468 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 04:58:42.325977    6468 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0501 04:58:42.361977    6468 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0501 04:58:42.393946    6468 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
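The find/sed pipelines above normalize any existing bridge or podman CNI config so its pod subnet becomes 10.244.0.0/16 (and, for podman, the gateway 10.244.0.1). A Go sketch of the same fix-up done structurally instead of with sed; the input conflist shape here is an illustrative example of a typical bridge config:

// Force the IPAM subnet of a CNI conflist to 10.244.0.0/16.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	conflist := []byte(`{"cniVersion":"0.4.0","name":"bridge","plugins":[
	  {"type":"bridge","ipam":{"type":"host-local","subnet":"10.88.0.0/16"}}]}`)

	var cfg map[string]interface{}
	if err := json.Unmarshal(conflist, &cfg); err != nil {
		panic(err)
	}
	for _, p := range cfg["plugins"].([]interface{}) {
		plugin := p.(map[string]interface{})
		if ipam, ok := plugin["ipam"].(map[string]interface{}); ok {
			ipam["subnet"] = "10.244.0.0/16" // the pod CIDR minikube standardizes on
		}
	}
	out, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(out))
}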
	I0501 04:58:42.393946    6468 start.go:494] detecting cgroup driver to use...
	I0501 04:58:42.393946    6468 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 04:58:42.451421    6468 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0501 04:58:42.491670    6468 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0501 04:58:42.515251    6468 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0501 04:58:42.529805    6468 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0501 04:58:42.575845    6468 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 04:58:42.615289    6468 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0501 04:58:42.666156    6468 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 04:58:42.703178    6468 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 04:58:42.740685    6468 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0501 04:58:42.779036    6468 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0501 04:58:42.816548    6468 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0501 04:58:42.853894    6468 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 04:58:42.888889    6468 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 04:58:42.934137    6468 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 04:58:43.230082    6468 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0501 04:58:43.275396    6468 start.go:494] detecting cgroup driver to use...
	I0501 04:58:43.290065    6468 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0501 04:58:43.333097    6468 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 04:58:43.378598    6468 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 04:58:43.444635    6468 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 04:58:43.498697    6468 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 04:58:43.529120    6468 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 04:58:43.584597    6468 ssh_runner.go:195] Run: which cri-dockerd
	I0501 04:58:43.607869    6468 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0501 04:58:43.629186    6468 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0501 04:58:43.683118    6468 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0501 04:58:43.989355    6468 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0501 04:58:44.253890    6468 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0501 04:58:44.253890    6468 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
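The 130-byte daemon.json payload copied above is rendered in memory and not printed in the log; based on the surrounding "configuring docker to use cgroupfs" line, an assumed shape of that config is sketched below. Every key here is an assumption about the payload, not a quote from it:

// Assumed daemon.json content forcing the cgroupfs cgroup driver.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	cfg := map[string]interface{}{
		"exec-opts":      []string{"native.cgroupdriver=cgroupfs"}, // assumed key content
		"log-driver":     "json-file",
		"log-opts":       map[string]string{"max-size": "100m"},
		"storage-driver": "overlay2",
	}
	b, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Printf("%d bytes:\n%s\n", len(b), b)
}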
	I0501 04:58:44.306318    6468 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 04:58:44.595273    6468 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0501 04:59:56.007475    6468 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.4116662s)
	I0501 04:59:56.022139    6468 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0501 04:59:56.094982    6468 out.go:177] 
	W0501 04:59:56.098571    6468 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	May 01 04:50:37 kubernetes-upgrade-195400 systemd[1]: Starting Docker Application Container Engine...
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[660]: time="2024-05-01T04:50:37.737751575Z" level=info msg="Starting up"
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[660]: time="2024-05-01T04:50:37.738767487Z" level=info msg="containerd not running, starting managed containerd"
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[660]: time="2024-05-01T04:50:37.746848778Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=666
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.781270866Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.814639343Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.814782345Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.814868146Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.814902646Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.815922757Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.816066359Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.816719366Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.816948569Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.816976269Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.816989970Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.817593876Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.818498287Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.822173428Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.822643133Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.822983337Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.823067538Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.824101850Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.824224951Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.824397953Z" level=info msg="metadata content store policy set" policy=shared
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.827996294Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.828138495Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.828183996Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.828202296Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.828218696Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.828312797Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.828748402Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.828862604Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.828984105Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829023605Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829038306Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829052406Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829065506Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829080006Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829095006Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829108906Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829127407Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829139607Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829160107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829178007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829192407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829207007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829220408Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829234008Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829245808Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829258508Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829272108Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829286808Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829301909Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829392210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829428510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829446310Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829473210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829488211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829503311Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829772114Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829817214Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829833715Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829846415Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829922316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829964316Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.830001116Z" level=info msg="NRI interface is disabled by configuration."
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.830468122Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.830677224Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.830824826Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.830857126Z" level=info msg="containerd successfully booted in 0.052527s"
	May 01 04:50:38 kubernetes-upgrade-195400 dockerd[660]: time="2024-05-01T04:50:38.809503508Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	May 01 04:50:38 kubernetes-upgrade-195400 dockerd[660]: time="2024-05-01T04:50:38.949752892Z" level=info msg="Loading containers: start."
	May 01 04:50:39 kubernetes-upgrade-195400 dockerd[660]: time="2024-05-01T04:50:39.376127664Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	May 01 04:50:39 kubernetes-upgrade-195400 dockerd[660]: time="2024-05-01T04:50:39.469733044Z" level=info msg="Loading containers: done."
	May 01 04:50:39 kubernetes-upgrade-195400 dockerd[660]: time="2024-05-01T04:50:39.498790517Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	May 01 04:50:39 kubernetes-upgrade-195400 dockerd[660]: time="2024-05-01T04:50:39.499762926Z" level=info msg="Daemon has completed initialization"
	May 01 04:50:39 kubernetes-upgrade-195400 dockerd[660]: time="2024-05-01T04:50:39.560623799Z" level=info msg="API listen on /var/run/docker.sock"
	May 01 04:50:39 kubernetes-upgrade-195400 dockerd[660]: time="2024-05-01T04:50:39.560755300Z" level=info msg="API listen on [::]:2376"
	May 01 04:50:39 kubernetes-upgrade-195400 systemd[1]: Started Docker Application Container Engine.
	May 01 04:51:07 kubernetes-upgrade-195400 dockerd[660]: time="2024-05-01T04:51:07.350114100Z" level=info msg="Processing signal 'terminated'"
	May 01 04:51:07 kubernetes-upgrade-195400 systemd[1]: Stopping Docker Application Container Engine...
	May 01 04:51:07 kubernetes-upgrade-195400 dockerd[660]: time="2024-05-01T04:51:07.352882798Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	May 01 04:51:07 kubernetes-upgrade-195400 dockerd[660]: time="2024-05-01T04:51:07.353987198Z" level=info msg="Daemon shutdown complete"
	May 01 04:51:07 kubernetes-upgrade-195400 dockerd[660]: time="2024-05-01T04:51:07.354309097Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	May 01 04:51:07 kubernetes-upgrade-195400 dockerd[660]: time="2024-05-01T04:51:07.354366297Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	May 01 04:51:08 kubernetes-upgrade-195400 systemd[1]: docker.service: Deactivated successfully.
	May 01 04:51:08 kubernetes-upgrade-195400 systemd[1]: Stopped Docker Application Container Engine.
	May 01 04:51:08 kubernetes-upgrade-195400 systemd[1]: Starting Docker Application Container Engine...
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1128]: time="2024-05-01T04:51:08.435419937Z" level=info msg="Starting up"
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1128]: time="2024-05-01T04:51:08.437918636Z" level=info msg="containerd not running, starting managed containerd"
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1128]: time="2024-05-01T04:51:08.442687333Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1134
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.479597914Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.512343397Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.512395897Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.512441897Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.512460097Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.512493297Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.512507597Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.512750197Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.512793797Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.512812597Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.512834197Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.512861797Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.513053497Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.516269495Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.516423695Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.516614395Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.516771295Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.516821095Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.516843395Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.516856095Z" level=info msg="metadata content store policy set" policy=shared
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.517079095Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.517318495Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.517346095Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.517364195Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.517398295Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.517451995Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.517882494Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518072194Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518176094Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518199094Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518215394Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518231394Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518302194Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518326294Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518345894Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518378894Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518394194Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518408194Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518431094Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518472194Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518488394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518503394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518517394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518532494Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518547494Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518573194Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518591894Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518612294Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518676394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518697594Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518712994Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518741494Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518784094Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518883794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518908794Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518961794Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.519059994Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.519079794Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.519093694Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.519185394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.519281894Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.519300194Z" level=info msg="NRI interface is disabled by configuration."
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.519840093Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.520002693Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.520098793Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.520259193Z" level=info msg="containerd successfully booted in 0.041516s"
	May 01 04:51:09 kubernetes-upgrade-195400 dockerd[1128]: time="2024-05-01T04:51:09.679093592Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	May 01 04:51:09 kubernetes-upgrade-195400 dockerd[1128]: time="2024-05-01T04:51:09.726281468Z" level=info msg="Loading containers: start."
	May 01 04:51:10 kubernetes-upgrade-195400 dockerd[1128]: time="2024-05-01T04:51:10.252231895Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	May 01 04:51:10 kubernetes-upgrade-195400 dockerd[1128]: time="2024-05-01T04:51:10.339345050Z" level=info msg="Loading containers: done."
	May 01 04:51:10 kubernetes-upgrade-195400 dockerd[1128]: time="2024-05-01T04:51:10.362354538Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	May 01 04:51:10 kubernetes-upgrade-195400 dockerd[1128]: time="2024-05-01T04:51:10.362550238Z" level=info msg="Daemon has completed initialization"
	May 01 04:51:10 kubernetes-upgrade-195400 dockerd[1128]: time="2024-05-01T04:51:10.413767912Z" level=info msg="API listen on /var/run/docker.sock"
	May 01 04:51:10 kubernetes-upgrade-195400 dockerd[1128]: time="2024-05-01T04:51:10.414020712Z" level=info msg="API listen on [::]:2376"
	May 01 04:51:10 kubernetes-upgrade-195400 systemd[1]: Started Docker Application Container Engine.
	May 01 04:51:23 kubernetes-upgrade-195400 systemd[1]: Stopping Docker Application Container Engine...
	May 01 04:51:23 kubernetes-upgrade-195400 dockerd[1128]: time="2024-05-01T04:51:23.320729422Z" level=info msg="Processing signal 'terminated'"
	May 01 04:51:23 kubernetes-upgrade-195400 dockerd[1128]: time="2024-05-01T04:51:23.322318821Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	May 01 04:51:23 kubernetes-upgrade-195400 dockerd[1128]: time="2024-05-01T04:51:23.322584021Z" level=info msg="Daemon shutdown complete"
	May 01 04:51:23 kubernetes-upgrade-195400 dockerd[1128]: time="2024-05-01T04:51:23.322679921Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	May 01 04:51:23 kubernetes-upgrade-195400 dockerd[1128]: time="2024-05-01T04:51:23.322714120Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	May 01 04:51:24 kubernetes-upgrade-195400 systemd[1]: docker.service: Deactivated successfully.
	May 01 04:51:24 kubernetes-upgrade-195400 systemd[1]: Stopped Docker Application Container Engine.
	May 01 04:51:24 kubernetes-upgrade-195400 systemd[1]: Starting Docker Application Container Engine...
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:51:24.407789758Z" level=info msg="Starting up"
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:51:24.409499257Z" level=info msg="containerd not running, starting managed containerd"
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:51:24.413531855Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1548
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.447762137Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.480301720Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.480447820Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.480505020Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.480523520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.480556820Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.480571120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.480875520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.480977020Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.481000920Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.481014520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.481044220Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.481264720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.484750318Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.484899518Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.485276818Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.485404218Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.485456518Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.485494918Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.485515618Z" level=info msg="metadata content store policy set" policy=shared
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.485974318Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.486037317Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.486059217Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.486077817Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.486095617Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.486157717Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.486405817Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.486558717Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.486779117Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.486909017Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.486956417Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487081817Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487103517Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487123117Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487140217Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487155517Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487170317Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487183417Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487206017Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487221817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487239617Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487256817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487271917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487287517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487301117Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487315617Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487330317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487347717Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487362417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487375817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487390117Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487407717Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487431317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487447017Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487464617Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487609717Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487847517Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487866817Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487895317Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487965416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.488005916Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.488020216Z" level=info msg="NRI interface is disabled by configuration."
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.488290216Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.488786116Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.488901316Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.488947916Z" level=info msg="containerd successfully booted in 0.043088s"
	May 01 04:51:25 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:51:25.460814712Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	May 01 04:51:26 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:51:26.441518504Z" level=info msg="Loading containers: start."
	May 01 04:51:26 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:51:26.746797846Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	May 01 04:51:26 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:51:26.843216196Z" level=info msg="Loading containers: done."
	May 01 04:51:26 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:51:26.871866881Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	May 01 04:51:26 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:51:26.872160181Z" level=info msg="Daemon has completed initialization"
	May 01 04:51:26 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:51:26.932792949Z" level=info msg="API listen on /var/run/docker.sock"
	May 01 04:51:26 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:51:26.932894949Z" level=info msg="API listen on [::]:2376"
	May 01 04:51:26 kubernetes-upgrade-195400 systemd[1]: Started Docker Application Container Engine.
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.413186729Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.414235684Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.414546271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.415078249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.424127568Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.424610747Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.424794440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.425193723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.542976762Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.543400544Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.545839841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.546397518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.577947289Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.578034185Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.578048784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.578159080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.894754445Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.894994034Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.895186926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.895387118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:34 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:34.087263859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:51:34 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:34.087360555Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:51:34 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:34.087381154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:34 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:34.087506249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:34 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:34.176019851Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:51:34 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:34.176225443Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:51:34 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:34.176496032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:34 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:34.176787121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:34 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:34.187390202Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:51:34 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:34.188528257Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:51:34 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:34.188594754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:34 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:34.188768047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:39 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:39.128344007Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:51:39 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:39.129249380Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:51:39 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:39.129484474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:39 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:39.129738966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:39 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:39.197014331Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:51:39 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:39.197313923Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:51:39 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:39.197801709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:39 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:39.198518588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:39 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:39.240403683Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:51:39 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:39.240610677Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:51:39 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:39.240816371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:39 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:39.241779844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:39 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:39.879119437Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:51:39 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:39.879206734Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:51:39 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:39.879239133Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:39 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:39.879390529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:40 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:40.037846206Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:51:40 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:40.038017206Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:51:40 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:40.038129306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:40 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:40.038431606Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:40 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:40.177136366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:51:40 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:40.178119766Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:51:40 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:40.178320766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:40 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:40.179146567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:52 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:51:52.319143874Z" level=info msg="ignoring event" container=a702668cb1d99edf14c8b41226934cd835dc40912e2587fb90bd74fd6bc1a56a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:51:52 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:52.321379575Z" level=info msg="shim disconnected" id=a702668cb1d99edf14c8b41226934cd835dc40912e2587fb90bd74fd6bc1a56a namespace=moby
	May 01 04:51:52 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:52.321488875Z" level=warning msg="cleaning up after shim disconnected" id=a702668cb1d99edf14c8b41226934cd835dc40912e2587fb90bd74fd6bc1a56a namespace=moby
	May 01 04:51:52 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:52.321504075Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:51:52 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:51:52.495136752Z" level=info msg="ignoring event" container=76f131fe4f91537b2024fd5ba4f9289632c35b24212599f4a4a2668b2d3a3396 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:51:52 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:52.495289052Z" level=info msg="shim disconnected" id=76f131fe4f91537b2024fd5ba4f9289632c35b24212599f4a4a2668b2d3a3396 namespace=moby
	May 01 04:51:52 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:52.495438452Z" level=warning msg="cleaning up after shim disconnected" id=76f131fe4f91537b2024fd5ba4f9289632c35b24212599f4a4a2668b2d3a3396 namespace=moby
	May 01 04:51:52 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:52.495517952Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:51:53 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:53.013242982Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:51:53 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:53.013356882Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:51:53 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:53.013378582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:53 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:53.013491682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:53 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:53.448309876Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:51:53 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:53.448587876Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:51:53 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:53.448614576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:53 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:53.449788876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:53 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:53.706557891Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:51:53 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:53.706871191Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:51:53 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:53.707251391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:53 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:53.707594791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:53 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:53.847005754Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:51:53 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:53.847714754Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:51:53 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:53.847764954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:53 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:53.847899454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:57 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:51:57.316030045Z" level=info msg="ignoring event" container=ba52d8cc065e5df5505bf819ff9b9e1d4f0479d0c5cbd06a90067baf3f4f792e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:51:57 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:57.318951753Z" level=info msg="shim disconnected" id=ba52d8cc065e5df5505bf819ff9b9e1d4f0479d0c5cbd06a90067baf3f4f792e namespace=moby
	May 01 04:51:57 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:57.319103754Z" level=warning msg="cleaning up after shim disconnected" id=ba52d8cc065e5df5505bf819ff9b9e1d4f0479d0c5cbd06a90067baf3f4f792e namespace=moby
	May 01 04:51:57 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:57.319127354Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:51:57 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:57.501270776Z" level=info msg="shim disconnected" id=f8510ef59edf8760427113183218416e1f1af14e46d6086123e80bcc0f19a16b namespace=moby
	May 01 04:51:57 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:57.501439176Z" level=warning msg="cleaning up after shim disconnected" id=f8510ef59edf8760427113183218416e1f1af14e46d6086123e80bcc0f19a16b namespace=moby
	May 01 04:51:57 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:57.501465976Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:51:57 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:51:57.503202381Z" level=info msg="ignoring event" container=f8510ef59edf8760427113183218416e1f1af14e46d6086123e80bcc0f19a16b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:52:10 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:52:10.217053199Z" level=info msg="ignoring event" container=002af6c61dad38fdf11efa2b94434473c56d8c09754dde182d8b66817f424c45 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:52:10 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:52:10.217810700Z" level=info msg="shim disconnected" id=002af6c61dad38fdf11efa2b94434473c56d8c09754dde182d8b66817f424c45 namespace=moby
	May 01 04:52:10 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:52:10.218203700Z" level=warning msg="cleaning up after shim disconnected" id=002af6c61dad38fdf11efa2b94434473c56d8c09754dde182d8b66817f424c45 namespace=moby
	May 01 04:52:10 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:52:10.218436801Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:52:23 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:52:23.725718036Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:52:23 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:52:23.727251739Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:52:23 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:52:23.727496339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:52:23 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:52:23.727823439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:53:50 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:53:50.826713206Z" level=info msg="ignoring event" container=8b879eb35076792cbf7068b9185b9095b5872d9df0822d9163c94abc91c282c6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:53:50 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:53:50.829354210Z" level=info msg="shim disconnected" id=8b879eb35076792cbf7068b9185b9095b5872d9df0822d9163c94abc91c282c6 namespace=moby
	May 01 04:53:50 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:53:50.829586110Z" level=warning msg="cleaning up after shim disconnected" id=8b879eb35076792cbf7068b9185b9095b5872d9df0822d9163c94abc91c282c6 namespace=moby
	May 01 04:53:50 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:53:50.829618410Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:53:51 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:53:51.140540023Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:53:51 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:53:51.144227528Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:53:51 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:53:51.144466028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:53:51 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:53:51.145094629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:55:40 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:55:40.302862066Z" level=info msg="ignoring event" container=ad847c495985ffc62c24a8a880e840cab0a500eb63d342a76d14baae863a2082 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:55:40 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:55:40.303441167Z" level=info msg="shim disconnected" id=ad847c495985ffc62c24a8a880e840cab0a500eb63d342a76d14baae863a2082 namespace=moby
	May 01 04:55:40 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:55:40.303934667Z" level=warning msg="cleaning up after shim disconnected" id=ad847c495985ffc62c24a8a880e840cab0a500eb63d342a76d14baae863a2082 namespace=moby
	May 01 04:55:40 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:55:40.304005467Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:55:40 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:55:40.539544252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:55:40 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:55:40.539685752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:55:40 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:55:40.539908752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:55:40 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:55:40.540327953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:57:30 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:57:30.773728874Z" level=info msg="shim disconnected" id=5078cbc153bd038685f6e4a7b53c9f40ad1defbcabe87cd81c12a214a66d8e1a namespace=moby
	May 01 04:57:30 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:57:30.773845074Z" level=warning msg="cleaning up after shim disconnected" id=5078cbc153bd038685f6e4a7b53c9f40ad1defbcabe87cd81c12a214a66d8e1a namespace=moby
	May 01 04:57:30 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:57:30.773869274Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:57:30 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:57:30.774610675Z" level=info msg="ignoring event" container=5078cbc153bd038685f6e4a7b53c9f40ad1defbcabe87cd81c12a214a66d8e1a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:57:31 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:57:31.021150449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:57:31 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:57:31.021262949Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:57:31 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:57:31.021284349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:57:31 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:57:31.021391949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:58:44 kubernetes-upgrade-195400 systemd[1]: Stopping Docker Application Container Engine...
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:44.627843811Z" level=info msg="Processing signal 'terminated'"
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:44.897162901Z" level=info msg="ignoring event" container=b5117a7b7f02db6847aa9ccd848b816ab792209e8a9ce11cc9ad89c01f863aba module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.897513901Z" level=info msg="shim disconnected" id=b5117a7b7f02db6847aa9ccd848b816ab792209e8a9ce11cc9ad89c01f863aba namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.898294202Z" level=warning msg="cleaning up after shim disconnected" id=b5117a7b7f02db6847aa9ccd848b816ab792209e8a9ce11cc9ad89c01f863aba namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:44.899367102Z" level=info msg="ignoring event" container=44373db87f42bafe57d51ebf6f495bae909356395959fdad2cd9d92e6aa022ed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.900821204Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.899870203Z" level=info msg="shim disconnected" id=44373db87f42bafe57d51ebf6f495bae909356395959fdad2cd9d92e6aa022ed namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.902192404Z" level=warning msg="cleaning up after shim disconnected" id=44373db87f42bafe57d51ebf6f495bae909356395959fdad2cd9d92e6aa022ed namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.902343805Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:44.908141309Z" level=info msg="ignoring event" container=099a9cab2b43676db5a6c3a7547a0a37cb91aaa0d8c7d9493cc49485e74cf4f3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.908581809Z" level=info msg="shim disconnected" id=099a9cab2b43676db5a6c3a7547a0a37cb91aaa0d8c7d9493cc49485e74cf4f3 namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.909757610Z" level=warning msg="cleaning up after shim disconnected" id=099a9cab2b43676db5a6c3a7547a0a37cb91aaa0d8c7d9493cc49485e74cf4f3 namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.909868210Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.912899312Z" level=info msg="shim disconnected" id=3e46b7508aa0fe3f5b71848d9d3af88c939caa79a6bddf468ebfd87c3bf42031 namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.913016012Z" level=warning msg="cleaning up after shim disconnected" id=3e46b7508aa0fe3f5b71848d9d3af88c939caa79a6bddf468ebfd87c3bf42031 namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.913071912Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:44.913308012Z" level=info msg="ignoring event" container=3e46b7508aa0fe3f5b71848d9d3af88c939caa79a6bddf468ebfd87c3bf42031 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:44.927245222Z" level=info msg="ignoring event" container=2a6f3d078ffd0c400477de8f151e16f5998a0af3b07ff8d28d625e3be1812012 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.932102326Z" level=info msg="shim disconnected" id=2a6f3d078ffd0c400477de8f151e16f5998a0af3b07ff8d28d625e3be1812012 namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.932166426Z" level=warning msg="cleaning up after shim disconnected" id=2a6f3d078ffd0c400477de8f151e16f5998a0af3b07ff8d28d625e3be1812012 namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.932177826Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.947662136Z" level=info msg="shim disconnected" id=582ab6a8f5d222a955e55ae3bc812564c286e9c73381b0352c0187792261ea13 namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.947851637Z" level=warning msg="cleaning up after shim disconnected" id=582ab6a8f5d222a955e55ae3bc812564c286e9c73381b0352c0187792261ea13 namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.947991237Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.959224745Z" level=info msg="shim disconnected" id=3d06df12a9bcd17ff10b5b61d78aba629d1433d3d24f0cf8615f5162fbd31247 namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.959700845Z" level=warning msg="cleaning up after shim disconnected" id=3d06df12a9bcd17ff10b5b61d78aba629d1433d3d24f0cf8615f5162fbd31247 namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.959887445Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.980052959Z" level=info msg="shim disconnected" id=f3f3f3452164964cd1db3e00ff78d6a5ca0ce6593a41e92b5f993fe749aade1a namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.980291559Z" level=warning msg="cleaning up after shim disconnected" id=f3f3f3452164964cd1db3e00ff78d6a5ca0ce6593a41e92b5f993fe749aade1a namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.980364159Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:44.989875866Z" level=info msg="ignoring event" container=582ab6a8f5d222a955e55ae3bc812564c286e9c73381b0352c0187792261ea13 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:44.989960566Z" level=info msg="ignoring event" container=c2357be2231361364fce76ff51b4ae9d1131f6fe78b72703b147c20b015a06de module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:44.990003666Z" level=info msg="ignoring event" container=f3f3f3452164964cd1db3e00ff78d6a5ca0ce6593a41e92b5f993fe749aade1a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:44.990020766Z" level=info msg="ignoring event" container=3d06df12a9bcd17ff10b5b61d78aba629d1433d3d24f0cf8615f5162fbd31247 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.994298969Z" level=info msg="shim disconnected" id=c2357be2231361364fce76ff51b4ae9d1131f6fe78b72703b147c20b015a06de namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.995178470Z" level=warning msg="cleaning up after shim disconnected" id=c2357be2231361364fce76ff51b4ae9d1131f6fe78b72703b147c20b015a06de namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.995325270Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:58:45 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:45.016571985Z" level=info msg="ignoring event" container=b0968731e4aaed49195fa0c394c187045854e9195a1762c7914db8b8f1fd69db module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:58:45 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:45.017610086Z" level=info msg="ignoring event" container=4b2878dcc077d7cf0f29f72f1e01c9da4c4494d30c0effb89b34424561595156 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:58:45 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:45.019266687Z" level=info msg="shim disconnected" id=4b2878dcc077d7cf0f29f72f1e01c9da4c4494d30c0effb89b34424561595156 namespace=moby
	May 01 04:58:45 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:45.024832391Z" level=warning msg="cleaning up after shim disconnected" id=4b2878dcc077d7cf0f29f72f1e01c9da4c4494d30c0effb89b34424561595156 namespace=moby
	May 01 04:58:45 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:45.035972699Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:58:45 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:45.022186289Z" level=info msg="shim disconnected" id=b0968731e4aaed49195fa0c394c187045854e9195a1762c7914db8b8f1fd69db namespace=moby
	May 01 04:58:45 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:45.040723802Z" level=warning msg="cleaning up after shim disconnected" id=b0968731e4aaed49195fa0c394c187045854e9195a1762c7914db8b8f1fd69db namespace=moby
	May 01 04:58:45 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:45.040854102Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:58:45 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:45.172240194Z" level=info msg="ignoring event" container=59112f4b5921294cf7202582c474f5460c7942d87268034743b6069daa7b9c51 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:58:45 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:45.174360396Z" level=info msg="shim disconnected" id=59112f4b5921294cf7202582c474f5460c7942d87268034743b6069daa7b9c51 namespace=moby
	May 01 04:58:45 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:45.174623696Z" level=warning msg="cleaning up after shim disconnected" id=59112f4b5921294cf7202582c474f5460c7942d87268034743b6069daa7b9c51 namespace=moby
	May 01 04:58:45 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:45.174703096Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:58:45 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:45.206821619Z" level=warning msg="cleanup warnings time=\"2024-05-01T04:58:45Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	May 01 04:58:49 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:49.794608347Z" level=info msg="ignoring event" container=c749b700214b51577cd07fc80e2b035918fd9fc4db94292bcdda73988f7b3145 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:58:49 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:49.794869547Z" level=info msg="shim disconnected" id=c749b700214b51577cd07fc80e2b035918fd9fc4db94292bcdda73988f7b3145 namespace=moby
	May 01 04:58:49 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:49.794927047Z" level=warning msg="cleaning up after shim disconnected" id=c749b700214b51577cd07fc80e2b035918fd9fc4db94292bcdda73988f7b3145 namespace=moby
	May 01 04:58:49 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:49.794937947Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:58:54 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:54.741102948Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=21b5ea540078fc55d925b1f77d5e5bf9d9cf8a14877bd60d798c61ff4ebaa3e6
	May 01 04:58:54 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:54.791622809Z" level=info msg="ignoring event" container=21b5ea540078fc55d925b1f77d5e5bf9d9cf8a14877bd60d798c61ff4ebaa3e6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:58:54 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:54.792569753Z" level=info msg="shim disconnected" id=21b5ea540078fc55d925b1f77d5e5bf9d9cf8a14877bd60d798c61ff4ebaa3e6 namespace=moby
	May 01 04:58:54 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:54.792713060Z" level=warning msg="cleaning up after shim disconnected" id=21b5ea540078fc55d925b1f77d5e5bf9d9cf8a14877bd60d798c61ff4ebaa3e6 namespace=moby
	May 01 04:58:54 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:54.792893468Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:58:54 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:54.879540419Z" level=info msg="Daemon shutdown complete"
	May 01 04:58:54 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:54.879629923Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	May 01 04:58:54 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:54.879801131Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	May 01 04:58:54 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:54.879848133Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	May 01 04:58:55 kubernetes-upgrade-195400 systemd[1]: docker.service: Deactivated successfully.
	May 01 04:58:55 kubernetes-upgrade-195400 systemd[1]: Stopped Docker Application Container Engine.
	May 01 04:58:55 kubernetes-upgrade-195400 systemd[1]: docker.service: Consumed 13.841s CPU time.
	May 01 04:58:55 kubernetes-upgrade-195400 systemd[1]: Starting Docker Application Container Engine...
	May 01 04:58:55 kubernetes-upgrade-195400 dockerd[5745]: time="2024-05-01T04:58:55.976161779Z" level=info msg="Starting up"
	May 01 04:59:56 kubernetes-upgrade-195400 dockerd[5745]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	May 01 04:59:56 kubernetes-upgrade-195400 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	May 01 04:59:56 kubernetes-upgrade-195400 systemd[1]: docker.service: Failed with result 'exit-code'.
	May 01 04:59:56 kubernetes-upgrade-195400 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
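	Reading the tail of the dump above, the likely failure chain is: the earlier dockerd instances each launch a managed containerd on /var/run/docker/containerd/containerd.sock, but the final start attempt (dockerd pid 5745) dials /run/containerd/containerd.sock and hits "context deadline exceeded", so the daemon never comes up. A minimal triage sketch from inside the node follows; the socket paths and profile name are taken from the log itself, while the ssh invocation is an assumption based on how the report invokes the minikube binary elsewhere:

	# Open a shell on the affected node (profile name from the log above).
	out/minikube-windows-amd64.exe -p kubernetes-upgrade-195400 ssh

	# Check whether anything is listening on the socket dockerd tried to dial.
	ls -l /run/containerd/containerd.sock || echo "socket missing"

	# The managed containerd from the earlier successful starts listened here instead:
	ls -l /var/run/docker/containerd/containerd.sock

	# If a system containerd is expected at the failing path, probe it over that socket.
	sudo ctr --address /run/containerd/containerd.sock version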
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	May 01 04:50:37 kubernetes-upgrade-195400 systemd[1]: Starting Docker Application Container Engine...
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[660]: time="2024-05-01T04:50:37.737751575Z" level=info msg="Starting up"
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[660]: time="2024-05-01T04:50:37.738767487Z" level=info msg="containerd not running, starting managed containerd"
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[660]: time="2024-05-01T04:50:37.746848778Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=666
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.781270866Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.814639343Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.814782345Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.814868146Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.814902646Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.815922757Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.816066359Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.816719366Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.816948569Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.816976269Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.816989970Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.817593876Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.818498287Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.822173428Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.822643133Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.822983337Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.823067538Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.824101850Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.824224951Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.824397953Z" level=info msg="metadata content store policy set" policy=shared
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.827996294Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.828138495Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.828183996Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.828202296Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.828218696Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.828312797Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.828748402Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.828862604Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.828984105Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829023605Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829038306Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829052406Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829065506Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829080006Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829095006Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829108906Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829127407Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829139607Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829160107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829178007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829192407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829207007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829220408Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829234008Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829245808Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829258508Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829272108Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829286808Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829301909Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829392210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829428510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829446310Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829473210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829488211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829503311Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829772114Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829817214Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829833715Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829846415Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829922316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829964316Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.830001116Z" level=info msg="NRI interface is disabled by configuration."
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.830468122Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.830677224Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.830824826Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.830857126Z" level=info msg="containerd successfully booted in 0.052527s"
	May 01 04:50:38 kubernetes-upgrade-195400 dockerd[660]: time="2024-05-01T04:50:38.809503508Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	May 01 04:50:38 kubernetes-upgrade-195400 dockerd[660]: time="2024-05-01T04:50:38.949752892Z" level=info msg="Loading containers: start."
	May 01 04:50:39 kubernetes-upgrade-195400 dockerd[660]: time="2024-05-01T04:50:39.376127664Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	May 01 04:50:39 kubernetes-upgrade-195400 dockerd[660]: time="2024-05-01T04:50:39.469733044Z" level=info msg="Loading containers: done."
	May 01 04:50:39 kubernetes-upgrade-195400 dockerd[660]: time="2024-05-01T04:50:39.498790517Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	May 01 04:50:39 kubernetes-upgrade-195400 dockerd[660]: time="2024-05-01T04:50:39.499762926Z" level=info msg="Daemon has completed initialization"
	May 01 04:50:39 kubernetes-upgrade-195400 dockerd[660]: time="2024-05-01T04:50:39.560623799Z" level=info msg="API listen on /var/run/docker.sock"
	May 01 04:50:39 kubernetes-upgrade-195400 dockerd[660]: time="2024-05-01T04:50:39.560755300Z" level=info msg="API listen on [::]:2376"
	May 01 04:50:39 kubernetes-upgrade-195400 systemd[1]: Started Docker Application Container Engine.
	May 01 04:51:07 kubernetes-upgrade-195400 dockerd[660]: time="2024-05-01T04:51:07.350114100Z" level=info msg="Processing signal 'terminated'"
	May 01 04:51:07 kubernetes-upgrade-195400 systemd[1]: Stopping Docker Application Container Engine...
	May 01 04:51:07 kubernetes-upgrade-195400 dockerd[660]: time="2024-05-01T04:51:07.352882798Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	May 01 04:51:07 kubernetes-upgrade-195400 dockerd[660]: time="2024-05-01T04:51:07.353987198Z" level=info msg="Daemon shutdown complete"
	May 01 04:51:07 kubernetes-upgrade-195400 dockerd[660]: time="2024-05-01T04:51:07.354309097Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	May 01 04:51:07 kubernetes-upgrade-195400 dockerd[660]: time="2024-05-01T04:51:07.354366297Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	May 01 04:51:08 kubernetes-upgrade-195400 systemd[1]: docker.service: Deactivated successfully.
	May 01 04:51:08 kubernetes-upgrade-195400 systemd[1]: Stopped Docker Application Container Engine.
	May 01 04:51:08 kubernetes-upgrade-195400 systemd[1]: Starting Docker Application Container Engine...
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1128]: time="2024-05-01T04:51:08.435419937Z" level=info msg="Starting up"
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1128]: time="2024-05-01T04:51:08.437918636Z" level=info msg="containerd not running, starting managed containerd"
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1128]: time="2024-05-01T04:51:08.442687333Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1134
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.479597914Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.512343397Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.512395897Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.512441897Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.512460097Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.512493297Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.512507597Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.512750197Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.512793797Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.512812597Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.512834197Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.512861797Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.513053497Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.516269495Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.516423695Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.516614395Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.516771295Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.516821095Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.516843395Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.516856095Z" level=info msg="metadata content store policy set" policy=shared
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.517079095Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.517318495Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.517346095Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.517364195Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.517398295Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.517451995Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.517882494Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518072194Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518176094Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518199094Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518215394Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518231394Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518302194Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518326294Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518345894Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518378894Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518394194Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518408194Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518431094Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518472194Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518488394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518503394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518517394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518532494Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518547494Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518573194Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518591894Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518612294Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518676394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518697594Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518712994Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518741494Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518784094Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518883794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518908794Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518961794Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.519059994Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.519079794Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.519093694Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.519185394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.519281894Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.519300194Z" level=info msg="NRI interface is disabled by configuration."
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.519840093Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.520002693Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.520098793Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.520259193Z" level=info msg="containerd successfully booted in 0.041516s"
	May 01 04:51:09 kubernetes-upgrade-195400 dockerd[1128]: time="2024-05-01T04:51:09.679093592Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	May 01 04:51:09 kubernetes-upgrade-195400 dockerd[1128]: time="2024-05-01T04:51:09.726281468Z" level=info msg="Loading containers: start."
	May 01 04:51:10 kubernetes-upgrade-195400 dockerd[1128]: time="2024-05-01T04:51:10.252231895Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	May 01 04:51:10 kubernetes-upgrade-195400 dockerd[1128]: time="2024-05-01T04:51:10.339345050Z" level=info msg="Loading containers: done."
	May 01 04:51:10 kubernetes-upgrade-195400 dockerd[1128]: time="2024-05-01T04:51:10.362354538Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	May 01 04:51:10 kubernetes-upgrade-195400 dockerd[1128]: time="2024-05-01T04:51:10.362550238Z" level=info msg="Daemon has completed initialization"
	May 01 04:51:10 kubernetes-upgrade-195400 dockerd[1128]: time="2024-05-01T04:51:10.413767912Z" level=info msg="API listen on /var/run/docker.sock"
	May 01 04:51:10 kubernetes-upgrade-195400 dockerd[1128]: time="2024-05-01T04:51:10.414020712Z" level=info msg="API listen on [::]:2376"
	May 01 04:51:10 kubernetes-upgrade-195400 systemd[1]: Started Docker Application Container Engine.
	May 01 04:51:23 kubernetes-upgrade-195400 systemd[1]: Stopping Docker Application Container Engine...
	May 01 04:51:23 kubernetes-upgrade-195400 dockerd[1128]: time="2024-05-01T04:51:23.320729422Z" level=info msg="Processing signal 'terminated'"
	May 01 04:51:23 kubernetes-upgrade-195400 dockerd[1128]: time="2024-05-01T04:51:23.322318821Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	May 01 04:51:23 kubernetes-upgrade-195400 dockerd[1128]: time="2024-05-01T04:51:23.322584021Z" level=info msg="Daemon shutdown complete"
	May 01 04:51:23 kubernetes-upgrade-195400 dockerd[1128]: time="2024-05-01T04:51:23.322679921Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	May 01 04:51:23 kubernetes-upgrade-195400 dockerd[1128]: time="2024-05-01T04:51:23.322714120Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	May 01 04:51:24 kubernetes-upgrade-195400 systemd[1]: docker.service: Deactivated successfully.
	May 01 04:51:24 kubernetes-upgrade-195400 systemd[1]: Stopped Docker Application Container Engine.
	May 01 04:51:24 kubernetes-upgrade-195400 systemd[1]: Starting Docker Application Container Engine...
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:51:24.407789758Z" level=info msg="Starting up"
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:51:24.409499257Z" level=info msg="containerd not running, starting managed containerd"
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:51:24.413531855Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1548
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.447762137Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.480301720Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.480447820Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.480505020Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.480523520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.480556820Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.480571120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.480875520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.480977020Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.481000920Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.481014520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.481044220Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.481264720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.484750318Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.484899518Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.485276818Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.485404218Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.485456518Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.485494918Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.485515618Z" level=info msg="metadata content store policy set" policy=shared
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.485974318Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.486037317Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.486059217Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.486077817Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.486095617Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.486157717Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.486405817Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.486558717Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.486779117Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.486909017Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.486956417Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487081817Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487103517Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487123117Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487140217Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487155517Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487170317Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487183417Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487206017Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487221817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487239617Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487256817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487271917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487287517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487301117Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487315617Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487330317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487347717Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487362417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487375817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487390117Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487407717Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487431317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487447017Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487464617Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487609717Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487847517Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487866817Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487895317Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487965416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.488005916Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.488020216Z" level=info msg="NRI interface is disabled by configuration."
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.488290216Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.488786116Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.488901316Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.488947916Z" level=info msg="containerd successfully booted in 0.043088s"
	May 01 04:51:25 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:51:25.460814712Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	May 01 04:51:26 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:51:26.441518504Z" level=info msg="Loading containers: start."
	May 01 04:51:26 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:51:26.746797846Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	May 01 04:51:26 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:51:26.843216196Z" level=info msg="Loading containers: done."
	May 01 04:51:26 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:51:26.871866881Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	May 01 04:51:26 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:51:26.872160181Z" level=info msg="Daemon has completed initialization"
	May 01 04:51:26 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:51:26.932792949Z" level=info msg="API listen on /var/run/docker.sock"
	May 01 04:51:26 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:51:26.932894949Z" level=info msg="API listen on [::]:2376"
	May 01 04:51:26 kubernetes-upgrade-195400 systemd[1]: Started Docker Application Container Engine.
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.413186729Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.414235684Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.414546271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.415078249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.424127568Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.424610747Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.424794440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.425193723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.542976762Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.543400544Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.545839841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.546397518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.577947289Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.578034185Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.578048784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.578159080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.894754445Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.894994034Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.895186926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.895387118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:34 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:34.087263859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:51:34 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:34.087360555Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:51:34 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:34.087381154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:34 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:34.087506249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:34 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:34.176019851Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:51:34 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:34.176225443Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:51:34 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:34.176496032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:34 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:34.176787121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:34 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:34.187390202Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:51:34 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:34.188528257Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:51:34 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:34.188594754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:34 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:34.188768047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:39 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:39.128344007Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:51:39 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:39.129249380Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:51:39 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:39.129484474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:39 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:39.129738966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:39 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:39.197014331Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:51:39 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:39.197313923Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:51:39 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:39.197801709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:39 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:39.198518588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:39 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:39.240403683Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:51:39 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:39.240610677Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:51:39 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:39.240816371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:39 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:39.241779844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:39 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:39.879119437Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:51:39 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:39.879206734Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:51:39 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:39.879239133Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:39 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:39.879390529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:40 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:40.037846206Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:51:40 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:40.038017206Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:51:40 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:40.038129306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:40 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:40.038431606Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:40 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:40.177136366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:51:40 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:40.178119766Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:51:40 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:40.178320766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:40 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:40.179146567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:52 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:51:52.319143874Z" level=info msg="ignoring event" container=a702668cb1d99edf14c8b41226934cd835dc40912e2587fb90bd74fd6bc1a56a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:51:52 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:52.321379575Z" level=info msg="shim disconnected" id=a702668cb1d99edf14c8b41226934cd835dc40912e2587fb90bd74fd6bc1a56a namespace=moby
	May 01 04:51:52 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:52.321488875Z" level=warning msg="cleaning up after shim disconnected" id=a702668cb1d99edf14c8b41226934cd835dc40912e2587fb90bd74fd6bc1a56a namespace=moby
	May 01 04:51:52 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:52.321504075Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:51:52 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:51:52.495136752Z" level=info msg="ignoring event" container=76f131fe4f91537b2024fd5ba4f9289632c35b24212599f4a4a2668b2d3a3396 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:51:52 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:52.495289052Z" level=info msg="shim disconnected" id=76f131fe4f91537b2024fd5ba4f9289632c35b24212599f4a4a2668b2d3a3396 namespace=moby
	May 01 04:51:52 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:52.495438452Z" level=warning msg="cleaning up after shim disconnected" id=76f131fe4f91537b2024fd5ba4f9289632c35b24212599f4a4a2668b2d3a3396 namespace=moby
	May 01 04:51:52 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:52.495517952Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:51:53 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:53.013242982Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:51:53 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:53.013356882Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:51:53 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:53.013378582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:53 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:53.013491682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:53 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:53.448309876Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:51:53 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:53.448587876Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:51:53 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:53.448614576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:53 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:53.449788876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:53 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:53.706557891Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:51:53 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:53.706871191Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:51:53 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:53.707251391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:53 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:53.707594791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:53 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:53.847005754Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:51:53 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:53.847714754Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:51:53 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:53.847764954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:53 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:53.847899454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:57 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:51:57.316030045Z" level=info msg="ignoring event" container=ba52d8cc065e5df5505bf819ff9b9e1d4f0479d0c5cbd06a90067baf3f4f792e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:51:57 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:57.318951753Z" level=info msg="shim disconnected" id=ba52d8cc065e5df5505bf819ff9b9e1d4f0479d0c5cbd06a90067baf3f4f792e namespace=moby
	May 01 04:51:57 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:57.319103754Z" level=warning msg="cleaning up after shim disconnected" id=ba52d8cc065e5df5505bf819ff9b9e1d4f0479d0c5cbd06a90067baf3f4f792e namespace=moby
	May 01 04:51:57 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:57.319127354Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:51:57 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:57.501270776Z" level=info msg="shim disconnected" id=f8510ef59edf8760427113183218416e1f1af14e46d6086123e80bcc0f19a16b namespace=moby
	May 01 04:51:57 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:57.501439176Z" level=warning msg="cleaning up after shim disconnected" id=f8510ef59edf8760427113183218416e1f1af14e46d6086123e80bcc0f19a16b namespace=moby
	May 01 04:51:57 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:57.501465976Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:51:57 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:51:57.503202381Z" level=info msg="ignoring event" container=f8510ef59edf8760427113183218416e1f1af14e46d6086123e80bcc0f19a16b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:52:10 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:52:10.217053199Z" level=info msg="ignoring event" container=002af6c61dad38fdf11efa2b94434473c56d8c09754dde182d8b66817f424c45 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:52:10 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:52:10.217810700Z" level=info msg="shim disconnected" id=002af6c61dad38fdf11efa2b94434473c56d8c09754dde182d8b66817f424c45 namespace=moby
	May 01 04:52:10 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:52:10.218203700Z" level=warning msg="cleaning up after shim disconnected" id=002af6c61dad38fdf11efa2b94434473c56d8c09754dde182d8b66817f424c45 namespace=moby
	May 01 04:52:10 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:52:10.218436801Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:52:23 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:52:23.725718036Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:52:23 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:52:23.727251739Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:52:23 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:52:23.727496339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:52:23 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:52:23.727823439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:53:50 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:53:50.826713206Z" level=info msg="ignoring event" container=8b879eb35076792cbf7068b9185b9095b5872d9df0822d9163c94abc91c282c6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:53:50 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:53:50.829354210Z" level=info msg="shim disconnected" id=8b879eb35076792cbf7068b9185b9095b5872d9df0822d9163c94abc91c282c6 namespace=moby
	May 01 04:53:50 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:53:50.829586110Z" level=warning msg="cleaning up after shim disconnected" id=8b879eb35076792cbf7068b9185b9095b5872d9df0822d9163c94abc91c282c6 namespace=moby
	May 01 04:53:50 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:53:50.829618410Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:53:51 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:53:51.140540023Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:53:51 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:53:51.144227528Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:53:51 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:53:51.144466028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:53:51 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:53:51.145094629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:55:40 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:55:40.302862066Z" level=info msg="ignoring event" container=ad847c495985ffc62c24a8a880e840cab0a500eb63d342a76d14baae863a2082 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:55:40 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:55:40.303441167Z" level=info msg="shim disconnected" id=ad847c495985ffc62c24a8a880e840cab0a500eb63d342a76d14baae863a2082 namespace=moby
	May 01 04:55:40 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:55:40.303934667Z" level=warning msg="cleaning up after shim disconnected" id=ad847c495985ffc62c24a8a880e840cab0a500eb63d342a76d14baae863a2082 namespace=moby
	May 01 04:55:40 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:55:40.304005467Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:55:40 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:55:40.539544252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:55:40 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:55:40.539685752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:55:40 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:55:40.539908752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:55:40 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:55:40.540327953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:57:30 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:57:30.773728874Z" level=info msg="shim disconnected" id=5078cbc153bd038685f6e4a7b53c9f40ad1defbcabe87cd81c12a214a66d8e1a namespace=moby
	May 01 04:57:30 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:57:30.773845074Z" level=warning msg="cleaning up after shim disconnected" id=5078cbc153bd038685f6e4a7b53c9f40ad1defbcabe87cd81c12a214a66d8e1a namespace=moby
	May 01 04:57:30 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:57:30.773869274Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:57:30 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:57:30.774610675Z" level=info msg="ignoring event" container=5078cbc153bd038685f6e4a7b53c9f40ad1defbcabe87cd81c12a214a66d8e1a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:57:31 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:57:31.021150449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:57:31 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:57:31.021262949Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:57:31 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:57:31.021284349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:57:31 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:57:31.021391949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:58:44 kubernetes-upgrade-195400 systemd[1]: Stopping Docker Application Container Engine...
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:44.627843811Z" level=info msg="Processing signal 'terminated'"
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:44.897162901Z" level=info msg="ignoring event" container=b5117a7b7f02db6847aa9ccd848b816ab792209e8a9ce11cc9ad89c01f863aba module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.897513901Z" level=info msg="shim disconnected" id=b5117a7b7f02db6847aa9ccd848b816ab792209e8a9ce11cc9ad89c01f863aba namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.898294202Z" level=warning msg="cleaning up after shim disconnected" id=b5117a7b7f02db6847aa9ccd848b816ab792209e8a9ce11cc9ad89c01f863aba namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:44.899367102Z" level=info msg="ignoring event" container=44373db87f42bafe57d51ebf6f495bae909356395959fdad2cd9d92e6aa022ed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.900821204Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.899870203Z" level=info msg="shim disconnected" id=44373db87f42bafe57d51ebf6f495bae909356395959fdad2cd9d92e6aa022ed namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.902192404Z" level=warning msg="cleaning up after shim disconnected" id=44373db87f42bafe57d51ebf6f495bae909356395959fdad2cd9d92e6aa022ed namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.902343805Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:44.908141309Z" level=info msg="ignoring event" container=099a9cab2b43676db5a6c3a7547a0a37cb91aaa0d8c7d9493cc49485e74cf4f3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.908581809Z" level=info msg="shim disconnected" id=099a9cab2b43676db5a6c3a7547a0a37cb91aaa0d8c7d9493cc49485e74cf4f3 namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.909757610Z" level=warning msg="cleaning up after shim disconnected" id=099a9cab2b43676db5a6c3a7547a0a37cb91aaa0d8c7d9493cc49485e74cf4f3 namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.909868210Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.912899312Z" level=info msg="shim disconnected" id=3e46b7508aa0fe3f5b71848d9d3af88c939caa79a6bddf468ebfd87c3bf42031 namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.913016012Z" level=warning msg="cleaning up after shim disconnected" id=3e46b7508aa0fe3f5b71848d9d3af88c939caa79a6bddf468ebfd87c3bf42031 namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.913071912Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:44.913308012Z" level=info msg="ignoring event" container=3e46b7508aa0fe3f5b71848d9d3af88c939caa79a6bddf468ebfd87c3bf42031 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:44.927245222Z" level=info msg="ignoring event" container=2a6f3d078ffd0c400477de8f151e16f5998a0af3b07ff8d28d625e3be1812012 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.932102326Z" level=info msg="shim disconnected" id=2a6f3d078ffd0c400477de8f151e16f5998a0af3b07ff8d28d625e3be1812012 namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.932166426Z" level=warning msg="cleaning up after shim disconnected" id=2a6f3d078ffd0c400477de8f151e16f5998a0af3b07ff8d28d625e3be1812012 namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.932177826Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.947662136Z" level=info msg="shim disconnected" id=582ab6a8f5d222a955e55ae3bc812564c286e9c73381b0352c0187792261ea13 namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.947851637Z" level=warning msg="cleaning up after shim disconnected" id=582ab6a8f5d222a955e55ae3bc812564c286e9c73381b0352c0187792261ea13 namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.947991237Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.959224745Z" level=info msg="shim disconnected" id=3d06df12a9bcd17ff10b5b61d78aba629d1433d3d24f0cf8615f5162fbd31247 namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.959700845Z" level=warning msg="cleaning up after shim disconnected" id=3d06df12a9bcd17ff10b5b61d78aba629d1433d3d24f0cf8615f5162fbd31247 namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.959887445Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.980052959Z" level=info msg="shim disconnected" id=f3f3f3452164964cd1db3e00ff78d6a5ca0ce6593a41e92b5f993fe749aade1a namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.980291559Z" level=warning msg="cleaning up after shim disconnected" id=f3f3f3452164964cd1db3e00ff78d6a5ca0ce6593a41e92b5f993fe749aade1a namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.980364159Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:44.989875866Z" level=info msg="ignoring event" container=582ab6a8f5d222a955e55ae3bc812564c286e9c73381b0352c0187792261ea13 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:44.989960566Z" level=info msg="ignoring event" container=c2357be2231361364fce76ff51b4ae9d1131f6fe78b72703b147c20b015a06de module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:44.990003666Z" level=info msg="ignoring event" container=f3f3f3452164964cd1db3e00ff78d6a5ca0ce6593a41e92b5f993fe749aade1a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:44.990020766Z" level=info msg="ignoring event" container=3d06df12a9bcd17ff10b5b61d78aba629d1433d3d24f0cf8615f5162fbd31247 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.994298969Z" level=info msg="shim disconnected" id=c2357be2231361364fce76ff51b4ae9d1131f6fe78b72703b147c20b015a06de namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.995178470Z" level=warning msg="cleaning up after shim disconnected" id=c2357be2231361364fce76ff51b4ae9d1131f6fe78b72703b147c20b015a06de namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.995325270Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:58:45 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:45.016571985Z" level=info msg="ignoring event" container=b0968731e4aaed49195fa0c394c187045854e9195a1762c7914db8b8f1fd69db module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:58:45 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:45.017610086Z" level=info msg="ignoring event" container=4b2878dcc077d7cf0f29f72f1e01c9da4c4494d30c0effb89b34424561595156 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:58:45 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:45.019266687Z" level=info msg="shim disconnected" id=4b2878dcc077d7cf0f29f72f1e01c9da4c4494d30c0effb89b34424561595156 namespace=moby
	May 01 04:58:45 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:45.024832391Z" level=warning msg="cleaning up after shim disconnected" id=4b2878dcc077d7cf0f29f72f1e01c9da4c4494d30c0effb89b34424561595156 namespace=moby
	May 01 04:58:45 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:45.035972699Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:58:45 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:45.022186289Z" level=info msg="shim disconnected" id=b0968731e4aaed49195fa0c394c187045854e9195a1762c7914db8b8f1fd69db namespace=moby
	May 01 04:58:45 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:45.040723802Z" level=warning msg="cleaning up after shim disconnected" id=b0968731e4aaed49195fa0c394c187045854e9195a1762c7914db8b8f1fd69db namespace=moby
	May 01 04:58:45 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:45.040854102Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:58:45 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:45.172240194Z" level=info msg="ignoring event" container=59112f4b5921294cf7202582c474f5460c7942d87268034743b6069daa7b9c51 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:58:45 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:45.174360396Z" level=info msg="shim disconnected" id=59112f4b5921294cf7202582c474f5460c7942d87268034743b6069daa7b9c51 namespace=moby
	May 01 04:58:45 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:45.174623696Z" level=warning msg="cleaning up after shim disconnected" id=59112f4b5921294cf7202582c474f5460c7942d87268034743b6069daa7b9c51 namespace=moby
	May 01 04:58:45 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:45.174703096Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:58:45 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:45.206821619Z" level=warning msg="cleanup warnings time=\"2024-05-01T04:58:45Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	May 01 04:58:49 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:49.794608347Z" level=info msg="ignoring event" container=c749b700214b51577cd07fc80e2b035918fd9fc4db94292bcdda73988f7b3145 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:58:49 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:49.794869547Z" level=info msg="shim disconnected" id=c749b700214b51577cd07fc80e2b035918fd9fc4db94292bcdda73988f7b3145 namespace=moby
	May 01 04:58:49 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:49.794927047Z" level=warning msg="cleaning up after shim disconnected" id=c749b700214b51577cd07fc80e2b035918fd9fc4db94292bcdda73988f7b3145 namespace=moby
	May 01 04:58:49 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:49.794937947Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:58:54 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:54.741102948Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=21b5ea540078fc55d925b1f77d5e5bf9d9cf8a14877bd60d798c61ff4ebaa3e6
	May 01 04:58:54 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:54.791622809Z" level=info msg="ignoring event" container=21b5ea540078fc55d925b1f77d5e5bf9d9cf8a14877bd60d798c61ff4ebaa3e6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:58:54 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:54.792569753Z" level=info msg="shim disconnected" id=21b5ea540078fc55d925b1f77d5e5bf9d9cf8a14877bd60d798c61ff4ebaa3e6 namespace=moby
	May 01 04:58:54 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:54.792713060Z" level=warning msg="cleaning up after shim disconnected" id=21b5ea540078fc55d925b1f77d5e5bf9d9cf8a14877bd60d798c61ff4ebaa3e6 namespace=moby
	May 01 04:58:54 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:54.792893468Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:58:54 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:54.879540419Z" level=info msg="Daemon shutdown complete"
	May 01 04:58:54 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:54.879629923Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	May 01 04:58:54 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:54.879801131Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	May 01 04:58:54 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:54.879848133Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	May 01 04:58:55 kubernetes-upgrade-195400 systemd[1]: docker.service: Deactivated successfully.
	May 01 04:58:55 kubernetes-upgrade-195400 systemd[1]: Stopped Docker Application Container Engine.
	May 01 04:58:55 kubernetes-upgrade-195400 systemd[1]: docker.service: Consumed 13.841s CPU time.
	May 01 04:58:55 kubernetes-upgrade-195400 systemd[1]: Starting Docker Application Container Engine...
	May 01 04:58:55 kubernetes-upgrade-195400 dockerd[5745]: time="2024-05-01T04:58:55.976161779Z" level=info msg="Starting up"
	May 01 04:59:56 kubernetes-upgrade-195400 dockerd[5745]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	May 01 04:59:56 kubernetes-upgrade-195400 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	May 01 04:59:56 kubernetes-upgrade-195400 systemd[1]: docker.service: Failed with result 'exit-code'.
	May 01 04:59:56 kubernetes-upgrade-195400 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0501 04:59:56.099571    6468 out.go:239] * 
	* 
	W0501 04:59:56.101942    6468 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0501 04:59:56.106522    6468 out.go:177] 

** /stderr **
version_upgrade_test.go:277: start after failed upgrade: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-195400 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=hyperv: exit status 90
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-05-01 04:59:56.6147961 +0000 UTC m=+10260.957534101
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-195400 -n kubernetes-upgrade-195400
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-195400 -n kubernetes-upgrade-195400: exit status 2 (12.2634081s)

-- stdout --
	Running

-- /stdout --
** stderr ** 
	W0501 04:59:56.754416    5040 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-195400 logs -n 25
E0501 05:01:35.023303   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p kubernetes-upgrade-195400 logs -n 25: (2m48.229332s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| Command |               Args               |          Profile          |       User        | Version |     Start Time      |      End Time       |
	|---------|----------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | -p cilium-406600 sudo            | cilium-406600             | minikube6\jenkins | v1.33.0 | 01 May 24 04:37 UTC |                     |
	|         | systemctl cat crio --no-pager    |                           |                   |         |                     |                     |
	| ssh     | -p cilium-406600 sudo find       | cilium-406600             | minikube6\jenkins | v1.33.0 | 01 May 24 04:37 UTC |                     |
	|         | /etc/crio -type f -exec sh -c    |                           |                   |         |                     |                     |
	|         | 'echo {}; cat {}' \;             |                           |                   |         |                     |                     |
	| ssh     | -p cilium-406600 sudo crio       | cilium-406600             | minikube6\jenkins | v1.33.0 | 01 May 24 04:37 UTC |                     |
	|         | config                           |                           |                   |         |                     |                     |
	| delete  | -p cilium-406600                 | cilium-406600             | minikube6\jenkins | v1.33.0 | 01 May 24 04:37 UTC | 01 May 24 04:37 UTC |
	| start   | -p docker-flags-390200           | docker-flags-390200       | minikube6\jenkins | v1.33.0 | 01 May 24 04:37 UTC | 01 May 24 04:47 UTC |
	|         | --cache-images=false             |                           |                   |         |                     |                     |
	|         | --memory=2048                    |                           |                   |         |                     |                     |
	|         | --install-addons=false           |                           |                   |         |                     |                     |
	|         | --wait=false                     |                           |                   |         |                     |                     |
	|         | --docker-env=FOO=BAR             |                           |                   |         |                     |                     |
	|         | --docker-env=BAZ=BAT             |                           |                   |         |                     |                     |
	|         | --docker-opt=debug               |                           |                   |         |                     |                     |
	|         | --docker-opt=icc=true            |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=5           |                           |                   |         |                     |                     |
	|         | --driver=hyperv                  |                           |                   |         |                     |                     |
	| delete  | -p offline-docker-120700         | offline-docker-120700     | minikube6\jenkins | v1.33.0 | 01 May 24 04:41 UTC | 01 May 24 04:42 UTC |
	| start   | -p force-systemd-env-005100      | force-systemd-env-005100  | minikube6\jenkins | v1.33.0 | 01 May 24 04:42 UTC | 01 May 24 04:50 UTC |
	|         | --memory=2048                    |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=5           |                           |                   |         |                     |                     |
	|         | --driver=hyperv                  |                           |                   |         |                     |                     |
	| stop    | -p kubernetes-upgrade-195400     | kubernetes-upgrade-195400 | minikube6\jenkins | v1.33.0 | 01 May 24 04:43 UTC | 01 May 24 04:44 UTC |
	| start   | -p kubernetes-upgrade-195400     | kubernetes-upgrade-195400 | minikube6\jenkins | v1.33.0 | 01 May 24 04:44 UTC | 01 May 24 04:51 UTC |
	|         | --memory=2200                    |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.0     |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1           |                           |                   |         |                     |                     |
	|         | --driver=hyperv                  |                           |                   |         |                     |                     |
	| stop    | stopped-upgrade-120700 stop      | minikube                  | minikube6\jenkins | v1.26.0 | 01 May 24 04:45 GMT | 01 May 24 04:46 GMT |
	| start   | -p stopped-upgrade-120700        | stopped-upgrade-120700    | minikube6\jenkins | v1.33.0 | 01 May 24 04:46 UTC | 01 May 24 04:53 UTC |
	|         | --memory=2200                    |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1           |                           |                   |         |                     |                     |
	|         | --driver=hyperv                  |                           |                   |         |                     |                     |
	| ssh     | docker-flags-390200 ssh          | docker-flags-390200       | minikube6\jenkins | v1.33.0 | 01 May 24 04:47 UTC | 01 May 24 04:47 UTC |
	|         | sudo systemctl show docker       |                           |                   |         |                     |                     |
	|         | --property=Environment           |                           |                   |         |                     |                     |
	|         | --no-pager                       |                           |                   |         |                     |                     |
	| ssh     | docker-flags-390200 ssh          | docker-flags-390200       | minikube6\jenkins | v1.33.0 | 01 May 24 04:47 UTC | 01 May 24 04:48 UTC |
	|         | sudo systemctl show docker       |                           |                   |         |                     |                     |
	|         | --property=ExecStart             |                           |                   |         |                     |                     |
	|         | --no-pager                       |                           |                   |         |                     |                     |
	| delete  | -p docker-flags-390200           | docker-flags-390200       | minikube6\jenkins | v1.33.0 | 01 May 24 04:48 UTC | 01 May 24 04:48 UTC |
	| start   | -p force-systemd-flag-122500     | force-systemd-flag-122500 | minikube6\jenkins | v1.33.0 | 01 May 24 04:48 UTC | 01 May 24 04:56 UTC |
	|         | --memory=2048 --force-systemd    |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=5           |                           |                   |         |                     |                     |
	|         | --driver=hyperv                  |                           |                   |         |                     |                     |
	| ssh     | force-systemd-env-005100         | force-systemd-env-005100  | minikube6\jenkins | v1.33.0 | 01 May 24 04:50 UTC | 01 May 24 04:50 UTC |
	|         | ssh docker info --format         |                           |                   |         |                     |                     |
	|         | {{.CgroupDriver}}                |                           |                   |         |                     |                     |
	| delete  | -p force-systemd-env-005100      | force-systemd-env-005100  | minikube6\jenkins | v1.33.0 | 01 May 24 04:50 UTC | 01 May 24 04:51 UTC |
	| start   | -p cert-expiration-386600        | cert-expiration-386600    | minikube6\jenkins | v1.33.0 | 01 May 24 04:51 UTC | 01 May 24 04:58 UTC |
	|         | --memory=2048                    |                           |                   |         |                     |                     |
	|         | --cert-expiration=3m             |                           |                   |         |                     |                     |
	|         | --driver=hyperv                  |                           |                   |         |                     |                     |
	| start   | -p kubernetes-upgrade-195400     | kubernetes-upgrade-195400 | minikube6\jenkins | v1.33.0 | 01 May 24 04:51 UTC |                     |
	|         | --memory=2200                    |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0     |                           |                   |         |                     |                     |
	|         | --driver=hyperv                  |                           |                   |         |                     |                     |
	| start   | -p kubernetes-upgrade-195400     | kubernetes-upgrade-195400 | minikube6\jenkins | v1.33.0 | 01 May 24 04:51 UTC |                     |
	|         | --memory=2200                    |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.0     |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1           |                           |                   |         |                     |                     |
	|         | --driver=hyperv                  |                           |                   |         |                     |                     |
	| delete  | -p stopped-upgrade-120700        | stopped-upgrade-120700    | minikube6\jenkins | v1.33.0 | 01 May 24 04:53 UTC | 01 May 24 04:54 UTC |
	| start   | -p cert-options-374100           | cert-options-374100       | minikube6\jenkins | v1.33.0 | 01 May 24 04:54 UTC |                     |
	|         | --memory=2048                    |                           |                   |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1        |                           |                   |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15    |                           |                   |         |                     |                     |
	|         | --apiserver-names=localhost      |                           |                   |         |                     |                     |
	|         | --apiserver-names=www.google.com |                           |                   |         |                     |                     |
	|         | --apiserver-port=8555            |                           |                   |         |                     |                     |
	|         | --driver=hyperv                  |                           |                   |         |                     |                     |
	| ssh     | force-systemd-flag-122500        | force-systemd-flag-122500 | minikube6\jenkins | v1.33.0 | 01 May 24 04:56 UTC | 01 May 24 04:56 UTC |
	|         | ssh docker info --format         |                           |                   |         |                     |                     |
	|         | {{.CgroupDriver}}                |                           |                   |         |                     |                     |
	| delete  | -p force-systemd-flag-122500     | force-systemd-flag-122500 | minikube6\jenkins | v1.33.0 | 01 May 24 04:56 UTC |                     |
	| start   | -p running-upgrade-449000        | minikube                  | minikube6\jenkins | v1.26.0 | 01 May 24 04:58 GMT |                     |
	|         | --memory=2200                    |                           |                   |         |                     |                     |
	|         | --vm-driver=hyperv               |                           |                   |         |                     |                     |
	|---------|----------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/01 04:58:19
	Running on machine: minikube6
	Binary: Built with gc go1.18.3 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0501 04:58:19.389246    3424 out.go:296] Setting OutFile to fd 1772 ...
	I0501 04:58:19.486812    3424 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0501 04:58:19.486812    3424 out.go:309] Setting ErrFile to fd 1848...
	I0501 04:58:19.486812    3424 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0501 04:58:19.519681    3424 out.go:303] Setting JSON to false
	I0501 04:58:19.522672    3424 start.go:115] hostinfo: {"hostname":"minikube6","uptime":112553,"bootTime":1714426946,"procs":201,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045 Build 19045","kernelVersion":"10.0.19045 Build 19045","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0501 04:58:19.522672    3424 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0501 04:58:19.527664    3424 out.go:177] * [running-upgrade-449000] minikube v1.26.0 on Microsoft Windows 10 Enterprise N 10.0.19045 Build 19045
	I0501 04:58:19.531671    3424 notify.go:193] Checking for updates...
	I0501 04:58:19.537678    3424 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0501 04:58:19.543691    3424 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0501 04:58:19.545677    3424 out.go:177]   - MINIKUBE_LOCATION=18779
	I0501 04:58:19.548686    3424 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0501 04:58:19.553700    3424 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\AppData\Local\Temp\legacy_kubeconfig1857007689
	I0501 04:58:18.753023    6468 main.go:141] libmachine: [stdout =====>] : 172.28.213.192
	
	I0501 04:58:18.753299    6468 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:58:18.753299    6468 sshutil.go:53] new ssh client: &{IP:172.28.213.192 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\kubernetes-upgrade-195400\id_rsa Username:docker}
	I0501 04:58:18.875315    6468 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.1046863s)
	I0501 04:58:18.893481    6468 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 04:58:18.901035    6468 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 04:58:18.901035    6468 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0501 04:58:18.901035    6468 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0501 04:58:18.901975    6468 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem -> 142882.pem in /etc/ssl/certs
	I0501 04:58:18.916856    6468 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0501 04:58:18.936576    6468 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\142882.pem --> /etc/ssl/certs/142882.pem (1708 bytes)
	I0501 04:58:18.997168    6468 start.go:296] duration metric: took 5.2459505s for postStartSetup
	I0501 04:58:18.997709    6468 fix.go:56] duration metric: took 52.848247s for fixHost
	I0501 04:58:18.997884    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-195400 ).state
	I0501 04:58:21.300038    6468 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:58:21.300903    6468 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:58:21.301180    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-195400 ).networkadapters[0]).ipaddresses[0]
	I0501 04:58:23.597689    6036 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0501 04:58:23.597689    6036 kubeadm.go:309] [preflight] Running pre-flight checks
	I0501 04:58:23.597689    6036 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0501 04:58:23.598231    6036 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0501 04:58:23.598568    6036 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0501 04:58:23.598568    6036 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0501 04:58:23.607588    6036 out.go:204]   - Generating certificates and keys ...
	I0501 04:58:23.608499    6036 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0501 04:58:23.608499    6036 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0501 04:58:23.608499    6036 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0501 04:58:23.608499    6036 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0501 04:58:23.608499    6036 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0501 04:58:23.608499    6036 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0501 04:58:23.608499    6036 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0501 04:58:23.608499    6036 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [cert-expiration-386600 localhost] and IPs [172.28.223.149 127.0.0.1 ::1]
	I0501 04:58:23.609815    6036 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0501 04:58:23.609815    6036 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [cert-expiration-386600 localhost] and IPs [172.28.223.149 127.0.0.1 ::1]
	I0501 04:58:23.609815    6036 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0501 04:58:23.609815    6036 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0501 04:58:23.609815    6036 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0501 04:58:23.609815    6036 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0501 04:58:23.609815    6036 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0501 04:58:23.609815    6036 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0501 04:58:23.610841    6036 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0501 04:58:23.610841    6036 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0501 04:58:23.610841    6036 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0501 04:58:23.610841    6036 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0501 04:58:23.610841    6036 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0501 04:58:23.615840    6036 out.go:204]   - Booting up control plane ...
	I0501 04:58:23.616754    6036 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0501 04:58:23.616754    6036 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0501 04:58:23.616754    6036 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0501 04:58:23.616754    6036 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0501 04:58:23.616754    6036 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0501 04:58:23.616754    6036 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0501 04:58:23.616754    6036 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0501 04:58:23.617781    6036 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0501 04:58:23.617781    6036 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002013204s
	I0501 04:58:23.617781    6036 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0501 04:58:23.617781    6036 kubeadm.go:309] [api-check] The API server is healthy after 16.502417804s
	I0501 04:58:23.617781    6036 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0501 04:58:23.618745    6036 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0501 04:58:23.618745    6036 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0501 04:58:23.618745    6036 kubeadm.go:309] [mark-control-plane] Marking the node cert-expiration-386600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0501 04:58:23.618745    6036 kubeadm.go:309] [bootstrap-token] Using token: dkj57q.ol5sxtp1lie7wjtp
	I0501 04:58:23.622852    6036 out.go:204]   - Configuring RBAC rules ...
	I0501 04:58:23.622852    6036 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0501 04:58:23.623144    6036 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0501 04:58:23.623144    6036 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0501 04:58:23.623144    6036 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0501 04:58:23.623144    6036 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0501 04:58:23.624145    6036 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0501 04:58:23.624145    6036 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0501 04:58:23.624145    6036 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0501 04:58:23.624145    6036 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0501 04:58:23.624145    6036 kubeadm.go:309] 
	I0501 04:58:23.624145    6036 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0501 04:58:23.624145    6036 kubeadm.go:309] 
	I0501 04:58:23.624145    6036 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0501 04:58:23.624145    6036 kubeadm.go:309] 
	I0501 04:58:23.624145    6036 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0501 04:58:23.625162    6036 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0501 04:58:23.625162    6036 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0501 04:58:23.625162    6036 kubeadm.go:309] 
	I0501 04:58:23.625162    6036 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0501 04:58:23.625162    6036 kubeadm.go:309] 
	I0501 04:58:23.625162    6036 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0501 04:58:23.625162    6036 kubeadm.go:309] 
	I0501 04:58:23.625162    6036 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0501 04:58:23.625162    6036 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0501 04:58:23.625162    6036 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0501 04:58:23.625162    6036 kubeadm.go:309] 
	I0501 04:58:23.625162    6036 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0501 04:58:23.626157    6036 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0501 04:58:23.626157    6036 kubeadm.go:309] 
	I0501 04:58:23.626157    6036 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token dkj57q.ol5sxtp1lie7wjtp \
	I0501 04:58:23.626157    6036 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:4798dcaffc6298c78af5ad06745de18c1231853c65c9db4cd09ab1be96f5e875 \
	I0501 04:58:23.626157    6036 kubeadm.go:309] 	--control-plane 
	I0501 04:58:23.626157    6036 kubeadm.go:309] 
	I0501 04:58:23.626157    6036 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0501 04:58:23.626157    6036 kubeadm.go:309] 
	I0501 04:58:23.626157    6036 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token dkj57q.ol5sxtp1lie7wjtp \
	I0501 04:58:23.627159    6036 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:4798dcaffc6298c78af5ad06745de18c1231853c65c9db4cd09ab1be96f5e875 
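The --discovery-token-ca-cert-hash that kubeadm prints above is the SHA-256 of the cluster CA certificate's DER-encoded SubjectPublicKeyInfo. A minimal Go sketch that recomputes it is below; this is a hypothetical helper, not part of this test suite, and the certificate path is assumed from the certificateDir logged earlier.

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path assumed from the "[certs] Using certificateDir folder" line above.
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}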
	I0501 04:58:23.627159    6036 cni.go:84] Creating CNI manager for ""
	I0501 04:58:23.627159    6036 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0501 04:58:23.631195    6036 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0501 04:58:23.653364    6036 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0501 04:58:23.677763    6036 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
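The 496-byte file scp'd above is the bridge CNI config that the "* Configuring bridge CNI" step writes. Its exact contents are not shown in this log; the sketch below is only a plausible shape, and every field value is an assumption except the 10.244.0.0/16 pod subnet, which matches the sed rewrites that appear later in this log.

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Assumed shape of a minimal /etc/cni/net.d/1-k8s.conflist.
	conflist := map[string]any{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":      "bridge",
				"bridge":    "bridge",
				"isGateway": true,
				"ipMasq":    true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16", // matches the sed rewrites later in this log
				},
			},
			{
				"type":         "portmap",
				"capabilities": map[string]any{"portMappings": true},
			},
		},
	}
	out, _ := json.MarshalIndent(conflist, "", "  ")
	fmt.Println(string(out))
}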
	I0501 04:58:23.718646    6036 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0501 04:58:23.734793    6036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes cert-expiration-386600 minikube.k8s.io/updated_at=2024_05_01T04_58_23_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2c4eae41cda912e6a762d77f0d8868e00f97bb4e minikube.k8s.io/name=cert-expiration-386600 minikube.k8s.io/primary=true
	I0501 04:58:23.735705    6036 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0501 04:58:23.751105    6036 ops.go:34] apiserver oom_adj: -16
	I0501 04:58:24.350152    6036 kubeadm.go:1107] duration metric: took 631.5012ms to wait for elevateKubeSystemPrivileges
	W0501 04:58:24.350273    6036 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0501 04:58:24.350273    6036 kubeadm.go:393] duration metric: took 24.4596125s to StartCluster
	I0501 04:58:24.350273    6036 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 04:58:24.350413    6036 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0501 04:58:24.352283    6036 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 04:58:24.354137    6036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0501 04:58:24.354137    6036 start.go:234] Will wait 6m0s for node &{Name: IP:172.28.223.149 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0501 04:58:24.358875    6036 out.go:177] * Verifying Kubernetes components...
	I0501 04:58:24.354137    6036 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0501 04:58:19.557685    3424 config.go:178] Loaded profile config "cert-expiration-386600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 04:58:19.558691    3424 config.go:178] Loaded profile config "cert-options-374100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 04:58:19.558691    3424 config.go:178] Loaded profile config "force-systemd-flag-122500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 04:58:19.559684    3424 config.go:178] Loaded profile config "kubernetes-upgrade-195400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 04:58:19.559684    3424 driver.go:360] Setting default libvirt URI to qemu:///system
	I0501 04:58:24.355192    6036 config.go:182] Loaded profile config "cert-expiration-386600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 04:58:24.358875    6036 addons.go:69] Setting storage-provisioner=true in profile "cert-expiration-386600"
	I0501 04:58:24.358875    6036 addons.go:69] Setting default-storageclass=true in profile "cert-expiration-386600"
	I0501 04:58:24.361875    6036 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-386600"
	I0501 04:58:24.361875    6036 addons.go:234] Setting addon storage-provisioner=true in "cert-expiration-386600"
	I0501 04:58:24.361875    6036 host.go:66] Checking if "cert-expiration-386600" exists ...
	I0501 04:58:24.362874    6036 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-386600 ).state
	I0501 04:58:24.362874    6036 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-386600 ).state
	I0501 04:58:24.384883    6036 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 04:58:24.823848    6036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.28.208.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0501 04:58:25.017767    6036 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 04:58:25.547887    6036 start.go:946] {"host.minikube.internal": 172.28.208.1} host record injected into CoreDNS's ConfigMap
	I0501 04:58:25.554746    6036 api_server.go:52] waiting for apiserver process to appear ...
	I0501 04:58:25.581721    6036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 04:58:25.617004    6036 api_server.go:72] duration metric: took 1.2628581s to wait for apiserver process to appear ...
	I0501 04:58:25.617160    6036 api_server.go:88] waiting for apiserver healthz status ...
	I0501 04:58:25.617239    6036 api_server.go:253] Checking apiserver healthz at https://172.28.223.149:8443/healthz ...
	I0501 04:58:25.626365    6036 api_server.go:279] https://172.28.223.149:8443/healthz returned 200:
	ok
	I0501 04:58:25.629328    6036 api_server.go:141] control plane version: v1.30.0
	I0501 04:58:25.633028    6036 api_server.go:131] duration metric: took 12.2475ms to wait for apiserver health ...
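The api_server.go lines above poll the apiserver's /healthz endpoint until it returns 200 ("returned 200: ok" after 12.2475ms here, because the server was already up). A compressed sketch of that wait loop follows; the InsecureSkipVerify transport is a simplification for brevity, where minikube's real client trusts the cluster CA instead.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := client.Get(url); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	// Endpoint taken from the healthz URL logged above.
	fmt.Println(waitForHealthz("https://172.28.223.149:8443/healthz", time.Minute))
}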
	I0501 04:58:25.633099    6036 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 04:58:25.650480    6036 system_pods.go:59] 4 kube-system pods found
	I0501 04:58:25.650480    6036 system_pods.go:61] "etcd-cert-expiration-386600" [d2d1a577-55ab-4125-a099-08894c056e1c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0501 04:58:25.650480    6036 system_pods.go:61] "kube-apiserver-cert-expiration-386600" [40747408-53e0-4ed3-91f1-bb48caf337f3] Running
	I0501 04:58:25.650480    6036 system_pods.go:61] "kube-controller-manager-cert-expiration-386600" [c0830471-4cc0-44ea-9bc4-c0290b50db4f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0501 04:58:25.650480    6036 system_pods.go:61] "kube-scheduler-cert-expiration-386600" [6e2f7c44-25a4-4924-8c16-86a58f5ac814] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0501 04:58:25.650480    6036 system_pods.go:74] duration metric: took 17.2757ms to wait for pod list to return data ...
	I0501 04:58:25.650480    6036 kubeadm.go:576] duration metric: took 1.2963342s to wait for: map[apiserver:true system_pods:true]
	I0501 04:58:25.650480    6036 node_conditions.go:102] verifying NodePressure condition ...
	I0501 04:58:25.659414    6036 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 04:58:25.659414    6036 node_conditions.go:123] node cpu capacity is 2
	I0501 04:58:25.659414    6036 node_conditions.go:105] duration metric: took 8.9333ms to run NodePressure ...
	I0501 04:58:25.659414    6036 start.go:240] waiting for startup goroutines ...
	I0501 04:58:26.052657    3424 out.go:177] * Using the hyperv driver based on user configuration
	I0501 04:58:26.057370    3424 start.go:284] selected driver: hyperv
	I0501 04:58:26.057370    3424 start.go:805] validating driver "hyperv" against <nil>
	I0501 04:58:26.057464    3424 start.go:816] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0501 04:58:26.117941    3424 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0501 04:58:26.118674    3424 start_flags.go:835] Wait components to verify : map[apiserver:true system_pods:true]
	I0501 04:58:26.118790    3424 cni.go:95] Creating CNI manager for ""
	I0501 04:58:26.118790    3424 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0501 04:58:26.118790    3424 start_flags.go:310] config:
	{Name:running-upgrade-449000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-449000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0501 04:58:26.119354    3424 iso.go:128] acquiring lock: {Name:mk0beb692ba59e158dd6c07b69df398f36f9b972 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 04:58:26.127656    3424 out.go:177] * Starting control plane node running-upgrade-449000 in cluster running-upgrade-449000
	I0501 04:58:26.072474    6036 kapi.go:248] "coredns" deployment in "kube-system" namespace and "cert-expiration-386600" context rescaled to 1 replicas
	I0501 04:58:26.866132    6036 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:58:26.866132    6036 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:58:26.869741    6036 addons.go:234] Setting addon default-storageclass=true in "cert-expiration-386600"
	I0501 04:58:26.869854    6036 host.go:66] Checking if "cert-expiration-386600" exists ...
	I0501 04:58:26.870331    6036 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-386600 ).state
	I0501 04:58:26.904423    6036 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:58:26.904423    6036 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:58:26.909360    6036 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 04:58:24.082610    6468 main.go:141] libmachine: [stdout =====>] : 172.28.213.192
	
	I0501 04:58:24.082610    6468 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:58:24.089466    6468 main.go:141] libmachine: Using SSH client type: native
	I0501 04:58:24.089630    6468 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.192 22 <nil> <nil>}
	I0501 04:58:24.089630    6468 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0501 04:58:24.226744    6468 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714539504.227006498
	
	I0501 04:58:24.227459    6468 fix.go:216] guest clock: 1714539504.227006498
	I0501 04:58:24.227459    6468 fix.go:229] Guest: 2024-05-01 04:58:24.227006498 +0000 UTC Remote: 2024-05-01 04:58:18.9978528 +0000 UTC m=+387.104099801 (delta=5.229153698s)
	I0501 04:58:24.227459    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-195400 ).state
	I0501 04:58:26.729368    6468 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:58:26.729673    6468 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:58:26.729673    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-195400 ).networkadapters[0]).ipaddresses[0]
	I0501 04:58:26.129652    3424 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0501 04:58:26.129652    3424 preload.go:148] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.1-docker-overlay2-amd64.tar.lz4
	I0501 04:58:26.130661    3424 cache.go:57] Caching tarball of preloaded images
	I0501 04:58:26.131663    3424 preload.go:174] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0501 04:58:26.131663    3424 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.1 on docker
	I0501 04:58:26.132661    3424 profile.go:148] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\running-upgrade-449000\config.json ...
	I0501 04:58:26.132661    3424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\running-upgrade-449000\config.json: {Name:mk4d80d61137b1e2c86e24a8901a961abc5779d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 04:58:26.136671    3424 cache.go:208] Successfully downloaded all kic artifacts
	I0501 04:58:26.136671    3424 start.go:352] acquiring machines lock for running-upgrade-449000: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 04:58:29.689751   14140 start.go:364] duration metric: took 3m37.6056398s to acquireMachinesLock for "cert-options-374100"
	I0501 04:58:29.689840   14140 start.go:93] Provisioning new machine with config: &{Name:cert-options-374100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:cert-options-374100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8555 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0501 04:58:29.689840   14140 start.go:125] createHost starting for "" (driver="hyperv")
	I0501 04:58:26.911840    6036 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 04:58:26.911840    6036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0501 04:58:26.911840    6036 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-386600 ).state
	I0501 04:58:29.186678    6036 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:58:29.186678    6036 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:58:29.186829    6036 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0501 04:58:29.186829    6036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0501 04:58:29.186829    6036 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-386600 ).state
	I0501 04:58:29.306362    6036 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:58:29.306362    6036 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:58:29.306362    6036 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-386600 ).networkadapters[0]).ipaddresses[0]
	I0501 04:58:29.694817   14140 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0501 04:58:29.695352   14140 start.go:159] libmachine.API.Create for "cert-options-374100" (driver="hyperv")
	I0501 04:58:29.695352   14140 client.go:168] LocalClient.Create starting
	I0501 04:58:29.696797   14140 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0501 04:58:29.696998   14140 main.go:141] libmachine: Decoding PEM data...
	I0501 04:58:29.696998   14140 main.go:141] libmachine: Parsing certificate...
	I0501 04:58:29.696998   14140 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0501 04:58:29.697655   14140 main.go:141] libmachine: Decoding PEM data...
	I0501 04:58:29.697655   14140 main.go:141] libmachine: Parsing certificate...
	I0501 04:58:29.697655   14140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0501 04:58:29.519104    6468 main.go:141] libmachine: [stdout =====>] : 172.28.213.192
	
	I0501 04:58:29.520081    6468 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:58:29.526050    6468 main.go:141] libmachine: Using SSH client type: native
	I0501 04:58:29.526502    6468 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.213.192 22 <nil> <nil>}
	I0501 04:58:29.526617    6468 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1714539504
	I0501 04:58:29.689275    6468 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed May  1 04:58:24 UTC 2024
	
	I0501 04:58:29.689361    6468 fix.go:236] clock set: Wed May  1 04:58:24 UTC 2024
	 (err=<nil>)
	I0501 04:58:29.689361    6468 start.go:83] releasing machines lock for "kubernetes-upgrade-195400", held for 1m3.5405964s
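The fix.go lines above show the guest-clock repair: read the guest's clock over SSH with `date +%s.%N`, compute the skew against the host (delta=5.229153698s here), and reset the guest with `sudo date -s @<unix-seconds>`. Below is a self-contained sketch of that logic; the `run` callback is a hypothetical stand-in for minikube's ssh_runner, and the 2s threshold is an assumption.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// syncGuestClock reads the guest clock via run, and if the skew against
// the host exceeds maxSkew in either direction, sets the guest clock.
func syncGuestClock(run func(string) (string, error), maxSkew time.Duration) error {
	out, err := run("date +%s.%N")
	if err != nil {
		return err
	}
	secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
	if err != nil {
		return err
	}
	skew := time.Since(time.Unix(int64(secs), 0))
	if skew > maxSkew || skew < -maxSkew {
		_, err = run(fmt.Sprintf("sudo date -s @%d", time.Now().Unix()))
	}
	return err
}

func main() {
	// Fake runner that pretends the guest is 5s behind, roughly the
	// delta reported in the log above.
	fake := func(cmd string) (string, error) {
		if strings.HasPrefix(cmd, "date +") {
			return fmt.Sprintf("%d.0", time.Now().Add(-5*time.Second).Unix()), nil
		}
		return "", nil
	}
	fmt.Println(syncGuestClock(fake, 2*time.Second))
}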
	I0501 04:58:29.689751    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-195400 ).state
	I0501 04:58:31.668828    6036 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:58:31.668913    6036 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:58:31.668913    6036 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-386600 ).networkadapters[0]).ipaddresses[0]
	I0501 04:58:32.072066    6036 main.go:141] libmachine: [stdout =====>] : 172.28.223.149
	
	I0501 04:58:32.072066    6036 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:58:32.072458    6036 sshutil.go:53] new ssh client: &{IP:172.28.223.149 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\cert-expiration-386600\id_rsa Username:docker}
	I0501 04:58:32.245562    6036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 04:58:34.860967    6036 main.go:141] libmachine: [stdout =====>] : 172.28.223.149
	
	I0501 04:58:34.860967    6036 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:58:34.860967    6036 sshutil.go:53] new ssh client: &{IP:172.28.223.149 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\cert-expiration-386600\id_rsa Username:docker}
	I0501 04:58:35.056888    6036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0501 04:58:35.272745    6036 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0501 04:58:35.276635    6036 addons.go:505] duration metric: took 10.9224172s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0501 04:58:35.276635    6036 start.go:245] waiting for cluster config update ...
	I0501 04:58:35.276635    6036 start.go:254] writing updated cluster config ...
	I0501 04:58:35.301934    6036 ssh_runner.go:195] Run: rm -f paused
	I0501 04:58:35.505563    6036 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0501 04:58:35.508041    6036 out.go:177] * Done! kubectl is now configured to use "cert-expiration-386600" cluster and "default" namespace by default
	I0501 04:58:31.914810   14140 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0501 04:58:31.915793   14140 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:58:31.915854   14140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0501 04:58:33.893938   14140 main.go:141] libmachine: [stdout =====>] : False
	
	I0501 04:58:33.894002   14140 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:58:33.894002   14140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0501 04:58:35.619804   14140 main.go:141] libmachine: [stdout =====>] : True
	
	I0501 04:58:35.619892   14140 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:58:35.619988   14140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0501 04:58:32.124928    6468 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:58:32.124928    6468 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:58:32.125096    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-195400 ).networkadapters[0]).ipaddresses[0]
	I0501 04:58:34.860967    6468 main.go:141] libmachine: [stdout =====>] : 172.28.213.192
	
	I0501 04:58:34.860967    6468 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:58:34.867251    6468 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 04:58:34.867398    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-195400 ).state
	I0501 04:58:34.884178    6468 ssh_runner.go:195] Run: cat /version.json
	I0501 04:58:34.884178    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-195400 ).state
	I0501 04:58:39.689415   14140 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0501 04:58:39.689415   14140 main.go:141] libmachine: [stderr =====>] : 
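The hyperv driver discovers switches by shelling out to PowerShell and decoding JSON like the block just printed. A minimal sketch of that decode step follows; the struct is an assumption, and note that in Hyper-V's VMSwitchType enum a value of 1 means Internal, which is why "Default Switch" matches the Where-Object filter above via its well-known Id rather than via SwitchType -eq 'External'.

package main

import (
	"encoding/json"
	"fmt"
)

// vmSwitch mirrors the fields selected by the Get-VMSwitch query above.
type vmSwitch struct {
	Id         string
	Name       string
	SwitchType int // Hyper-V enum: 0=Private, 1=Internal, 2=External
}

func main() {
	// JSON copied from the [stdout =====>] block above.
	raw := `[{"Id":"c08cb7b8-9b3c-408e-8e30-5e16a3aeb444","Name":"Default Switch","SwitchType":1}]`
	var switches []vmSwitch
	if err := json.Unmarshal([]byte(raw), &switches); err != nil {
		panic(err)
	}
	for _, s := range switches {
		fmt.Printf("%s (%s) type=%d\n", s.Name, s.Id, s.SwitchType)
	}
}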
	I0501 04:58:39.692547   14140 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso...
	I0501 04:58:40.173441   14140 main.go:141] libmachine: Creating SSH key...
	I0501 04:58:40.380947   14140 main.go:141] libmachine: Creating VM...
	I0501 04:58:40.380947   14140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0501 04:58:37.278248    6468 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:58:37.279156    6468 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:58:37.278248    6468 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:58:37.279257    6468 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:58:37.279257    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-195400 ).networkadapters[0]).ipaddresses[0]
	I0501 04:58:37.279317    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-195400 ).networkadapters[0]).ipaddresses[0]
	I0501 04:58:40.088047    6468 main.go:141] libmachine: [stdout =====>] : 172.28.213.192
	
	I0501 04:58:40.088047    6468 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:58:40.089044    6468 sshutil.go:53] new ssh client: &{IP:172.28.213.192 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\kubernetes-upgrade-195400\id_rsa Username:docker}
	I0501 04:58:40.126031    6468 main.go:141] libmachine: [stdout =====>] : 172.28.213.192
	
	I0501 04:58:40.126031    6468 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:58:40.127045    6468 sshutil.go:53] new ssh client: &{IP:172.28.213.192 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\kubernetes-upgrade-195400\id_rsa Username:docker}
	I0501 04:58:43.447706   14140 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0501 04:58:43.447957   14140 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:58:43.447957   14140 main.go:141] libmachine: Using switch "Default Switch"
	I0501 04:58:43.447957   14140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0501 04:58:45.324056   14140 main.go:141] libmachine: [stdout =====>] : True
	
	I0501 04:58:45.324056   14140 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:58:45.324619   14140 main.go:141] libmachine: Creating VHD
	I0501 04:58:45.324619   14140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\cert-options-374100\fixed.vhd' -SizeBytes 10MB -Fixed
	I0501 04:58:42.195910    6468 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (7.3286047s)
	I0501 04:58:42.195910    6468 ssh_runner.go:235] Completed: cat /version.json: (7.3116786s)
	W0501 04:58:42.252541    6468 start.go:860] [curl -sS -m 2 https://registry.k8s.io/] failed: curl -sS -m 2 https://registry.k8s.io/: Process exited with status 28
	stdout:
	
	stderr:
	curl: (28) Resolving timed out after 2001 milliseconds
	W0501 04:58:42.252671    6468 out.go:239] ! This VM is having trouble accessing https://registry.k8s.io
	W0501 04:58:42.252671    6468 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
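The warning above comes from a 2-second connectivity probe against https://registry.k8s.io/ whose DNS resolution timed out inside the VM. The equivalent check in Go is sketched below; this is a local illustration of the probe, not minikube's actual code, which runs `curl -sS -m 2` through SSH so the request originates inside the guest.

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Hard 2-second budget, mirroring `curl -sS -m 2`.
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get("https://registry.k8s.io/")
	if err != nil {
		fmt.Println("registry unreachable:", err) // e.g. resolution timed out
		return
	}
	defer resp.Body.Close()
	fmt.Println("registry reachable:", resp.Status)
}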
	I0501 04:58:42.266782    6468 ssh_runner.go:195] Run: systemctl --version
	I0501 04:58:42.299826    6468 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 04:58:42.311066    6468 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 04:58:42.325977    6468 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0501 04:58:42.361977    6468 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0501 04:58:42.393946    6468 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
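
The two find/sed passes above strip IPv6 "dst"/"subnet" entries and pin every bridge/podman CNI config to subnet 10.244.0.0/16 (gateway 10.244.0.1); the single rewritten file is the conflist named on this line. A hypothetical manual check of the result:

# Inspect the rewritten CNI config (file name taken from the line above)
$key = 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\kubernetes-upgrade-195400\id_rsa'
ssh -i $key -o StrictHostKeyChecking=no docker@172.28.213.192 'sudo cat /etc/cni/net.d/87-podman-bridge.conflist'
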
	I0501 04:58:42.393946    6468 start.go:494] detecting cgroup driver to use...
	I0501 04:58:42.393946    6468 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 04:58:42.451421    6468 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0501 04:58:42.491670    6468 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0501 04:58:42.515251    6468 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0501 04:58:42.529805    6468 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0501 04:58:42.575845    6468 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 04:58:42.615289    6468 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0501 04:58:42.666156    6468 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 04:58:42.703178    6468 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 04:58:42.740685    6468 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0501 04:58:42.779036    6468 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0501 04:58:42.816548    6468 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0501 04:58:42.853894    6468 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 04:58:42.888889    6468 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 04:58:42.934137    6468 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 04:58:43.230082    6468 ssh_runner.go:195] Run: sudo systemctl restart containerd
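
The block above points crictl at containerd, rewrites /etc/containerd/config.toml for the cgroupfs driver and the runc.v2 shim, enables IPv4 forwarding, and restarts containerd. A consolidated remote sketch; the ssh/here-string wrapper is an assumption, while the remote commands are a subset of the ones logged:

$key = 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\kubernetes-upgrade-195400\id_rsa'
$cmd = @'
printf 'runtime-endpoint: unix:///run/containerd/containerd.sock\n' | sudo tee /etc/crictl.yaml
sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml
sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml
sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
sudo systemctl daemon-reload && sudo systemctl restart containerd
'@
ssh -i $key -o StrictHostKeyChecking=no docker@172.28.213.192 $cmd
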
	I0501 04:58:43.275396    6468 start.go:494] detecting cgroup driver to use...
	I0501 04:58:43.290065    6468 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0501 04:58:43.333097    6468 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 04:58:43.378598    6468 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0501 04:58:43.444635    6468 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0501 04:58:43.498697    6468 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 04:58:43.529120    6468 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 04:58:43.584597    6468 ssh_runner.go:195] Run: which cri-dockerd
	I0501 04:58:43.607869    6468 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0501 04:58:43.629186    6468 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0501 04:58:43.683118    6468 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0501 04:58:43.989355    6468 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0501 04:58:44.253890    6468 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0501 04:58:44.253890    6468 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0501 04:58:44.306318    6468 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 04:58:44.595273    6468 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0501 04:58:49.023143   14140 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\cert-options-374100\fixed.
	                          vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 7DEFFFAE-0BFF-4B25-9A18-633F51FD3BB1
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0501 04:58:49.024016   14140 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:58:49.024016   14140 main.go:141] libmachine: Writing magic tar header
	I0501 04:58:49.024113   14140 main.go:141] libmachine: Writing SSH key tar header
	I0501 04:58:49.024922   14140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\cert-options-374100\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\cert-options-374100\disk.vhd' -VHDType Dynamic -DeleteSource
	I0501 04:58:52.245970   14140 main.go:141] libmachine: [stdout =====>] : 
	I0501 04:58:52.246087   14140 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:58:52.246192   14140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\cert-options-374100\disk.vhd' -SizeBytes 20000MB
	I0501 04:58:54.806800   14140 main.go:141] libmachine: [stdout =====>] : 
	I0501 04:58:54.807767   14140 main.go:141] libmachine: [stderr =====>] : 
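
The three disk steps for cert-options-374100, consolidated (commands and paths verbatim from this log). Between New-VHD and Convert-VHD the driver writes a "magic" tar header plus the SSH key straight into fixed.vhd, which appears to be the classic docker-machine trick for seeding the guest's boot disk:

$dir = 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\cert-options-374100'
Hyper-V\New-VHD -Path "$dir\fixed.vhd" -SizeBytes 10MB -Fixed
# (the driver now streams the tar header and SSH key into fixed.vhd)
Hyper-V\Convert-VHD -Path "$dir\fixed.vhd" -DestinationPath "$dir\disk.vhd" -VHDType Dynamic -DeleteSource
Hyper-V\Resize-VHD -Path "$dir\disk.vhd" -SizeBytes 20000MB
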
	I0501 04:58:54.807767   14140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM cert-options-374100 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\cert-options-374100' -SwitchName 'Default Switch' -MemoryStartupBytes 2048MB
	I0501 04:58:58.527710   14140 main.go:141] libmachine: [stdout =====>] : 
	Name                State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                ----- ----------- ----------------- ------   ------             -------
	cert-options-374100 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0501 04:58:58.527710   14140 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:58:58.527710   14140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName cert-options-374100 -DynamicMemoryEnabled $false
	I0501 04:59:00.801459   14140 main.go:141] libmachine: [stdout =====>] : 
	I0501 04:59:00.801459   14140 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:59:00.802186   14140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor cert-options-374100 -Count 2
	I0501 04:59:02.945308   14140 main.go:141] libmachine: [stdout =====>] : 
	I0501 04:59:02.945308   14140 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:59:02.945308   14140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName cert-options-374100 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\cert-options-374100\boot2docker.iso'
	I0501 04:59:05.524789   14140 main.go:141] libmachine: [stdout =====>] : 
	I0501 04:59:05.524789   14140 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:59:05.525088   14140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName cert-options-374100 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\cert-options-374100\disk.vhd'
	I0501 04:59:08.208963   14140 main.go:141] libmachine: [stdout =====>] : 
	I0501 04:59:08.208963   14140 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:59:08.208963   14140 main.go:141] libmachine: Starting VM...
	I0501 04:59:08.209762   14140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM cert-options-374100
	I0501 04:59:11.248513   14140 main.go:141] libmachine: [stdout =====>] : 
	I0501 04:59:11.248621   14140 main.go:141] libmachine: [stderr =====>] : 
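
VM assembly as logged, gathered into one script (names, paths, and parameters verbatim from the commands above): fixed 2 GB of RAM, 2 vCPUs, boot2docker ISO in the DVD drive, and the seeded disk attached before start:

$name = 'cert-options-374100'
$dir  = 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\cert-options-374100'
Hyper-V\New-VM $name -Path $dir -SwitchName 'Default Switch' -MemoryStartupBytes 2048MB
Hyper-V\Set-VMMemory -VMName $name -DynamicMemoryEnabled $false
Hyper-V\Set-VMProcessor $name -Count 2
Hyper-V\Set-VMDvdDrive -VMName $name -Path "$dir\boot2docker.iso"
Hyper-V\Add-VMHardDiskDrive -VMName $name -Path "$dir\disk.vhd"
Hyper-V\Start-VM $name
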
	I0501 04:59:11.248621   14140 main.go:141] libmachine: Waiting for host to start...
	I0501 04:59:11.248780   14140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-options-374100 ).state
	I0501 04:59:13.470066   14140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:59:13.470066   14140 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:59:13.470066   14140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-options-374100 ).networkadapters[0]).ipaddresses[0]
	I0501 04:59:16.002348   14140 main.go:141] libmachine: [stdout =====>] : 
	I0501 04:59:16.002348   14140 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:59:17.017064   14140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-options-374100 ).state
	I0501 04:59:19.201882   14140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:59:19.201882   14140 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:59:19.201882   14140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-options-374100 ).networkadapters[0]).ipaddresses[0]
	I0501 04:59:21.761679   14140 main.go:141] libmachine: [stdout =====>] : 
	I0501 04:59:21.761860   14140 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:59:22.770828   14140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-options-374100 ).state
	I0501 04:59:24.995088   14140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:59:24.995088   14140 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:59:24.996100   14140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-options-374100 ).networkadapters[0]).ipaddresses[0]
	I0501 04:59:27.585395   14140 main.go:141] libmachine: [stdout =====>] : 
	I0501 04:59:27.585395   14140 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:59:28.593937   14140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-options-374100 ).state
	I0501 04:59:30.818219   14140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:59:30.818219   14140 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:59:30.818776   14140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-options-374100 ).networkadapters[0]).ipaddresses[0]
	I0501 04:59:33.365395   14140 main.go:141] libmachine: [stdout =====>] : 
	I0501 04:59:33.365395   14140 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:59:34.379973   14140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-options-374100 ).state
	I0501 04:59:36.544520   14140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:59:36.544520   14140 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:59:36.545204   14140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-options-374100 ).networkadapters[0]).ipaddresses[0]
	I0501 04:59:39.165327   14140 main.go:141] libmachine: [stdout =====>] : 172.28.222.197
	
	I0501 04:59:39.165327   14140 main.go:141] libmachine: [stderr =====>] : 
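
The repeated state/ipaddresses pairs above are a poll loop: IPAddresses[0] stays empty until the guest's DHCP lease lands (about 28 s here, at 04:59:39). An equivalent wait loop, using the same expressions the driver executes:

# Wait for the first NIC to report an address, as the driver's retry loop does
do {
    Start-Sleep -Seconds 1
    $ip = (( Hyper-V\Get-VM cert-options-374100 ).NetworkAdapters[0]).IPAddresses[0]
} until ($ip)
$ip
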
	I0501 04:59:39.166110   14140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-options-374100 ).state
	I0501 04:59:41.338206   14140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:59:41.338206   14140 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:59:41.338568   14140 machine.go:94] provisionDockerMachine start ...
	I0501 04:59:41.338747   14140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-options-374100 ).state
	I0501 04:59:43.516737   14140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:59:43.516737   14140 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:59:43.516737   14140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-options-374100 ).networkadapters[0]).ipaddresses[0]
	I0501 04:59:46.129969   14140 main.go:141] libmachine: [stdout =====>] : 172.28.222.197
	
	I0501 04:59:46.129969   14140 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:59:46.136406   14140 main.go:141] libmachine: Using SSH client type: native
	I0501 04:59:46.137150   14140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.222.197 22 <nil> <nil>}
	I0501 04:59:46.137150   14140 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 04:59:46.281896   14140 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0501 04:59:46.281896   14140 buildroot.go:166] provisioning hostname "cert-options-374100"
	I0501 04:59:46.281969   14140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-options-374100 ).state
	I0501 04:59:48.389874   14140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:59:48.389874   14140 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:59:48.389977   14140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-options-374100 ).networkadapters[0]).ipaddresses[0]
	I0501 04:59:50.972480   14140 main.go:141] libmachine: [stdout =====>] : 172.28.222.197
	
	I0501 04:59:50.972480   14140 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:59:50.979189   14140 main.go:141] libmachine: Using SSH client type: native
	I0501 04:59:50.979189   14140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.222.197 22 <nil> <nil>}
	I0501 04:59:50.979189   14140 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-options-374100 && echo "cert-options-374100" | sudo tee /etc/hostname
	I0501 04:59:51.137209   14140 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-options-374100
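
provisionDockerMachine's first SSH steps: `hostname` still answers "minikube" (the image default), so the driver sets it. The remote command is verbatim from the log; the ssh wrapper is a sketch for running it by hand:

$key = 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\cert-options-374100\id_rsa'
ssh -i $key -o StrictHostKeyChecking=no docker@172.28.222.197 'sudo hostname cert-options-374100 && echo "cert-options-374100" | sudo tee /etc/hostname'
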
	
	I0501 04:59:51.137266   14140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-options-374100 ).state
	I0501 04:59:56.007475    6468 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.4116662s)
	I0501 04:59:56.022139    6468 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0501 04:59:56.094982    6468 out.go:177] 
	W0501 04:59:56.098571    6468 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	May 01 04:50:37 kubernetes-upgrade-195400 systemd[1]: Starting Docker Application Container Engine...
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[660]: time="2024-05-01T04:50:37.737751575Z" level=info msg="Starting up"
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[660]: time="2024-05-01T04:50:37.738767487Z" level=info msg="containerd not running, starting managed containerd"
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[660]: time="2024-05-01T04:50:37.746848778Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=666
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.781270866Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.814639343Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.814782345Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.814868146Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.814902646Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.815922757Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.816066359Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.816719366Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.816948569Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.816976269Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.816989970Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.817593876Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.818498287Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.822173428Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.822643133Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.822983337Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.823067538Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.824101850Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.824224951Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.824397953Z" level=info msg="metadata content store policy set" policy=shared
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.827996294Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.828138495Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.828183996Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.828202296Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.828218696Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.828312797Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.828748402Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.828862604Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.828984105Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829023605Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829038306Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829052406Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829065506Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829080006Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829095006Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829108906Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829127407Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829139607Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829160107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829178007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829192407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829207007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829220408Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829234008Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829245808Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829258508Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829272108Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829286808Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829301909Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829392210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829428510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829446310Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829473210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829488211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829503311Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829772114Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829817214Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829833715Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829846415Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829922316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.829964316Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.830001116Z" level=info msg="NRI interface is disabled by configuration."
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.830468122Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.830677224Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.830824826Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	May 01 04:50:37 kubernetes-upgrade-195400 dockerd[666]: time="2024-05-01T04:50:37.830857126Z" level=info msg="containerd successfully booted in 0.052527s"
	May 01 04:50:38 kubernetes-upgrade-195400 dockerd[660]: time="2024-05-01T04:50:38.809503508Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	May 01 04:50:38 kubernetes-upgrade-195400 dockerd[660]: time="2024-05-01T04:50:38.949752892Z" level=info msg="Loading containers: start."
	May 01 04:50:39 kubernetes-upgrade-195400 dockerd[660]: time="2024-05-01T04:50:39.376127664Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	May 01 04:50:39 kubernetes-upgrade-195400 dockerd[660]: time="2024-05-01T04:50:39.469733044Z" level=info msg="Loading containers: done."
	May 01 04:50:39 kubernetes-upgrade-195400 dockerd[660]: time="2024-05-01T04:50:39.498790517Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	May 01 04:50:39 kubernetes-upgrade-195400 dockerd[660]: time="2024-05-01T04:50:39.499762926Z" level=info msg="Daemon has completed initialization"
	May 01 04:50:39 kubernetes-upgrade-195400 dockerd[660]: time="2024-05-01T04:50:39.560623799Z" level=info msg="API listen on /var/run/docker.sock"
	May 01 04:50:39 kubernetes-upgrade-195400 dockerd[660]: time="2024-05-01T04:50:39.560755300Z" level=info msg="API listen on [::]:2376"
	May 01 04:50:39 kubernetes-upgrade-195400 systemd[1]: Started Docker Application Container Engine.
	May 01 04:51:07 kubernetes-upgrade-195400 dockerd[660]: time="2024-05-01T04:51:07.350114100Z" level=info msg="Processing signal 'terminated'"
	May 01 04:51:07 kubernetes-upgrade-195400 systemd[1]: Stopping Docker Application Container Engine...
	May 01 04:51:07 kubernetes-upgrade-195400 dockerd[660]: time="2024-05-01T04:51:07.352882798Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	May 01 04:51:07 kubernetes-upgrade-195400 dockerd[660]: time="2024-05-01T04:51:07.353987198Z" level=info msg="Daemon shutdown complete"
	May 01 04:51:07 kubernetes-upgrade-195400 dockerd[660]: time="2024-05-01T04:51:07.354309097Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	May 01 04:51:07 kubernetes-upgrade-195400 dockerd[660]: time="2024-05-01T04:51:07.354366297Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	May 01 04:51:08 kubernetes-upgrade-195400 systemd[1]: docker.service: Deactivated successfully.
	May 01 04:51:08 kubernetes-upgrade-195400 systemd[1]: Stopped Docker Application Container Engine.
	May 01 04:51:08 kubernetes-upgrade-195400 systemd[1]: Starting Docker Application Container Engine...
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1128]: time="2024-05-01T04:51:08.435419937Z" level=info msg="Starting up"
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1128]: time="2024-05-01T04:51:08.437918636Z" level=info msg="containerd not running, starting managed containerd"
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1128]: time="2024-05-01T04:51:08.442687333Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1134
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.479597914Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.512343397Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.512395897Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.512441897Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.512460097Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.512493297Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.512507597Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.512750197Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.512793797Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.512812597Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.512834197Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.512861797Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.513053497Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.516269495Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.516423695Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.516614395Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.516771295Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.516821095Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.516843395Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.516856095Z" level=info msg="metadata content store policy set" policy=shared
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.517079095Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.517318495Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.517346095Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.517364195Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.517398295Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.517451995Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.517882494Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518072194Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518176094Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518199094Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518215394Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518231394Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518302194Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518326294Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518345894Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518378894Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518394194Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518408194Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518431094Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518472194Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518488394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518503394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518517394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518532494Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518547494Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518573194Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518591894Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518612294Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518676394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518697594Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518712994Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518741494Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518784094Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518883794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518908794Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.518961794Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.519059994Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.519079794Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.519093694Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.519185394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.519281894Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.519300194Z" level=info msg="NRI interface is disabled by configuration."
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.519840093Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.520002693Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.520098793Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	May 01 04:51:08 kubernetes-upgrade-195400 dockerd[1134]: time="2024-05-01T04:51:08.520259193Z" level=info msg="containerd successfully booted in 0.041516s"
	May 01 04:51:09 kubernetes-upgrade-195400 dockerd[1128]: time="2024-05-01T04:51:09.679093592Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	May 01 04:51:09 kubernetes-upgrade-195400 dockerd[1128]: time="2024-05-01T04:51:09.726281468Z" level=info msg="Loading containers: start."
	May 01 04:51:10 kubernetes-upgrade-195400 dockerd[1128]: time="2024-05-01T04:51:10.252231895Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	May 01 04:51:10 kubernetes-upgrade-195400 dockerd[1128]: time="2024-05-01T04:51:10.339345050Z" level=info msg="Loading containers: done."
	May 01 04:51:10 kubernetes-upgrade-195400 dockerd[1128]: time="2024-05-01T04:51:10.362354538Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	May 01 04:51:10 kubernetes-upgrade-195400 dockerd[1128]: time="2024-05-01T04:51:10.362550238Z" level=info msg="Daemon has completed initialization"
	May 01 04:51:10 kubernetes-upgrade-195400 dockerd[1128]: time="2024-05-01T04:51:10.413767912Z" level=info msg="API listen on /var/run/docker.sock"
	May 01 04:51:10 kubernetes-upgrade-195400 dockerd[1128]: time="2024-05-01T04:51:10.414020712Z" level=info msg="API listen on [::]:2376"
	May 01 04:51:10 kubernetes-upgrade-195400 systemd[1]: Started Docker Application Container Engine.
	May 01 04:51:23 kubernetes-upgrade-195400 systemd[1]: Stopping Docker Application Container Engine...
	May 01 04:51:23 kubernetes-upgrade-195400 dockerd[1128]: time="2024-05-01T04:51:23.320729422Z" level=info msg="Processing signal 'terminated'"
	May 01 04:51:23 kubernetes-upgrade-195400 dockerd[1128]: time="2024-05-01T04:51:23.322318821Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	May 01 04:51:23 kubernetes-upgrade-195400 dockerd[1128]: time="2024-05-01T04:51:23.322584021Z" level=info msg="Daemon shutdown complete"
	May 01 04:51:23 kubernetes-upgrade-195400 dockerd[1128]: time="2024-05-01T04:51:23.322679921Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	May 01 04:51:23 kubernetes-upgrade-195400 dockerd[1128]: time="2024-05-01T04:51:23.322714120Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	May 01 04:51:24 kubernetes-upgrade-195400 systemd[1]: docker.service: Deactivated successfully.
	May 01 04:51:24 kubernetes-upgrade-195400 systemd[1]: Stopped Docker Application Container Engine.
	May 01 04:51:24 kubernetes-upgrade-195400 systemd[1]: Starting Docker Application Container Engine...
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:51:24.407789758Z" level=info msg="Starting up"
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:51:24.409499257Z" level=info msg="containerd not running, starting managed containerd"
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:51:24.413531855Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1548
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.447762137Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.480301720Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.480447820Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.480505020Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.480523520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.480556820Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.480571120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.480875520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.480977020Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.481000920Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.481014520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.481044220Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.481264720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.484750318Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.484899518Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.485276818Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.485404218Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.485456518Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.485494918Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.485515618Z" level=info msg="metadata content store policy set" policy=shared
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.485974318Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.486037317Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.486059217Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.486077817Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.486095617Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.486157717Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.486405817Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.486558717Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.486779117Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.486909017Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.486956417Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487081817Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487103517Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487123117Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487140217Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487155517Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487170317Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487183417Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487206017Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487221817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487239617Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487256817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487271917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487287517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487301117Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487315617Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487330317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487347717Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487362417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487375817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487390117Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487407717Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487431317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487447017Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487464617Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487609717Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487847517Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487866817Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487895317Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.487965416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.488005916Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.488020216Z" level=info msg="NRI interface is disabled by configuration."
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.488290216Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.488786116Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.488901316Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	May 01 04:51:24 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:24.488947916Z" level=info msg="containerd successfully booted in 0.043088s"
	May 01 04:51:25 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:51:25.460814712Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	May 01 04:51:26 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:51:26.441518504Z" level=info msg="Loading containers: start."
	May 01 04:51:26 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:51:26.746797846Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	May 01 04:51:26 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:51:26.843216196Z" level=info msg="Loading containers: done."
	May 01 04:51:26 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:51:26.871866881Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	May 01 04:51:26 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:51:26.872160181Z" level=info msg="Daemon has completed initialization"
	May 01 04:51:26 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:51:26.932792949Z" level=info msg="API listen on /var/run/docker.sock"
	May 01 04:51:26 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:51:26.932894949Z" level=info msg="API listen on [::]:2376"
	May 01 04:51:26 kubernetes-upgrade-195400 systemd[1]: Started Docker Application Container Engine.
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.413186729Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.414235684Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.414546271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.415078249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.424127568Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.424610747Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.424794440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.425193723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.542976762Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.543400544Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.545839841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.546397518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.577947289Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.578034185Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.578048784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.578159080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.894754445Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.894994034Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.895186926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:33 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:33.895387118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:34 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:34.087263859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:51:34 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:34.087360555Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:51:34 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:34.087381154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:34 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:34.087506249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:34 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:34.176019851Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:51:34 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:34.176225443Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:51:34 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:34.176496032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:34 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:34.176787121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:34 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:34.187390202Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:51:34 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:34.188528257Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:51:34 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:34.188594754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:34 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:34.188768047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:39 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:39.128344007Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:51:39 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:39.129249380Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:51:39 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:39.129484474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:39 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:39.129738966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:39 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:39.197014331Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:51:39 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:39.197313923Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:51:39 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:39.197801709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:39 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:39.198518588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:39 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:39.240403683Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:51:39 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:39.240610677Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:51:39 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:39.240816371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:39 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:39.241779844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:39 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:39.879119437Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:51:39 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:39.879206734Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:51:39 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:39.879239133Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:39 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:39.879390529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:40 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:40.037846206Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:51:40 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:40.038017206Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:51:40 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:40.038129306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:40 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:40.038431606Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:40 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:40.177136366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:51:40 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:40.178119766Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:51:40 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:40.178320766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:40 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:40.179146567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:52 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:51:52.319143874Z" level=info msg="ignoring event" container=a702668cb1d99edf14c8b41226934cd835dc40912e2587fb90bd74fd6bc1a56a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:51:52 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:52.321379575Z" level=info msg="shim disconnected" id=a702668cb1d99edf14c8b41226934cd835dc40912e2587fb90bd74fd6bc1a56a namespace=moby
	May 01 04:51:52 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:52.321488875Z" level=warning msg="cleaning up after shim disconnected" id=a702668cb1d99edf14c8b41226934cd835dc40912e2587fb90bd74fd6bc1a56a namespace=moby
	May 01 04:51:52 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:52.321504075Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:51:52 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:51:52.495136752Z" level=info msg="ignoring event" container=76f131fe4f91537b2024fd5ba4f9289632c35b24212599f4a4a2668b2d3a3396 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:51:52 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:52.495289052Z" level=info msg="shim disconnected" id=76f131fe4f91537b2024fd5ba4f9289632c35b24212599f4a4a2668b2d3a3396 namespace=moby
	May 01 04:51:52 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:52.495438452Z" level=warning msg="cleaning up after shim disconnected" id=76f131fe4f91537b2024fd5ba4f9289632c35b24212599f4a4a2668b2d3a3396 namespace=moby
	May 01 04:51:52 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:52.495517952Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:51:53 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:53.013242982Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:51:53 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:53.013356882Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:51:53 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:53.013378582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:53 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:53.013491682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:53 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:53.448309876Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:51:53 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:53.448587876Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:51:53 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:53.448614576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:53 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:53.449788876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:53 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:53.706557891Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:51:53 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:53.706871191Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:51:53 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:53.707251391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:53 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:53.707594791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:53 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:53.847005754Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:51:53 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:53.847714754Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:51:53 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:53.847764954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:53 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:53.847899454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:51:57 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:51:57.316030045Z" level=info msg="ignoring event" container=ba52d8cc065e5df5505bf819ff9b9e1d4f0479d0c5cbd06a90067baf3f4f792e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:51:57 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:57.318951753Z" level=info msg="shim disconnected" id=ba52d8cc065e5df5505bf819ff9b9e1d4f0479d0c5cbd06a90067baf3f4f792e namespace=moby
	May 01 04:51:57 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:57.319103754Z" level=warning msg="cleaning up after shim disconnected" id=ba52d8cc065e5df5505bf819ff9b9e1d4f0479d0c5cbd06a90067baf3f4f792e namespace=moby
	May 01 04:51:57 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:57.319127354Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:51:57 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:57.501270776Z" level=info msg="shim disconnected" id=f8510ef59edf8760427113183218416e1f1af14e46d6086123e80bcc0f19a16b namespace=moby
	May 01 04:51:57 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:57.501439176Z" level=warning msg="cleaning up after shim disconnected" id=f8510ef59edf8760427113183218416e1f1af14e46d6086123e80bcc0f19a16b namespace=moby
	May 01 04:51:57 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:51:57.501465976Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:51:57 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:51:57.503202381Z" level=info msg="ignoring event" container=f8510ef59edf8760427113183218416e1f1af14e46d6086123e80bcc0f19a16b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:52:10 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:52:10.217053199Z" level=info msg="ignoring event" container=002af6c61dad38fdf11efa2b94434473c56d8c09754dde182d8b66817f424c45 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:52:10 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:52:10.217810700Z" level=info msg="shim disconnected" id=002af6c61dad38fdf11efa2b94434473c56d8c09754dde182d8b66817f424c45 namespace=moby
	May 01 04:52:10 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:52:10.218203700Z" level=warning msg="cleaning up after shim disconnected" id=002af6c61dad38fdf11efa2b94434473c56d8c09754dde182d8b66817f424c45 namespace=moby
	May 01 04:52:10 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:52:10.218436801Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:52:23 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:52:23.725718036Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:52:23 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:52:23.727251739Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:52:23 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:52:23.727496339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:52:23 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:52:23.727823439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:53:50 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:53:50.826713206Z" level=info msg="ignoring event" container=8b879eb35076792cbf7068b9185b9095b5872d9df0822d9163c94abc91c282c6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:53:50 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:53:50.829354210Z" level=info msg="shim disconnected" id=8b879eb35076792cbf7068b9185b9095b5872d9df0822d9163c94abc91c282c6 namespace=moby
	May 01 04:53:50 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:53:50.829586110Z" level=warning msg="cleaning up after shim disconnected" id=8b879eb35076792cbf7068b9185b9095b5872d9df0822d9163c94abc91c282c6 namespace=moby
	May 01 04:53:50 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:53:50.829618410Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:53:51 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:53:51.140540023Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:53:51 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:53:51.144227528Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:53:51 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:53:51.144466028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:53:51 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:53:51.145094629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:55:40 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:55:40.302862066Z" level=info msg="ignoring event" container=ad847c495985ffc62c24a8a880e840cab0a500eb63d342a76d14baae863a2082 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:55:40 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:55:40.303441167Z" level=info msg="shim disconnected" id=ad847c495985ffc62c24a8a880e840cab0a500eb63d342a76d14baae863a2082 namespace=moby
	May 01 04:55:40 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:55:40.303934667Z" level=warning msg="cleaning up after shim disconnected" id=ad847c495985ffc62c24a8a880e840cab0a500eb63d342a76d14baae863a2082 namespace=moby
	May 01 04:55:40 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:55:40.304005467Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:55:40 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:55:40.539544252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:55:40 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:55:40.539685752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:55:40 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:55:40.539908752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:55:40 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:55:40.540327953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:57:30 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:57:30.773728874Z" level=info msg="shim disconnected" id=5078cbc153bd038685f6e4a7b53c9f40ad1defbcabe87cd81c12a214a66d8e1a namespace=moby
	May 01 04:57:30 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:57:30.773845074Z" level=warning msg="cleaning up after shim disconnected" id=5078cbc153bd038685f6e4a7b53c9f40ad1defbcabe87cd81c12a214a66d8e1a namespace=moby
	May 01 04:57:30 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:57:30.773869274Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:57:30 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:57:30.774610675Z" level=info msg="ignoring event" container=5078cbc153bd038685f6e4a7b53c9f40ad1defbcabe87cd81c12a214a66d8e1a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:57:31 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:57:31.021150449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 01 04:57:31 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:57:31.021262949Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 01 04:57:31 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:57:31.021284349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:57:31 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:57:31.021391949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 01 04:58:44 kubernetes-upgrade-195400 systemd[1]: Stopping Docker Application Container Engine...
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:44.627843811Z" level=info msg="Processing signal 'terminated'"
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:44.897162901Z" level=info msg="ignoring event" container=b5117a7b7f02db6847aa9ccd848b816ab792209e8a9ce11cc9ad89c01f863aba module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.897513901Z" level=info msg="shim disconnected" id=b5117a7b7f02db6847aa9ccd848b816ab792209e8a9ce11cc9ad89c01f863aba namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.898294202Z" level=warning msg="cleaning up after shim disconnected" id=b5117a7b7f02db6847aa9ccd848b816ab792209e8a9ce11cc9ad89c01f863aba namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:44.899367102Z" level=info msg="ignoring event" container=44373db87f42bafe57d51ebf6f495bae909356395959fdad2cd9d92e6aa022ed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.900821204Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.899870203Z" level=info msg="shim disconnected" id=44373db87f42bafe57d51ebf6f495bae909356395959fdad2cd9d92e6aa022ed namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.902192404Z" level=warning msg="cleaning up after shim disconnected" id=44373db87f42bafe57d51ebf6f495bae909356395959fdad2cd9d92e6aa022ed namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.902343805Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:44.908141309Z" level=info msg="ignoring event" container=099a9cab2b43676db5a6c3a7547a0a37cb91aaa0d8c7d9493cc49485e74cf4f3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.908581809Z" level=info msg="shim disconnected" id=099a9cab2b43676db5a6c3a7547a0a37cb91aaa0d8c7d9493cc49485e74cf4f3 namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.909757610Z" level=warning msg="cleaning up after shim disconnected" id=099a9cab2b43676db5a6c3a7547a0a37cb91aaa0d8c7d9493cc49485e74cf4f3 namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.909868210Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.912899312Z" level=info msg="shim disconnected" id=3e46b7508aa0fe3f5b71848d9d3af88c939caa79a6bddf468ebfd87c3bf42031 namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.913016012Z" level=warning msg="cleaning up after shim disconnected" id=3e46b7508aa0fe3f5b71848d9d3af88c939caa79a6bddf468ebfd87c3bf42031 namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.913071912Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:44.913308012Z" level=info msg="ignoring event" container=3e46b7508aa0fe3f5b71848d9d3af88c939caa79a6bddf468ebfd87c3bf42031 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:44.927245222Z" level=info msg="ignoring event" container=2a6f3d078ffd0c400477de8f151e16f5998a0af3b07ff8d28d625e3be1812012 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.932102326Z" level=info msg="shim disconnected" id=2a6f3d078ffd0c400477de8f151e16f5998a0af3b07ff8d28d625e3be1812012 namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.932166426Z" level=warning msg="cleaning up after shim disconnected" id=2a6f3d078ffd0c400477de8f151e16f5998a0af3b07ff8d28d625e3be1812012 namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.932177826Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.947662136Z" level=info msg="shim disconnected" id=582ab6a8f5d222a955e55ae3bc812564c286e9c73381b0352c0187792261ea13 namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.947851637Z" level=warning msg="cleaning up after shim disconnected" id=582ab6a8f5d222a955e55ae3bc812564c286e9c73381b0352c0187792261ea13 namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.947991237Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.959224745Z" level=info msg="shim disconnected" id=3d06df12a9bcd17ff10b5b61d78aba629d1433d3d24f0cf8615f5162fbd31247 namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.959700845Z" level=warning msg="cleaning up after shim disconnected" id=3d06df12a9bcd17ff10b5b61d78aba629d1433d3d24f0cf8615f5162fbd31247 namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.959887445Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.980052959Z" level=info msg="shim disconnected" id=f3f3f3452164964cd1db3e00ff78d6a5ca0ce6593a41e92b5f993fe749aade1a namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.980291559Z" level=warning msg="cleaning up after shim disconnected" id=f3f3f3452164964cd1db3e00ff78d6a5ca0ce6593a41e92b5f993fe749aade1a namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.980364159Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:44.989875866Z" level=info msg="ignoring event" container=582ab6a8f5d222a955e55ae3bc812564c286e9c73381b0352c0187792261ea13 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:44.989960566Z" level=info msg="ignoring event" container=c2357be2231361364fce76ff51b4ae9d1131f6fe78b72703b147c20b015a06de module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:44.990003666Z" level=info msg="ignoring event" container=f3f3f3452164964cd1db3e00ff78d6a5ca0ce6593a41e92b5f993fe749aade1a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:44.990020766Z" level=info msg="ignoring event" container=3d06df12a9bcd17ff10b5b61d78aba629d1433d3d24f0cf8615f5162fbd31247 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.994298969Z" level=info msg="shim disconnected" id=c2357be2231361364fce76ff51b4ae9d1131f6fe78b72703b147c20b015a06de namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.995178470Z" level=warning msg="cleaning up after shim disconnected" id=c2357be2231361364fce76ff51b4ae9d1131f6fe78b72703b147c20b015a06de namespace=moby
	May 01 04:58:44 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:44.995325270Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:58:45 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:45.016571985Z" level=info msg="ignoring event" container=b0968731e4aaed49195fa0c394c187045854e9195a1762c7914db8b8f1fd69db module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:58:45 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:45.017610086Z" level=info msg="ignoring event" container=4b2878dcc077d7cf0f29f72f1e01c9da4c4494d30c0effb89b34424561595156 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:58:45 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:45.019266687Z" level=info msg="shim disconnected" id=4b2878dcc077d7cf0f29f72f1e01c9da4c4494d30c0effb89b34424561595156 namespace=moby
	May 01 04:58:45 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:45.024832391Z" level=warning msg="cleaning up after shim disconnected" id=4b2878dcc077d7cf0f29f72f1e01c9da4c4494d30c0effb89b34424561595156 namespace=moby
	May 01 04:58:45 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:45.035972699Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:58:45 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:45.022186289Z" level=info msg="shim disconnected" id=b0968731e4aaed49195fa0c394c187045854e9195a1762c7914db8b8f1fd69db namespace=moby
	May 01 04:58:45 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:45.040723802Z" level=warning msg="cleaning up after shim disconnected" id=b0968731e4aaed49195fa0c394c187045854e9195a1762c7914db8b8f1fd69db namespace=moby
	May 01 04:58:45 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:45.040854102Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:58:45 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:45.172240194Z" level=info msg="ignoring event" container=59112f4b5921294cf7202582c474f5460c7942d87268034743b6069daa7b9c51 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:58:45 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:45.174360396Z" level=info msg="shim disconnected" id=59112f4b5921294cf7202582c474f5460c7942d87268034743b6069daa7b9c51 namespace=moby
	May 01 04:58:45 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:45.174623696Z" level=warning msg="cleaning up after shim disconnected" id=59112f4b5921294cf7202582c474f5460c7942d87268034743b6069daa7b9c51 namespace=moby
	May 01 04:58:45 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:45.174703096Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:58:45 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:45.206821619Z" level=warning msg="cleanup warnings time=\"2024-05-01T04:58:45Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	May 01 04:58:49 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:49.794608347Z" level=info msg="ignoring event" container=c749b700214b51577cd07fc80e2b035918fd9fc4db94292bcdda73988f7b3145 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:58:49 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:49.794869547Z" level=info msg="shim disconnected" id=c749b700214b51577cd07fc80e2b035918fd9fc4db94292bcdda73988f7b3145 namespace=moby
	May 01 04:58:49 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:49.794927047Z" level=warning msg="cleaning up after shim disconnected" id=c749b700214b51577cd07fc80e2b035918fd9fc4db94292bcdda73988f7b3145 namespace=moby
	May 01 04:58:49 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:49.794937947Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:58:54 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:54.741102948Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=21b5ea540078fc55d925b1f77d5e5bf9d9cf8a14877bd60d798c61ff4ebaa3e6
	May 01 04:58:54 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:54.791622809Z" level=info msg="ignoring event" container=21b5ea540078fc55d925b1f77d5e5bf9d9cf8a14877bd60d798c61ff4ebaa3e6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 01 04:58:54 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:54.792569753Z" level=info msg="shim disconnected" id=21b5ea540078fc55d925b1f77d5e5bf9d9cf8a14877bd60d798c61ff4ebaa3e6 namespace=moby
	May 01 04:58:54 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:54.792713060Z" level=warning msg="cleaning up after shim disconnected" id=21b5ea540078fc55d925b1f77d5e5bf9d9cf8a14877bd60d798c61ff4ebaa3e6 namespace=moby
	May 01 04:58:54 kubernetes-upgrade-195400 dockerd[1548]: time="2024-05-01T04:58:54.792893468Z" level=info msg="cleaning up dead shim" namespace=moby
	May 01 04:58:54 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:54.879540419Z" level=info msg="Daemon shutdown complete"
	May 01 04:58:54 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:54.879629923Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	May 01 04:58:54 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:54.879801131Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	May 01 04:58:54 kubernetes-upgrade-195400 dockerd[1542]: time="2024-05-01T04:58:54.879848133Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	May 01 04:58:55 kubernetes-upgrade-195400 systemd[1]: docker.service: Deactivated successfully.
	May 01 04:58:55 kubernetes-upgrade-195400 systemd[1]: Stopped Docker Application Container Engine.
	May 01 04:58:55 kubernetes-upgrade-195400 systemd[1]: docker.service: Consumed 13.841s CPU time.
	May 01 04:58:55 kubernetes-upgrade-195400 systemd[1]: Starting Docker Application Container Engine...
	May 01 04:58:55 kubernetes-upgrade-195400 dockerd[5745]: time="2024-05-01T04:58:55.976161779Z" level=info msg="Starting up"
	May 01 04:59:56 kubernetes-upgrade-195400 dockerd[5745]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	May 01 04:59:56 kubernetes-upgrade-195400 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	May 01 04:59:56 kubernetes-upgrade-195400 systemd[1]: docker.service: Failed with result 'exit-code'.
	May 01 04:59:56 kubernetes-upgrade-195400 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0501 04:59:56.099571    6468 out.go:239] * 
	W0501 04:59:56.101942    6468 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0501 04:59:56.106522    6468 out.go:177] 
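
The restart above fails after exactly one minute: dockerd logs "Starting up" at 04:58:55 and gives up dialing /run/containerd/containerd.sock with "context deadline exceeded" at 04:59:56, meaning its managed containerd never came back within the dial deadline. A minimal Go sketch of that bounded retry-dial pattern (illustrative only; the socket path and the 60s window are taken from the log above, not from minikube or dockerd source):

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Retry the unix-socket dial until it succeeds or the context
		// expires; an absent listener ends in "context deadline exceeded".
		ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
		defer cancel()

		const sock = "/run/containerd/containerd.sock"
		var d net.Dialer
		for {
			conn, err := d.DialContext(ctx, "unix", sock)
			if err == nil {
				conn.Close()
				fmt.Println("containerd socket is up")
				return
			}
			select {
			case <-ctx.Done():
				fmt.Printf("failed to dial %q: %v\n", sock, ctx.Err())
				return
			case <-time.After(time.Second):
			}
		}
	}
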
	I0501 04:59:53.302014   14140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:59:53.302014   14140 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:59:53.302014   14140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-options-374100 ).networkadapters[0]).ipaddresses[0]
	I0501 04:59:55.930583   14140 main.go:141] libmachine: [stdout =====>] : 172.28.222.197
	
	I0501 04:59:55.930583   14140 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:59:55.937971   14140 main.go:141] libmachine: Using SSH client type: native
	I0501 04:59:55.937971   14140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x6ea1c0] 0x6ecda0 <nil>  [] 0s} 172.28.222.197 22 <nil> <nil>}
	I0501 04:59:55.937971   14140 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-options-374100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-options-374100/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-options-374100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 04:59:56.098970   14140 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 04:59:56.098970   14140 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0501 04:59:56.098970   14140 buildroot.go:174] setting up certificates
	I0501 04:59:56.098970   14140 provision.go:84] configureAuth start
	I0501 04:59:56.098970   14140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-options-374100 ).state
	I0501 04:59:58.410680   14140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:59:58.410680   14140 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:59:58.410680   14140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-options-374100 ).networkadapters[0]).ipaddresses[0]
	I0501 05:00:01.082915   14140 main.go:141] libmachine: [stdout =====>] : 172.28.222.197
	
	I0501 05:00:01.082915   14140 main.go:141] libmachine: [stderr =====>] : 
	I0501 05:00:01.082915   14140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-options-374100 ).state
	I0501 05:00:03.331032   14140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 05:00:03.331032   14140 main.go:141] libmachine: [stderr =====>] : 
	I0501 05:00:03.331367   14140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-options-374100 ).networkadapters[0]).ipaddresses[0]
	I0501 05:00:05.981999   14140 main.go:141] libmachine: [stdout =====>] : 172.28.222.197
	
	I0501 05:00:05.981999   14140 main.go:141] libmachine: [stderr =====>] : 
	I0501 05:00:05.981999   14140 provision.go:143] copyHostCerts
	I0501 05:00:05.983021   14140 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0501 05:00:05.983021   14140 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0501 05:00:05.983408   14140 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0501 05:00:05.984866   14140 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0501 05:00:05.984866   14140 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0501 05:00:05.985116   14140 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0501 05:00:05.986483   14140 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0501 05:00:05.986483   14140 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0501 05:00:05.986483   14140 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0501 05:00:05.987735   14140 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.cert-options-374100 san=[127.0.0.1 172.28.222.197 cert-options-374100 localhost minikube]
	I0501 05:00:06.153604   14140 provision.go:177] copyRemoteCerts
	I0501 05:00:06.168601   14140 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 05:00:06.168601   14140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-options-374100 ).state
	
	
	==> Docker <==
	May 01 05:01:56 kubernetes-upgrade-195400 systemd[1]: docker.service: Failed with result 'exit-code'.
	May 01 05:01:56 kubernetes-upgrade-195400 systemd[1]: Failed to start Docker Application Container Engine.
	May 01 05:01:56 kubernetes-upgrade-195400 cri-dockerd[1349]: time="2024-05-01T05:01:56Z" level=error msg="error getting RW layer size for container ID 'c2357be2231361364fce76ff51b4ae9d1131f6fe78b72703b147c20b015a06de': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/c2357be2231361364fce76ff51b4ae9d1131f6fe78b72703b147c20b015a06de/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	May 01 05:01:56 kubernetes-upgrade-195400 cri-dockerd[1349]: time="2024-05-01T05:01:56Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'c2357be2231361364fce76ff51b4ae9d1131f6fe78b72703b147c20b015a06de'"
	May 01 05:01:56 kubernetes-upgrade-195400 cri-dockerd[1349]: time="2024-05-01T05:01:56Z" level=error msg="error getting RW layer size for container ID '002af6c61dad38fdf11efa2b94434473c56d8c09754dde182d8b66817f424c45': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/002af6c61dad38fdf11efa2b94434473c56d8c09754dde182d8b66817f424c45/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	May 01 05:01:56 kubernetes-upgrade-195400 cri-dockerd[1349]: time="2024-05-01T05:01:56Z" level=error msg="Set backoffDuration to : 1m0s for container ID '002af6c61dad38fdf11efa2b94434473c56d8c09754dde182d8b66817f424c45'"
	May 01 05:01:56 kubernetes-upgrade-195400 cri-dockerd[1349]: time="2024-05-01T05:01:56Z" level=error msg="error getting RW layer size for container ID '21b5ea540078fc55d925b1f77d5e5bf9d9cf8a14877bd60d798c61ff4ebaa3e6': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/21b5ea540078fc55d925b1f77d5e5bf9d9cf8a14877bd60d798c61ff4ebaa3e6/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	May 01 05:01:56 kubernetes-upgrade-195400 cri-dockerd[1349]: time="2024-05-01T05:01:56Z" level=error msg="Set backoffDuration to : 1m0s for container ID '21b5ea540078fc55d925b1f77d5e5bf9d9cf8a14877bd60d798c61ff4ebaa3e6'"
	May 01 05:01:56 kubernetes-upgrade-195400 cri-dockerd[1349]: time="2024-05-01T05:01:56Z" level=error msg="error getting RW layer size for container ID '59112f4b5921294cf7202582c474f5460c7942d87268034743b6069daa7b9c51': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/59112f4b5921294cf7202582c474f5460c7942d87268034743b6069daa7b9c51/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	May 01 05:01:56 kubernetes-upgrade-195400 cri-dockerd[1349]: time="2024-05-01T05:01:56Z" level=error msg="Set backoffDuration to : 1m0s for container ID '59112f4b5921294cf7202582c474f5460c7942d87268034743b6069daa7b9c51'"
	May 01 05:01:56 kubernetes-upgrade-195400 cri-dockerd[1349]: time="2024-05-01T05:01:56Z" level=error msg="Unable to get docker version: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/version\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	May 01 05:01:56 kubernetes-upgrade-195400 cri-dockerd[1349]: time="2024-05-01T05:01:56Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	May 01 05:01:56 kubernetes-upgrade-195400 cri-dockerd[1349]: time="2024-05-01T05:01:56Z" level=error msg="error getting RW layer size for container ID '3e46b7508aa0fe3f5b71848d9d3af88c939caa79a6bddf468ebfd87c3bf42031': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/3e46b7508aa0fe3f5b71848d9d3af88c939caa79a6bddf468ebfd87c3bf42031/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	May 01 05:01:56 kubernetes-upgrade-195400 cri-dockerd[1349]: time="2024-05-01T05:01:56Z" level=error msg="Set backoffDuration to : 1m0s for container ID '3e46b7508aa0fe3f5b71848d9d3af88c939caa79a6bddf468ebfd87c3bf42031'"
	May 01 05:01:56 kubernetes-upgrade-195400 cri-dockerd[1349]: time="2024-05-01T05:01:56Z" level=error msg="error getting RW layer size for container ID '5078cbc153bd038685f6e4a7b53c9f40ad1defbcabe87cd81c12a214a66d8e1a': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/5078cbc153bd038685f6e4a7b53c9f40ad1defbcabe87cd81c12a214a66d8e1a/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	May 01 05:01:56 kubernetes-upgrade-195400 cri-dockerd[1349]: time="2024-05-01T05:01:56Z" level=error msg="Set backoffDuration to : 1m0s for container ID '5078cbc153bd038685f6e4a7b53c9f40ad1defbcabe87cd81c12a214a66d8e1a'"
	May 01 05:01:56 kubernetes-upgrade-195400 cri-dockerd[1349]: time="2024-05-01T05:01:56Z" level=error msg="error getting RW layer size for container ID 'b5117a7b7f02db6847aa9ccd848b816ab792209e8a9ce11cc9ad89c01f863aba': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/b5117a7b7f02db6847aa9ccd848b816ab792209e8a9ce11cc9ad89c01f863aba/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	May 01 05:01:56 kubernetes-upgrade-195400 cri-dockerd[1349]: time="2024-05-01T05:01:56Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'b5117a7b7f02db6847aa9ccd848b816ab792209e8a9ce11cc9ad89c01f863aba'"
	May 01 05:01:56 kubernetes-upgrade-195400 cri-dockerd[1349]: time="2024-05-01T05:01:56Z" level=error msg="error getting RW layer size for container ID 'c749b700214b51577cd07fc80e2b035918fd9fc4db94292bcdda73988f7b3145': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/c749b700214b51577cd07fc80e2b035918fd9fc4db94292bcdda73988f7b3145/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	May 01 05:01:56 kubernetes-upgrade-195400 cri-dockerd[1349]: time="2024-05-01T05:01:56Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'c749b700214b51577cd07fc80e2b035918fd9fc4db94292bcdda73988f7b3145'"
	May 01 05:01:56 kubernetes-upgrade-195400 cri-dockerd[1349]: time="2024-05-01T05:01:56Z" level=error msg="error getting RW layer size for container ID '099a9cab2b43676db5a6c3a7547a0a37cb91aaa0d8c7d9493cc49485e74cf4f3': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/099a9cab2b43676db5a6c3a7547a0a37cb91aaa0d8c7d9493cc49485e74cf4f3/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	May 01 05:01:56 kubernetes-upgrade-195400 cri-dockerd[1349]: time="2024-05-01T05:01:56Z" level=error msg="Set backoffDuration to : 1m0s for container ID '099a9cab2b43676db5a6c3a7547a0a37cb91aaa0d8c7d9493cc49485e74cf4f3'"
	May 01 05:01:56 kubernetes-upgrade-195400 systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
	May 01 05:01:56 kubernetes-upgrade-195400 systemd[1]: Stopped Docker Application Container Engine.
	May 01 05:01:56 kubernetes-upgrade-195400 systemd[1]: Starting Docker Application Container Engine...
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-01T05:01:58Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
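Both halves of that fallback fail for the same underlying reason: dockerd itself is down (see the docker.service restart loop in the Docker section above), so cri-dockerd has nothing to proxy. A minimal sketch of confirming that directly inside the guest, assuming the default socket path shown in the errors:

	# is the systemd unit alive, and if not, what killed it?
	sudo systemctl is-active docker || sudo journalctl -u docker -n 50 --no-pager
	# the engine's /_ping endpoint prints OK only when dockerd is healthy
	curl --silent --unix-socket /var/run/docker.sock http://localhost/_ping && echo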
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
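The refused connection on localhost:8443 follows from the runtime outage: with dockerd down, kubelet cannot start the kube-apiserver static pod, so nothing listens on the port. A hedged sketch of the usual follow-up checks from inside the guest (availability of ss on the Buildroot image is an assumption):

	# confirm nothing is bound to the apiserver port
	sudo ss -tlnp | grep ':8443' || echo 'no listener on 8443'
	# kubelet logs say why the static pods cannot start
	sudo journalctl -u kubelet --no-pager | tail -n 50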
	
	==> dmesg <==
	[  +0.199460] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[May 1 04:51] systemd-fstab-generator[1054]: Ignoring "noauto" option for root device
	[  +0.140294] kauditd_printk_skb: 73 callbacks suppressed
	[  +1.334159] systemd-fstab-generator[1094]: Ignoring "noauto" option for root device
	[  +0.247923] systemd-fstab-generator[1106]: Ignoring "noauto" option for root device
	[  +0.264734] systemd-fstab-generator[1120]: Ignoring "noauto" option for root device
	[  +3.178447] kauditd_printk_skb: 115 callbacks suppressed
	[  +0.240741] systemd-fstab-generator[1303]: Ignoring "noauto" option for root device
	[  +0.240101] systemd-fstab-generator[1315]: Ignoring "noauto" option for root device
	[  +0.268977] systemd-fstab-generator[1326]: Ignoring "noauto" option for root device
	[  +0.338283] systemd-fstab-generator[1341]: Ignoring "noauto" option for root device
	[ +11.698201] systemd-fstab-generator[1534]: Ignoring "noauto" option for root device
	[  +0.122779] kauditd_printk_skb: 80 callbacks suppressed
	[  +4.110457] systemd-fstab-generator[1763]: Ignoring "noauto" option for root device
	[  +4.702079] systemd-fstab-generator[1929]: Ignoring "noauto" option for root device
	[  +0.109646] kauditd_printk_skb: 73 callbacks suppressed
	[  +7.106509] kauditd_printk_skb: 62 callbacks suppressed
	[  +2.138230] systemd-fstab-generator[2756]: Ignoring "noauto" option for root device
	[ +10.908397] kauditd_printk_skb: 56 callbacks suppressed
	[  +5.099013] kauditd_printk_skb: 39 callbacks suppressed
	[May 1 04:58] systemd-fstab-generator[5278]: Ignoring "noauto" option for root device
	[  +0.765745] systemd-fstab-generator[5314]: Ignoring "noauto" option for root device
	[  +0.305485] systemd-fstab-generator[5326]: Ignoring "noauto" option for root device
	[  +0.317243] systemd-fstab-generator[5340]: Ignoring "noauto" option for root device
	[  +5.378673] kauditd_printk_skb: 89 callbacks suppressed
	
	
	==> kernel <==
	 05:02:57 up 13 min,  0 users,  load average: 0.04, 0.38, 0.30
	Linux kubernetes-upgrade-195400 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	May 01 05:02:51 kubernetes-upgrade-195400 kubelet[1936]: E0501 05:02:51.404217    1936 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"kubernetes-upgrade-195400\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-195400?timeout=10s\": dial tcp 172.28.213.192:8443: connect: connection refused"
	May 01 05:02:51 kubernetes-upgrade-195400 kubelet[1936]: E0501 05:02:51.405539    1936 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"kubernetes-upgrade-195400\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-195400?timeout=10s\": dial tcp 172.28.213.192:8443: connect: connection refused"
	May 01 05:02:51 kubernetes-upgrade-195400 kubelet[1936]: E0501 05:02:51.406698    1936 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"kubernetes-upgrade-195400\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-195400?timeout=10s\": dial tcp 172.28.213.192:8443: connect: connection refused"
	May 01 05:02:51 kubernetes-upgrade-195400 kubelet[1936]: E0501 05:02:51.407901    1936 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"kubernetes-upgrade-195400\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-195400?timeout=10s\": dial tcp 172.28.213.192:8443: connect: connection refused"
	May 01 05:02:51 kubernetes-upgrade-195400 kubelet[1936]: E0501 05:02:51.407944    1936 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	May 01 05:02:52 kubernetes-upgrade-195400 kubelet[1936]: E0501 05:02:52.355926    1936 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 4m8.451880113s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	May 01 05:02:52 kubernetes-upgrade-195400 kubelet[1936]: I0501 05:02:52.525895    1936 status_manager.go:853] "Failed to get status for pod" podUID="68397fd63c9449ba486d456ece2dfe1e" pod="kube-system/kube-apiserver-kubernetes-upgrade-195400" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-kubernetes-upgrade-195400\": dial tcp 172.28.213.192:8443: connect: connection refused"
	May 01 05:02:53 kubernetes-upgrade-195400 kubelet[1936]: E0501 05:02:53.239145    1936 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-195400?timeout=10s\": dial tcp 172.28.213.192:8443: connect: connection refused" interval="7s"
	May 01 05:02:54 kubernetes-upgrade-195400 kubelet[1936]: E0501 05:02:54.737548    1936 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/kube-apiserver-kubernetes-upgrade-195400.17cb449d535fa6f6\": dial tcp 172.28.213.192:8443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-kubernetes-upgrade-195400.17cb449d535fa6f6  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-kubernetes-upgrade-195400,UID:68397fd63c9449ba486d456ece2dfe1e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://172.28.213.192:8443/readyz\": dial tcp 172.28.213.192:8443: connect: connection refused,Source:EventSource{Component:kubelet,Host:kubernetes-upgrade-195400,},FirstTimestamp:2024-05-01 04:58:45.643937526 +0000 UTC m=+433.439626171,LastTimestamp:2024-05-01 04:58:47.643355233 +0000 UTC m=+435.439043978,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:kubernetes-upgrade-195400,}"
	May 01 05:02:56 kubernetes-upgrade-195400 kubelet[1936]: E0501 05:02:56.816309    1936 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	May 01 05:02:56 kubernetes-upgrade-195400 kubelet[1936]: E0501 05:02:56.816464    1936 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	May 01 05:02:56 kubernetes-upgrade-195400 kubelet[1936]: E0501 05:02:56.816623    1936 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	May 01 05:02:56 kubernetes-upgrade-195400 kubelet[1936]: E0501 05:02:56.820298    1936 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	May 01 05:02:56 kubernetes-upgrade-195400 kubelet[1936]: E0501 05:02:56.820386    1936 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	May 01 05:02:56 kubernetes-upgrade-195400 kubelet[1936]: E0501 05:02:56.820197    1936 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	May 01 05:02:56 kubernetes-upgrade-195400 kubelet[1936]: E0501 05:02:56.821819    1936 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	May 01 05:02:56 kubernetes-upgrade-195400 kubelet[1936]: E0501 05:02:56.818205    1936 kubelet.go:2910] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	May 01 05:02:56 kubernetes-upgrade-195400 kubelet[1936]: E0501 05:02:56.825719    1936 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	May 01 05:02:56 kubernetes-upgrade-195400 kubelet[1936]: E0501 05:02:56.825756    1936 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	May 01 05:02:56 kubernetes-upgrade-195400 kubelet[1936]: E0501 05:02:56.826140    1936 kubelet.go:1435] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	May 01 05:02:56 kubernetes-upgrade-195400 kubelet[1936]: E0501 05:02:56.826648    1936 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	May 01 05:02:56 kubernetes-upgrade-195400 kubelet[1936]: E0501 05:02:56.826723    1936 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	May 01 05:02:56 kubernetes-upgrade-195400 kubelet[1936]: I0501 05:02:56.826859    1936 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	May 01 05:02:56 kubernetes-upgrade-195400 kubelet[1936]: E0501 05:02:56.830979    1936 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	May 01 05:02:56 kubernetes-upgrade-195400 kubelet[1936]: E0501 05:02:56.831108    1936 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	

-- /stdout --
** stderr ** 
	W0501 05:00:09.004319    6380 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0501 05:00:56.277557    6380 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0501 05:00:56.313124    6380 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0501 05:00:56.349753    6380 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0501 05:00:56.387543    6380 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0501 05:00:56.426483    6380 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0501 05:01:56.551999    6380 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0501 05:01:56.594208    6380 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0501 05:01:56.642648    6380 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p kubernetes-upgrade-195400 -n kubernetes-upgrade-195400
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p kubernetes-upgrade-195400 -n kubernetes-upgrade-195400: exit status 2 (12.5974459s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0501 05:02:57.707727    5852 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "kubernetes-upgrade-195400" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-195400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-195400
E0501 05:03:38.057628   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-195400: (1m3.8863859s)
--- FAIL: TestKubernetesUpgrade (1600.84s)

TestPause/serial/SecondStartNoReconfiguration (10800.526s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-377300 --alsologtostderr -v=1 --driver=hyperv
E0501 05:08:38.052746   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\client.crt: The system cannot find the path specified.
panic: test timed out after 3h0m0s
running tests:
	TestNetworkPlugins (31m23s)
	TestNetworkPlugins/group/auto (2m42s)
	TestNetworkPlugins/group/auto/Start (2m42s)
	TestNoKubernetes (4m42s)
	TestNoKubernetes/serial (4m42s)
	TestNoKubernetes/serial/StartWithK8s (4m41s)
	TestPause (5m59s)
	TestPause/serial (5m59s)
	TestPause/serial/SecondStartNoReconfiguration (43s)
	TestRunningBinaryUpgrade (10m38s)
	TestStartStop (10m38s)

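This panic comes from the Go test runner's own watchdog rather than from minikube: when the -timeout given to go test elapses, the runner panics with "test timed out", lists the still-running tests above, and dumps every goroutine (everything below). Roughly, in the harness's terms, with illustrative flags rather than the exact Jenkins invocation:

	# after 3h the test binary panics and emits the goroutine dump seen below
	go test ./test/integration -run 'TestPause|TestNoKubernetes|TestNetworkPlugins' -timeout 3h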
goroutine 2255 [running]:
testing.(*M).startAlarm.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/time/sleep.go:177 +0x2d

goroutine 1 [chan receive, 3 minutes]:
testing.tRunner.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0005ccea0, 0xc0008c3bb0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1695 +0x134
testing.runTests(0xc000886540, {0x4fdd540, 0x2a, 0x2a}, {0x2ca8526?, 0xae806f?, 0x5000760?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc0009e1ae0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc0009e1ae0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0x195

goroutine 50 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc000071080)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

goroutine 172 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0008f7490, 0x3c)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x2744be0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00205c000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0008f74c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000632fa0, {0x3c123a0, 0xc002045560}, 0x1, 0xc000830180)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000632fa0, 0x3b9aca00, 0x0, 0x1, 0xc000830180)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 144
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 23 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1174 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 42
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1170 +0x171

goroutine 2264 [syscall, locked to thread]:
syscall.SyscallN(0x72743a74656c6562?, {0xc002547b20?, 0x72743a7964616572?, 0x61772d2d2065736c?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x90000c002547b4d?, 0xc002547b80?, 0xa3fdd6?, 0x508dbc0?, 0xc002547c08?, 0xa32985?, 0x1ae40620598?, 0x3c35a77?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x63c, {0xc0008c0779?, 0x1887, 0xae417f?}, 0x0?, 0x800000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:442
syscall.Read(0xc00205ac88?, {0xc0008c0779?, 0xa6c1be?, 0x4000?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc00205ac88, {0xc0008c0779, 0x1887, 0x1887})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc000809008, {0xc0008c0779?, 0xc002547d98?, 0x2000?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00211e180, {0x3c10f60, 0xc0003630b0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3c110a0, 0xc00211e180}, {0x3c10f60, 0xc0003630b0}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3c110a0, 0xc00211e180})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4f91840?, {0x3c110a0?, 0xc00211e180?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x3c110a0, 0xc00211e180}, {0x3c11020, 0xc000809008}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0001f8600?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2207
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0xa2b

goroutine 173 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3c35da0, 0xc000830180}, 0xc000ad5f50, 0xc000ad5f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3c35da0, 0xc000830180}, 0x90?, 0xc000ad5f50, 0xc000ad5f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3c35da0?, 0xc000830180?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000ad5fd0?, 0xbbe404?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 144
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 2263 [syscall, locked to thread]:
syscall.SyscallN(0x10?, {0xc003733b20?, 0xa47ea5?, 0x508dbc0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc003733b88?, 0xc003733b80?, 0xa3fdd6?, 0x508dbc0?, 0xc003733c08?, 0xa32985?, 0x1ae40620108?, 0xc000050641?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x5cc, {0xc000ab01e7?, 0x219, 0xae417f?}, 0x0?, 0x800000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:442
syscall.Read(0xc00205a788?, {0xc000ab01e7?, 0xc003733d78?, 0x400?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc00205a788, {0xc000ab01e7, 0x219, 0x219})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc0008081e0, {0xc000ab01e7?, 0x0?, 0x68?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00211e150, {0x3c10f60, 0xc000a1a020})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3c110a0, 0xc00211e150}, {0x3c10f60, 0xc000a1a020}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0x4f038e0?, {0x3c110a0, 0xc00211e150})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4f91840?, {0x3c110a0?, 0xc00211e150?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x3c110a0, 0xc00211e150}, {0x3c11020, 0xc0008081e0}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0x36baff8?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2207
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0xa2b

goroutine 2035 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc000941130)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000837380)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000837380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000837380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000837380, 0xc00075e400)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1900
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2247 [syscall, 5 minutes, locked to thread]:
syscall.SyscallN(0x7ffa5bd84de0?, {0xc0020a1a98?, 0x3?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x624, 0xffffffff)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc00283f2f0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc00086bce0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc00086bce0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc000837520, 0xc00086bce0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateStartWithK8S({0x3c35be0, 0xc0006303f0}, 0xc000837520, {0xc0027a88b8, 0x13})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/no_kubernetes_test.go:95 +0x217
k8s.io/minikube/test/integration.TestNoKubernetes.func1.1(0xc000837520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/no_kubernetes_test.go:68 +0x43
testing.tRunner(0xc000837520, 0xc00289c700)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2258
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 174 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 173
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 2206 [syscall, 3 minutes, locked to thread]:
syscall.SyscallN(0x7ffa5bd84de0?, {0xc00372dbd0?, 0x3?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x7e0, 0xffffffff)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc00283ede0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000a90420)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc000a90420)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc002284b60, 0xc000a90420)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0xc002284b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:112 +0x52
testing.tRunner(0xc002284b60, 0xc000a66270)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1901
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 143 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00205c120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 131
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 144 [chan receive, 173 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0008f74c0, 0xc000830180)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 131
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 2207 [syscall, locked to thread]:
syscall.SyscallN(0x7ffa5bd84de0?, {0xc000aefa10?, 0x3?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x6b4, 0xffffffff)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc002ba4600)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000a902c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc000a902c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc002284820, 0xc000a902c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateStartNoReconfigure({0x3c35be0, 0xc000630310}, 0xc002284820, {0xc00075a5f0?, 0xc014ab2660?})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:92 +0x245
k8s.io/minikube/test/integration.TestPause.func1.1(0xc002284820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:66 +0x43
testing.tRunner(0xc002284820, 0xc000a74080)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2240
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2144 [chan receive, 11 minutes]:
testing.(*testContext).waitParallel(0xc000941130)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0005cd380)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0005cd380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0005cd380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0005cd380, 0xc00289c5c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2138
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2252 [syscall, 3 minutes, locked to thread]:
syscall.SyscallN(0xc000a84410?, {0xc000af5b20?, 0xa47ea5?, 0x508dbc0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc0026ec501?, 0xc000af5b80?, 0xa3fdd6?, 0x508dbc0?, 0xc000af5c08?, 0xa3281b?, 0xa28ba6?, 0xc0027c5f80?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x5e0, {0xc000ab19e6?, 0x21a, 0xae417f?}, 0x0?, 0x800000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:442
syscall.Read(0xc00205a008?, {0xc000ab19e6?, 0xa6c1be?, 0x400?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc00205a008, {0xc000ab19e6, 0x21a, 0x21a})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc000a1a0a0, {0xc000ab19e6?, 0xc0020a5340?, 0x67?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc000a66330, {0x3c10f60, 0xc0006f41e0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3c110a0, 0xc000a66330}, {0x3c10f60, 0xc0006f41e0}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0xc000af5e78?, {0x3c110a0, 0xc000a66330})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4f91840?, {0x3c110a0?, 0xc000a66330?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x3c110a0, 0xc000a66330}, {0x3c11020, 0xc000a1a0a0}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc000054ea0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2206
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0xa2b

goroutine 1051 [chan send, 149 minutes]:
os/exec.(*Cmd).watchCtx(0xc002b3c160, 0xc002b38480)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1050
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x9f3

goroutine 886 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc002062b90, 0x36)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x2744be0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001ffb200)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc002062bc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc002c3df10, {0x3c123a0, 0xc002045cb0}, 0x1, 0xc000830180)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc002c3df10, 0x3b9aca00, 0x0, 0x1, 0xc000830180)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 878
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 2153 [syscall, 5 minutes, locked to thread]:
syscall.SyscallN(0x0?, {0xc002885b20?, 0xa47ea5?, 0x508dbc0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc002885b41?, 0xc002885b80?, 0xa3fdd6?, 0x508dbc0?, 0xc002885c08?, 0xa32985?, 0x1ae40620598?, 0xc000ae7f77?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x3c4, {0xc0020f8211?, 0x1def, 0xae417f?}, 0x0?, 0x800000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:442
syscall.Read(0xc002963408?, {0xc0020f8211?, 0xa6c1be?, 0x4000?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc002963408, {0xc0020f8211, 0x1def, 0x1def})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc0008090f8, {0xc0020f8211?, 0xc002885d98?, 0x1e39?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0009cdc20, {0x3c10f60, 0xc0006f43b8})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3c110a0, 0xc0009cdc20}, {0x3c10f60, 0xc0006f43b8}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3c110a0, 0xc0009cdc20})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4f91840?, {0x3c110a0?, 0xc0009cdc20?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x3c110a0, 0xc0009cdc20}, {0x3c11020, 0xc0008090f8}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc002b8ee40?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2020
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0xa2b

goroutine 1905 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc000941130)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000837040)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000837040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000837040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000837040, 0xc00075e300)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1900
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2254 [select, 3 minutes]:
os/exec.(*Cmd).watchCtx(0xc000a90420, 0xc000a181e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2206
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x9f3

goroutine 1945 [chan receive, 32 minutes]:
testing.(*T).Run(0xc002284000, {0x2c4c9f1?, 0xa9f48d?}, 0xc00011a150)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc002284000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc002284000, 0x36bb098)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2034 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc000941130)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0008371e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0008371e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0008371e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0008371e0, 0xc00075e380)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1900
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 1243 [chan send, 144 minutes]:
os/exec.(*Cmd).watchCtx(0xc002425760, 0xc002b39b60)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 845
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x9f3

goroutine 726 [IO wait, 161 minutes]:
internal/poll.runtime_pollWait(0x1ae65e6d7a8, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xa3fdd6?, 0x508dbc0?, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.execIO(0xc002160ca0, 0xc002b99bb0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:175 +0xe6
internal/poll.(*FD).acceptOne(0xc002160c88, 0x268, {0xc00078b0e0?, 0x0?, 0x2000000000?}, 0xc000680008?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:944 +0x67
internal/poll.(*FD).Accept(0xc002160c88, 0xc002b99d90)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:978 +0x1bc
net.(*netFD).accept(0xc002160c88)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/fd_windows.go:178 +0x54
net.(*TCPListener).accept(0xc00010fd00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc00010fd00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0009420f0, {0x3c28e40, 0xc00010fd00})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/server.go:3255 +0x33e
net/http.(*Server).ListenAndServe(0xc0009420f0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/server.go:3184 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc0004eeea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 723
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129

goroutine 887 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3c35da0, 0xc000830180}, 0xc002bc5f50, 0xc002bc5f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3c35da0, 0xc000830180}, 0xa0?, 0xc002bc5f50, 0xc002bc5f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3c35da0?, 0xc000830180?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc002bc5fd0?, 0xbbe404?, 0xc0000559e0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 878
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 2251 [syscall, 5 minutes, locked to thread]:
syscall.SyscallN(0xc000a99000?, {0xc002bbfb20?, 0xa47ea5?, 0x508dbc0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x4d?, 0xc002bbfb80?, 0xa3fdd6?, 0x508dbc0?, 0xc002bbfc08?, 0xa32985?, 0x1ae40620a28?, 0x4d?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x610, {0xc0024ec26f?, 0x591, 0xae417f?}, 0x0?, 0x800000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:442
syscall.Read(0xc002962f08?, {0xc0024ec26f?, 0xa6c1be?, 0x800?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc002962f08, {0xc0024ec26f, 0x591, 0x591})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc0008090d0, {0xc0024ec26f?, 0xc002bbfd98?, 0x20c?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0009cdbf0, {0x3c10f60, 0xc000809118})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3c110a0, 0xc0009cdbf0}, {0x3c10f60, 0xc000809118}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3c110a0, 0xc0009cdbf0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4f91840?, {0x3c110a0?, 0xc0009cdbf0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x3c110a0, 0xc0009cdbf0}, {0x3c11020, 0xc0008090d0}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0x0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2020
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0xa2b

goroutine 877 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001ffb560)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 798
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2154 [select, 5 minutes]:
os/exec.(*Cmd).watchCtx(0xc000a90160, 0xc000054d20)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2020
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x9f3

goroutine 888 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 887
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 878 [chan receive, 151 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc002062bc0, 0xc000830180)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 798
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 2253 [syscall, 3 minutes, locked to thread]:
syscall.SyscallN(0x0?, {0xc0023f9b20?, 0xa47ea5?, 0x508dbc0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xa32c41?, 0xc0023f9b80?, 0xa3fdd6?, 0x508dbc0?, 0xc0023f9c08?, 0xa32985?, 0x1ae40620eb8?, 0xc0023f9d67?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x648, {0xc0008bdd25?, 0x2db, 0xae417f?}, 0x0?, 0x800000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:442
syscall.Read(0xc00205a508?, {0xc0008bdd25?, 0x0?, 0x2000?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc00205a508, {0xc0008bdd25, 0x2db, 0x2db})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc000a1a0f8, {0xc0008bdd25?, 0xc0023f9d30?, 0xe16?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc000a66360, {0x3c10f60, 0xc000a1a140})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3c110a0, 0xc000a66360}, {0x3c10f60, 0xc000a1a140}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0xc0023f9e78?, {0x3c110a0, 0xc000a66360})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4f91840?, {0x3c110a0?, 0xc000a66360?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x3c110a0, 0xc000a66360}, {0x3c11020, 0xc000a1a0f8}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0000545a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2206
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0xa2b

goroutine 2143 [chan receive, 11 minutes]:
testing.(*testContext).waitParallel(0xc000941130)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0005cc4e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0005cc4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0005cc4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0005cc4e0, 0xc00289c540)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2138
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2249 [syscall, 5 minutes, locked to thread]:
syscall.SyscallN(0x0?, {0xc0023f5b20?, 0xa47ea5?, 0x508dbc0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x2000?, 0xc0023f5b80?, 0xa3fdd6?, 0x508dbc0?, 0xc0023f5c08?, 0xa3281b?, 0x1ae40620eb8?, 0xc000ae7f35?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x7ac, {0xc000ab0d3a?, 0x2c6, 0xc000ab0c00?}, 0x0?, 0x800000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:442
syscall.Read(0xc002962288?, {0xc000ab0d3a?, 0xa6c1be?, 0x400?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc002962288, {0xc000ab0d3a, 0x2c6, 0x2c6})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc0008090a8, {0xc000ab0d3a?, 0xc002e74540?, 0x13a?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0009cd800, {0x3c10f60, 0xc0006f4380})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3c110a0, 0xc0009cd800}, {0x3c10f60, 0xc0006f4380}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0xc0023f5e78?, {0x3c110a0, 0xc0009cd800})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4f91840?, {0x3c110a0?, 0xc0009cd800?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x3c110a0, 0xc0009cd800}, {0x3c11020, 0xc0008090a8}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc000a18360?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2247
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0xa2b

goroutine 2139 [chan receive, 11 minutes]:
testing.(*testContext).waitParallel(0xc000941130)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000837860)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000837860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000837860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc000837860, 0xc00289c440)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2138
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2248 [syscall, 3 minutes, locked to thread]:
syscall.SyscallN(0x0?, {0xc002413b20?, 0xc0001f8738?, 0xc002413b60?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc002413ba8?, 0xb808f5?, 0x13?, 0xc0023fe000?, 0xc002413c08?, 0xa3281b?, 0xa28ba6?, 0x10?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x638, {0xc0024ed23d?, 0x5c3, 0xae417f?}, 0x0?, 0x800000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:442
syscall.Read(0xc002289908?, {0xc0024ed23d?, 0xa6c1be?, 0x800?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc002289908, {0xc0024ed23d, 0x5c3, 0x5c3})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc000809038, {0xc0024ed23d?, 0xc002413d98?, 0x23c?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0009cd7d0, {0x3c10f60, 0xc0003631b0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3c110a0, 0xc0009cd7d0}, {0x3c10f60, 0xc0003631b0}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0xa98277?, {0x3c110a0, 0xc0009cd7d0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4f91840?, {0x3c110a0?, 0xc0009cd7d0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x3c110a0, 0xc0009cd7d0}, {0x3c11020, 0xc000809038}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc002b38240?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2247
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0xa2b

goroutine 2142 [chan receive, 11 minutes]:
testing.(*testContext).waitParallel(0xc000941130)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000837d40)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000837d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000837d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc000837d40, 0xc00289c500)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2138
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 1946 [chan receive, 5 minutes]:
testing.(*T).Run(0xc0022841a0, {0x2c4def5?, 0x45d964b800?}, 0xc00211e2a0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNoKubernetes(0xc0022841a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/no_kubernetes_test.go:44 +0x1a5
testing.tRunner(0xc0022841a0, 0x36bb0a0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2258 [chan receive, 5 minutes]:
testing.(*T).Run(0xc0022849c0, {0x2c5b9e2?, 0x3d4d4ad?}, 0xc00289c700)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNoKubernetes.func1(0xc0022849c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/no_kubernetes_test.go:67 +0x218
testing.tRunner(0xc0022849c0, 0xc00211e2a0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1946
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2018 [chan receive, 11 minutes]:
testing.(*T).Run(0xc002284d00, {0x2c4c9f1?, 0xb77333?}, 0x36bb2b8)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc002284d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc002284d00, 0x36bb0e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 1901 [chan receive, 3 minutes]:
testing.(*T).Run(0xc0008364e0, {0x2c4c9f6?, 0x3c0af28?}, 0xc000a66270)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0008364e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:111 +0x5de
testing.tRunner(0xc0008364e0, 0xc00075e000)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1900
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 1947 [chan receive, 6 minutes]:
testing.(*T).Run(0xc002284680, {0x2c4def5?, 0xd18c2e2800?}, 0xc0009cd050)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestPause(0xc002284680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:41 +0x159
testing.tRunner(0xc002284680, 0x36bb0b0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2141 [chan receive, 11 minutes]:
testing.(*testContext).waitParallel(0xc000941130)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000837ba0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000837ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000837ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc000837ba0, 0xc00289c4c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2138
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 1900 [chan receive, 32 minutes]:
testing.tRunner.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0008361a0, 0xc00011a150)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 1945
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2138 [chan receive, 11 minutes]:
testing.tRunner.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0008376c0, 0x36bb2b8)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2018
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 1904 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc000941130)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000836ea0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000836ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000836ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000836ea0, 0xc00075e200)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1900
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 1903 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc000941130)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000836d00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000836d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000836d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000836d00, 0xc00075e100)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1900
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 1902 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc000941130)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0008369c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0008369c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0008369c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0008369c0, 0xc00075e080)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1900
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 1980 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc000941130)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0005cc9c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0005cc9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0005cc9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0005cc9c0, 0xc0006ad080)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1900
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2020 [syscall, 5 minutes, locked to thread]:
syscall.SyscallN(0x7ffa5bd84de0?, {0xc000aeb960?, 0x3?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x600, 0xffffffff)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc00283f920)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000a90160)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc000a90160)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc002285040, 0xc000a90160)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestRunningBinaryUpgrade(0xc002285040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:130 +0x788
testing.tRunner(0xc002285040, 0x36bb0c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 1981 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc000941130)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0005ccd00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0005ccd00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0005ccd00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0005ccd00, 0xc0006ad100)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1900
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2240 [chan receive]:
testing.(*T).Run(0xc0005cd860, {0x2c8b8f2?, 0x63?}, 0xc000a74080)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestPause.func1(0xc0005cd860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:65 +0x1ee
testing.tRunner(0xc0005cd860, 0xc0009cd050)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1947
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2250 [select, 5 minutes]:
os/exec.(*Cmd).watchCtx(0xc00086bce0, 0xc002760360)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2247
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x9f3

goroutine 2265 [select]:
os/exec.(*Cmd).watchCtx(0xc000a902c0, 0xc000a18000)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2207
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x9f3

goroutine 2140 [chan receive, 11 minutes]:
testing.(*testContext).waitParallel(0xc000941130)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000837a00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000837a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000837a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc000837a00, 0xc00289c480)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2138
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390


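Most of the goroutines parked in the dump above fall into two recurring patterns: test goroutines blocked in testing.(*testContext).waitParallel (parallel subtests queued behind the -test.parallel limit), and os/exec helper goroutines (watchCtx plus the stdout/stderr pipe-copy loops) belonging to commands whose (*Cmd).Wait was never reached. A minimal sketch of the second pattern, in hypothetical code that is not taken from the minikube test helpers:

	// Hypothetical sketch: Start spawns a watchCtx goroutine for the context,
	// and redirecting stdout into a bytes.Buffer makes it spawn a pipe-copy
	// goroutine as well. If the caller never reaches cmd.Wait(), the copy
	// goroutine stays parked in syscall.ReadFile and, once the context fires,
	// watchCtx blocks sending its result, matching the "syscall, locked to
	// thread" and "chan send" stacks captured above.
	package main

	import (
		"bytes"
		"context"
		"os/exec"
		"time"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Placeholder long-running child; any process that outlives the test works.
		cmd := exec.CommandContext(ctx, "ping", "-n", "60", "127.0.0.1")
		var out bytes.Buffer
		cmd.Stdout = &out // non-*os.File writer, so Start creates a pipe plus a copy goroutine
		if err := cmd.Start(); err != nil {
			return
		}
		// Skipping this Wait leaks both helper goroutines; they then appear in
		// goroutine dumps exactly like the stacks in this report.
		_ = cmd.Wait()
	}

Read this way, the long-parked "chan send, 149 minutes" watchCtx entries point at commands whose owning test goroutine stopped making progress, rather than at a fault in os/exec itself.
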
Test pass (153/201)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 17.23
4 TestDownloadOnly/v1.20.0/preload-exists 0.09
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.36
9 TestDownloadOnly/v1.20.0/DeleteAll 1.3
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 1.27
12 TestDownloadOnly/v1.30.0/json-events 11.63
13 TestDownloadOnly/v1.30.0/preload-exists 0
16 TestDownloadOnly/v1.30.0/kubectl 0
17 TestDownloadOnly/v1.30.0/LogsDuration 0.32
18 TestDownloadOnly/v1.30.0/DeleteAll 1.31
19 TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds 1.24
21 TestBinaryMirror 7.45
22 TestOffline 290.73
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.31
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.33
27 TestAddons/Setup 414
30 TestAddons/parallel/Ingress 70.24
31 TestAddons/parallel/InspektorGadget 28.05
32 TestAddons/parallel/MetricsServer 22.78
33 TestAddons/parallel/HelmTiller 38.94
35 TestAddons/parallel/CSI 109.45
36 TestAddons/parallel/Headlamp 41.63
37 TestAddons/parallel/CloudSpanner 21.48
38 TestAddons/parallel/LocalPath 32.79
39 TestAddons/parallel/NvidiaDevicePlugin 21.38
40 TestAddons/parallel/Yakd 5.02
43 TestAddons/serial/GCPAuth/Namespaces 0.36
44 TestAddons/StoppedEnableDisable 55.25
45 TestCertOptions 490.78
46 TestCertExpiration 902.92
47 TestDockerFlags 662.44
48 TestForceSystemdFlag 563.2
49 TestForceSystemdEnv 526.77
56 TestErrorSpam/start 17.88
57 TestErrorSpam/status 38
58 TestErrorSpam/pause 23.48
59 TestErrorSpam/unpause 23.73
60 TestErrorSpam/stop 56.97
63 TestFunctional/serial/CopySyncFile 0.04
64 TestFunctional/serial/StartWithProxy 244.15
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 128.24
67 TestFunctional/serial/KubeContext 0.14
68 TestFunctional/serial/KubectlGetPods 0.26
71 TestFunctional/serial/CacheCmd/cache/add_remote 26.87
72 TestFunctional/serial/CacheCmd/cache/add_local 11.41
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.3
74 TestFunctional/serial/CacheCmd/cache/list 0.29
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 9.52
76 TestFunctional/serial/CacheCmd/cache/cache_reload 36.81
77 TestFunctional/serial/CacheCmd/cache/delete 0.59
78 TestFunctional/serial/MinikubeKubectlCmd 0.59
80 TestFunctional/serial/ExtraConfig 127.11
81 TestFunctional/serial/ComponentHealth 0.19
82 TestFunctional/serial/LogsCmd 8.77
83 TestFunctional/serial/LogsFileCmd 10.91
84 TestFunctional/serial/InvalidService 21.58
90 TestFunctional/parallel/StatusCmd 43.61
94 TestFunctional/parallel/ServiceCmdConnect 30.39
95 TestFunctional/parallel/AddonsCmd 0.84
96 TestFunctional/parallel/PersistentVolumeClaim 42.7
98 TestFunctional/parallel/SSHCmd 24.11
99 TestFunctional/parallel/CpCmd 59.97
100 TestFunctional/parallel/MySQL 67.78
101 TestFunctional/parallel/FileSync 10.05
102 TestFunctional/parallel/CertSync 64.34
106 TestFunctional/parallel/NodeLabels 0.18
108 TestFunctional/parallel/NonActiveRuntimeDisabled 11.59
110 TestFunctional/parallel/License 3.65
111 TestFunctional/parallel/ServiceCmd/DeployApp 18.42
112 TestFunctional/parallel/Version/short 0.28
113 TestFunctional/parallel/Version/components 8.46
114 TestFunctional/parallel/ImageCommands/ImageListShort 7.97
115 TestFunctional/parallel/ImageCommands/ImageListTable 7.48
116 TestFunctional/parallel/ImageCommands/ImageListJson 7.41
117 TestFunctional/parallel/ImageCommands/ImageListYaml 7.67
118 TestFunctional/parallel/ImageCommands/ImageBuild 29.55
119 TestFunctional/parallel/ImageCommands/Setup 4.76
120 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 24.41
121 TestFunctional/parallel/ServiceCmd/List 13.59
122 TestFunctional/parallel/ServiceCmd/JSONOutput 13.45
123 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 21.16
125 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 30.9
127 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 10.18
129 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
131 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 15.91
133 TestFunctional/parallel/ImageCommands/ImageSaveToFile 10.6
139 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.13
140 TestFunctional/parallel/ProfileCmd/profile_not_create 12.33
141 TestFunctional/parallel/ImageCommands/ImageRemove 17.59
142 TestFunctional/parallel/ProfileCmd/profile_list 12.13
143 TestFunctional/parallel/ProfileCmd/profile_json_output 11.92
144 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 20.07
145 TestFunctional/parallel/DockerEnv/powershell 48.2
146 TestFunctional/parallel/UpdateContextCmd/no_changes 2.94
147 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 2.5
148 TestFunctional/parallel/UpdateContextCmd/no_clusters 2.47
149 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 11.11
150 TestFunctional/delete_addon-resizer_images 0.5
151 TestFunctional/delete_my-image_image 0.18
152 TestFunctional/delete_minikube_cached_images 0.19
156 TestMultiControlPlane/serial/StartCluster 719.06
157 TestMultiControlPlane/serial/DeployApp 12.21
160 TestMultiControlPlane/serial/NodeLabels 0.19
161 TestMultiControlPlane/serial/HAppyAfterClusterStart 29.02
164 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 21.3
166 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 28.78
170 TestImageBuild/serial/Setup 200.06
171 TestImageBuild/serial/NormalBuild 9.81
172 TestImageBuild/serial/BuildWithBuildArg 9.12
173 TestImageBuild/serial/BuildWithDockerIgnore 7.8
174 TestImageBuild/serial/BuildWithSpecifiedDockerfile 7.62
178 TestJSONOutput/start/Command 243.69
179 TestJSONOutput/start/Audit 0
181 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/pause/Command 8.01
185 TestJSONOutput/pause/Audit 0
187 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/unpause/Command 7.93
191 TestJSONOutput/unpause/Audit 0
193 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/stop/Command 40.52
197 TestJSONOutput/stop/Audit 0
199 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
201 TestErrorJSONOutput 1.57
206 TestMainNoArgs 0.29
207 TestMinikubeProfile 529.34
210 TestMountStart/serial/StartWithMountFirst 156.38
211 TestMountStart/serial/VerifyMountFirst 9.59
212 TestMountStart/serial/StartWithMountSecond 157.76
213 TestMountStart/serial/VerifyMountSecond 9.67
214 TestMountStart/serial/DeleteFirst 27.56
215 TestMountStart/serial/VerifyMountPostDelete 9.55
216 TestMountStart/serial/Stop 30.29
217 TestMountStart/serial/RestartStopped 118.4
218 TestMountStart/serial/VerifyMountPostStop 9.41
221 TestMultiNode/serial/FreshStart2Nodes 428.94
222 TestMultiNode/serial/DeployApp2Nodes 9.04
224 TestMultiNode/serial/AddNode 231.73
225 TestMultiNode/serial/MultiNodeLabels 0.18
226 TestMultiNode/serial/ProfileList 9.84
227 TestMultiNode/serial/CopyFile 362
228 TestMultiNode/serial/StopNode 77.45
229 TestMultiNode/serial/StartAfterStop 185.84
235 TestScheduledStopWindows 331.39
244 TestStoppedBinaryUpgrade/Setup 0.68
256 TestStoppedBinaryUpgrade/Upgrade 973.81
257 TestStoppedBinaryUpgrade/MinikubeLogs 9.77
TestDownloadOnly/v1.20.0/json-events (17.23s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-146700 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-146700 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv: (17.2290205s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (17.23s)

TestDownloadOnly/v1.20.0/preload-exists (0.09s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.09s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.36s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-146700
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-146700: exit status 85 (354.8414ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-146700 | minikube6\jenkins | v1.33.0 | 01 May 24 02:08 UTC |          |
	|         | -p download-only-146700        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/01 02:08:55
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0501 02:08:55.868310    1660 out.go:291] Setting OutFile to fd 620 ...
	I0501 02:08:55.869439    1660 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:08:55.869439    1660 out.go:304] Setting ErrFile to fd 624...
	I0501 02:08:55.869439    1660 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0501 02:08:55.884931    1660 root.go:314] Error reading config file at C:\Users\jenkins.minikube6\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube6\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0501 02:08:55.898478    1660 out.go:298] Setting JSON to true
	I0501 02:08:55.913649    1660 start.go:129] hostinfo: {"hostname":"minikube6","uptime":102390,"bootTime":1714426945,"procs":187,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0501 02:08:55.913931    1660 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0501 02:08:55.920707    1660 out.go:97] [download-only-146700] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0501 02:08:55.920951    1660 notify.go:220] Checking for updates...
	I0501 02:08:55.924023    1660 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	W0501 02:08:55.921159    1660 preload.go:294] Failed to list preload files: open C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0501 02:08:55.928835    1660 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0501 02:08:55.931812    1660 out.go:169] MINIKUBE_LOCATION=18779
	I0501 02:08:55.934091    1660 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0501 02:08:55.943385    1660 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0501 02:08:55.944973    1660 driver.go:392] Setting default libvirt URI to qemu:///system
	I0501 02:09:01.492856    1660 out.go:97] Using the hyperv driver based on user configuration
	I0501 02:09:01.492856    1660 start.go:297] selected driver: hyperv
	I0501 02:09:01.492856    1660 start.go:901] validating driver "hyperv" against <nil>
	I0501 02:09:01.493399    1660 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0501 02:09:01.547139    1660 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0501 02:09:01.548232    1660 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0501 02:09:01.548232    1660 cni.go:84] Creating CNI manager for ""
	I0501 02:09:01.548232    1660 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0501 02:09:01.549256    1660 start.go:340] cluster config:
	{Name:download-only-146700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-146700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:09:01.549256    1660 iso.go:125] acquiring lock: {Name:mkc5178610d1c169635b8b232f2713c359020679 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 02:09:01.553346    1660 out.go:97] Downloading VM boot image ...
	I0501 02:09:01.553608    1660 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\iso\amd64\minikube-v1.33.0-1714498396-18779-amd64.iso
	I0501 02:09:05.383556    1660 out.go:97] Starting "download-only-146700" primary control-plane node in "download-only-146700" cluster
	I0501 02:09:05.383857    1660 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0501 02:09:05.424803    1660 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0501 02:09:05.425407    1660 cache.go:56] Caching tarball of preloaded images
	I0501 02:09:05.425843    1660 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0501 02:09:05.428946    1660 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0501 02:09:05.429146    1660 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0501 02:09:05.501985    1660 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0501 02:09:09.336712    1660 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0501 02:09:09.337786    1660 preload.go:255] verifying checksum of C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0501 02:09:10.439398    1660 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0501 02:09:10.440577    1660 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\download-only-146700\config.json ...
	I0501 02:09:10.447432    1660 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\download-only-146700\config.json: {Name:mk244311f568ae54054f3036646ccc2c343cdc6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:09:10.448051    1660 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0501 02:09:10.450042    1660 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\windows\amd64\v1.20.0/kubectl.exe
	
	
	* The control-plane node download-only-146700 host does not exist
	  To start a cluster, run: "minikube start -p download-only-146700"

                                                
                                                
-- /stdout --
** stderr ** 
	W0501 02:09:13.124900   10008 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.36s)
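
The Last Start log above records the preload flow: probe for a remote tarball, download it with an md5 digest appended as a checksum query parameter, then verify the file on disk before caching it. A minimal Go sketch of that verification step (verifyMD5 is a hypothetical helper, not minikube's actual preload.go):

    package main

    import (
    	"crypto/md5"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"os"
    )

    // verifyMD5 streams a downloaded file through an MD5 hash and compares
    // the digest against the expected hex string from the download URL.
    func verifyMD5(path, wantHex string) error {
    	f, err := os.Open(path)
    	if err != nil {
    		return err
    	}
    	defer f.Close()
    	h := md5.New()
    	if _, err := io.Copy(h, f); err != nil {
    		return err
    	}
    	if got := hex.EncodeToString(h.Sum(nil)); got != wantHex {
    		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
    	}
    	return nil
    }

    func main() {
    	// Expected digest taken from the ?checksum=md5:... parameter above.
    	err := verifyMD5("preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4",
    		"9a82241e9b8b4ad2b5cca73108f2c7a3")
    	fmt.Println(err)
    }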

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (1.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.2979856s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (1.30s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.27s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-146700
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-146700: (1.2689412s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.27s)

                                                
                                    
TestDownloadOnly/v1.30.0/json-events (11.63s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-379800 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-379800 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=docker --driver=hyperv: (11.6316568s)
--- PASS: TestDownloadOnly/v1.30.0/json-events (11.63s)

                                                
                                    
TestDownloadOnly/v1.30.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/kubectl
--- PASS: TestDownloadOnly/v1.30.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0/LogsDuration (0.32s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-379800
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-379800: exit status 85 (315.691ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-146700 | minikube6\jenkins | v1.33.0 | 01 May 24 02:08 UTC |                     |
	|         | -p download-only-146700        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	| delete  | --all                          | minikube             | minikube6\jenkins | v1.33.0 | 01 May 24 02:09 UTC | 01 May 24 02:09 UTC |
	| delete  | -p download-only-146700        | download-only-146700 | minikube6\jenkins | v1.33.0 | 01 May 24 02:09 UTC | 01 May 24 02:09 UTC |
	| start   | -o=json --download-only        | download-only-379800 | minikube6\jenkins | v1.33.0 | 01 May 24 02:09 UTC |                     |
	|         | -p download-only-379800        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/01 02:09:16
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0501 02:09:16.118960    7132 out.go:291] Setting OutFile to fd 776 ...
	I0501 02:09:16.120337    7132 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:09:16.120337    7132 out.go:304] Setting ErrFile to fd 780...
	I0501 02:09:16.120337    7132 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:09:16.147436    7132 out.go:298] Setting JSON to true
	I0501 02:09:16.151393    7132 start.go:129] hostinfo: {"hostname":"minikube6","uptime":102410,"bootTime":1714426945,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0501 02:09:16.151393    7132 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0501 02:09:16.245191    7132 out.go:97] [download-only-379800] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0501 02:09:16.246165    7132 notify.go:220] Checking for updates...
	I0501 02:09:16.252124    7132 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0501 02:09:16.259691    7132 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0501 02:09:16.263105    7132 out.go:169] MINIKUBE_LOCATION=18779
	I0501 02:09:16.265541    7132 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0501 02:09:16.270814    7132 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0501 02:09:16.272471    7132 driver.go:392] Setting default libvirt URI to qemu:///system
	I0501 02:09:21.882493    7132 out.go:97] Using the hyperv driver based on user configuration
	I0501 02:09:21.882493    7132 start.go:297] selected driver: hyperv
	I0501 02:09:21.882493    7132 start.go:901] validating driver "hyperv" against <nil>
	I0501 02:09:21.883174    7132 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0501 02:09:21.937440    7132 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0501 02:09:21.938730    7132 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0501 02:09:21.938954    7132 cni.go:84] Creating CNI manager for ""
	I0501 02:09:21.939040    7132 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0501 02:09:21.939188    7132 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0501 02:09:21.939188    7132 start.go:340] cluster config:
	{Name:download-only-379800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:download-only-379800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:09:21.939739    7132 iso.go:125] acquiring lock: {Name:mkc5178610d1c169635b8b232f2713c359020679 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 02:09:21.943838    7132 out.go:97] Starting "download-only-379800" primary control-plane node in "download-only-379800" cluster
	I0501 02:09:21.943838    7132 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0501 02:09:21.982081    7132 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0501 02:09:21.982081    7132 cache.go:56] Caching tarball of preloaded images
	I0501 02:09:21.982602    7132 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0501 02:09:21.985561    7132 out.go:97] Downloading Kubernetes v1.30.0 preload ...
	I0501 02:09:21.985561    7132 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 ...
	I0501 02:09:22.065861    7132 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4?checksum=md5:00b6acf85a82438f3897c0a6fafdcee7 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0501 02:09:25.261303    7132 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 ...
	I0501 02:09:25.261819    7132 preload.go:255] verifying checksum of C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-379800 host does not exist
	  To start a cluster, run: "minikube start -p download-only-379800"

                                                
                                                
-- /stdout --
** stderr ** 
	W0501 02:09:27.667188    4912 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0/LogsDuration (0.32s)
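
Note the difference from the v1.20.0 run above: cni.go picks no CNI for v1.20.0 but recommends bridge for the docker runtime on Kubernetes v1.24+. A stdlib-only Go sketch of that version gate (chooseCNI and minorOf are hypothetical names, not minikube's actual cni.go):

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // minorOf pulls the minor version out of a "v1.NN.P" version string.
    func minorOf(v string) int {
    	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
    	n, _ := strconv.Atoi(parts[1])
    	return n
    }

    // chooseCNI mirrors the decision visible in the two logs: the docker
    // runtime on Kubernetes v1.24+ (post-dockershim) gets the bridge CNI,
    // while older versions run without one.
    func chooseCNI(runtime, kubernetesVersion string) string {
    	if runtime == "docker" && minorOf(kubernetesVersion) >= 24 {
    		return "bridge"
    	}
    	return ""
    }

    func main() {
    	fmt.Printf("%q\n", chooseCNI("docker", "v1.20.0")) // "" - no CNI
    	fmt.Printf("%q\n", chooseCNI("docker", "v1.30.0")) // "bridge"
    }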

                                                
                                    
TestDownloadOnly/v1.30.0/DeleteAll (1.31s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.3138007s)
--- PASS: TestDownloadOnly/v1.30.0/DeleteAll (1.31s)

                                                
                                    
TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (1.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-379800
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-379800: (1.2428497s)
--- PASS: TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (1.24s)

                                                
                                    
TestBinaryMirror (7.45s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-376000 --alsologtostderr --binary-mirror http://127.0.0.1:59966 --driver=hyperv
aaa_download_only_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-376000 --alsologtostderr --binary-mirror http://127.0.0.1:59966 --driver=hyperv: (6.4991045s)
helpers_test.go:175: Cleaning up "binary-mirror-376000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-376000
--- PASS: TestBinaryMirror (7.45s)

                                                
                                    
TestOffline (290.73s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-120700 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-120700 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv: (4m9.4115077s)
helpers_test.go:175: Cleaning up "offline-docker-120700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-120700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-120700: (41.3218083s)
--- PASS: TestOffline (290.73s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.31s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-286100
addons_test.go:928: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-286100: exit status 85 (310.5302ms)

                                                
                                                
-- stdout --
	* Profile "addons-286100" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-286100"

                                                
                                                
-- /stdout --
** stderr ** 
	W0501 02:09:40.742915   13660 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.31s)
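
The stderr warning repeated throughout this report comes from the Docker CLI probing for the "default" context under .docker\contexts\meta\<sha256(name)>\meta.json; the long hex directory in the path is just the SHA-256 of the context name. A small Go sketch that reproduces the path on Windows (contextMetaPath is a hypothetical helper, not the Docker CLI's code):

    package main

    import (
    	"crypto/sha256"
    	"fmt"
    	"path/filepath"
    )

    // contextMetaPath rebuilds the path the Docker CLI probes for a named
    // context: <docker dir>\contexts\meta\<sha256(name)>\meta.json.
    func contextMetaPath(dockerDir, name string) string {
    	sum := sha256.Sum256([]byte(name))
    	return filepath.Join(dockerDir, "contexts", "meta",
    		fmt.Sprintf("%x", sum), "meta.json")
    }

    func main() {
    	// Should print the same meta.json path seen in the warnings above.
    	fmt.Println(contextMetaPath(`C:\Users\jenkins.minikube6\.docker`, "default"))
    }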

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.33s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-286100
addons_test.go:939: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-286100: exit status 85 (328.5539ms)

                                                
                                                
-- stdout --
	* Profile "addons-286100" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-286100"

                                                
                                                
-- /stdout --
** stderr ** 
	W0501 02:09:40.742029    6568 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.33s)

                                                
                                    
TestAddons/Setup (414s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-286100 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-286100 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller: (6m54.0006705s)
--- PASS: TestAddons/Setup (414.00s)

                                                
                                    
TestAddons/parallel/Ingress (70.24s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-286100 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-286100 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-286100 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [149f4c5c-8976-45a7-8bc4-53d888231e6d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [149f4c5c-8976-45a7-8bc4-53d888231e6d] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 17.0055541s
addons_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-286100 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe -p addons-286100 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (10.4933653s)
addons_test.go:269: debug: unexpected stderr for out/minikube-windows-amd64.exe -p addons-286100 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'":
W0501 02:18:06.179260    4344 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
addons_test.go:286: (dbg) Run:  kubectl --context addons-286100 replace --force -f testdata\ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-286100 ip
addons_test.go:291: (dbg) Done: out/minikube-windows-amd64.exe -p addons-286100 ip: (2.826311s)
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 172.28.215.237
addons_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-286100 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-windows-amd64.exe -p addons-286100 addons disable ingress-dns --alsologtostderr -v=1: (15.541598s)
addons_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-286100 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe -p addons-286100 addons disable ingress --alsologtostderr -v=1: (22.1458147s)
--- PASS: TestAddons/parallel/Ingress (70.24s)
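
The ssh step above curls the node itself while overriding the Host header so the nginx Ingress rule for nginx.example.com matches. The same request expressed in Go, as a sketch (the endpoint is assumed to be the cluster's ingress listening on port 80):

    package main

    import (
    	"fmt"
    	"io"
    	"net/http"
    )

    func main() {
    	req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
    	if err != nil {
    		panic(err)
    	}
    	// Route by Host header, exactly what `curl -H 'Host: ...'` does.
    	req.Host = "nginx.example.com"
    	resp, err := http.DefaultClient.Do(req)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Println(resp.Status, len(body), "bytes")
    }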

                                                
                                    
TestAddons/parallel/InspektorGadget (28.05s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-xh7x6" [a8136f47-e4b0-4e6b-9c96-9caaae6baebd] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.0114012s
addons_test.go:841: (dbg) Run:  out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-286100
addons_test.go:841: (dbg) Done: out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-286100: (23.0388857s)
--- PASS: TestAddons/parallel/InspektorGadget (28.05s)

                                                
                                    
TestAddons/parallel/MetricsServer (22.78s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 21.0663ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-vzwxj" [d2949e26-7e88-45f4-a7c2-c5aaffe4beb8] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.0155015s
addons_test.go:415: (dbg) Run:  kubectl --context addons-286100 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-286100 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:432: (dbg) Done: out/minikube-windows-amd64.exe -p addons-286100 addons disable metrics-server --alsologtostderr -v=1: (16.5525168s)
--- PASS: TestAddons/parallel/MetricsServer (22.78s)

                                                
                                    
TestAddons/parallel/HelmTiller (38.94s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 5.9843ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-jlfgj" [0cfc799e-c246-4fd2-adca-3f30f53bb411] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.0163877s
addons_test.go:473: (dbg) Run:  kubectl --context addons-286100 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-286100 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (17.2509695s)
addons_test.go:490: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-286100 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:490: (dbg) Done: out/minikube-windows-amd64.exe -p addons-286100 addons disable helm-tiller --alsologtostderr -v=1: (16.6447039s)
--- PASS: TestAddons/parallel/HelmTiller (38.94s)

                                                
                                    
TestAddons/parallel/CSI (109.45s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 29.8502ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-286100 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286100 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-286100 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [6e2c6d2e-b904-409d-96c3-f878597af2bd] Pending
helpers_test.go:344: "task-pv-pod" [6e2c6d2e-b904-409d-96c3-f878597af2bd] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [6e2c6d2e-b904-409d-96c3-f878597af2bd] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 24.0233813s
addons_test.go:584: (dbg) Run:  kubectl --context addons-286100 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-286100 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-286100 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-286100 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-286100 delete pod task-pv-pod: (1.114618s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-286100 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-286100 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-286100 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [c126dbdd-84c9-4a0b-b8eb-e48c36f4189d] Pending
helpers_test.go:344: "task-pv-pod-restore" [c126dbdd-84c9-4a0b-b8eb-e48c36f4189d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [c126dbdd-84c9-4a0b-b8eb-e48c36f4189d] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.0200205s
addons_test.go:626: (dbg) Run:  kubectl --context addons-286100 delete pod task-pv-pod-restore
addons_test.go:626: (dbg) Done: kubectl --context addons-286100 delete pod task-pv-pod-restore: (1.2817936s)
addons_test.go:630: (dbg) Run:  kubectl --context addons-286100 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-286100 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-286100 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-windows-amd64.exe -p addons-286100 addons disable csi-hostpath-driver --alsologtostderr -v=1: (23.9265423s)
addons_test.go:642: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-286100 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:642: (dbg) Done: out/minikube-windows-amd64.exe -p addons-286100 addons disable volumesnapshots --alsologtostderr -v=1: (17.1060666s)
--- PASS: TestAddons/parallel/CSI (109.45s)
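
The long runs of helpers_test.go:394 lines above are a poll loop: the helper re-reads the PVC's .status.phase until it reports Bound or the 6m0s window closes. A self-contained Go sketch of that loop (waitForPVCPhase is a hypothetical name shelling out to kubectl, not the actual test helper):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // waitForPVCPhase polls `kubectl get pvc -o jsonpath={.status.phase}`
    // until the claim reaches the wanted phase or the timeout expires.
    func waitForPVCPhase(kubeContext, name, namespace, want string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("kubectl", "--context", kubeContext,
    			"get", "pvc", name, "-o", "jsonpath={.status.phase}",
    			"-n", namespace).Output()
    		if err == nil && strings.TrimSpace(string(out)) == want {
    			return nil
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("pvc %s/%s never reached phase %q", namespace, name, want)
    }

    func main() {
    	err := waitForPVCPhase("addons-286100", "hpvc", "default", "Bound", 6*time.Minute)
    	fmt.Println(err)
    }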

                                                
                                    
TestAddons/parallel/Headlamp (41.63s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-286100 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-286100 --alsologtostderr -v=1: (17.6165718s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7559bf459f-bbksh" [3265307c-fb3c-41bf-9da0-4705d2817ecb] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7559bf459f-bbksh" [3265307c-fb3c-41bf-9da0-4705d2817ecb] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 24.0145365s
--- PASS: TestAddons/parallel/Headlamp (41.63s)

                                                
                                    
TestAddons/parallel/CloudSpanner (21.48s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6dc8d859f6-d5svd" [57b9402e-c4da-463c-bfd2-aed7d4fe5cdb] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.0150904s
addons_test.go:860: (dbg) Run:  out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-286100
addons_test.go:860: (dbg) Done: out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-286100: (16.4484539s)
--- PASS: TestAddons/parallel/CloudSpanner (21.48s)

                                                
                                    
TestAddons/parallel/LocalPath (32.79s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-286100 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-286100 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-286100 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [7cd0dc13-2c89-4f25-9e55-835f634a1313] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [7cd0dc13-2c89-4f25-9e55-835f634a1313] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [7cd0dc13-2c89-4f25-9e55-835f634a1313] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.0203914s
addons_test.go:891: (dbg) Run:  kubectl --context addons-286100 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-286100 ssh "cat /opt/local-path-provisioner/pvc-0464956f-1861-4caa-83a8-1de4d13a8aba_default_test-pvc/file1"
addons_test.go:900: (dbg) Done: out/minikube-windows-amd64.exe -p addons-286100 ssh "cat /opt/local-path-provisioner/pvc-0464956f-1861-4caa-83a8-1de4d13a8aba_default_test-pvc/file1": (11.1002327s)
addons_test.go:912: (dbg) Run:  kubectl --context addons-286100 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-286100 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-286100 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-windows-amd64.exe -p addons-286100 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (7.8172084s)
--- PASS: TestAddons/parallel/LocalPath (32.79s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (21.38s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-t2wt6" [8d3cb0b0-5f2a-433f-a53b-56986c0857e6] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.0127251s
addons_test.go:955: (dbg) Run:  out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-286100
addons_test.go:955: (dbg) Done: out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-286100: (16.3654669s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (21.38s)

                                                
                                    
TestAddons/parallel/Yakd (5.02s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-5g6gc" [c78e0c11-0f49-4d63-91be-983e65451f61] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.0164508s
--- PASS: TestAddons/parallel/Yakd (5.02s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.36s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-286100 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-286100 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.36s)

                                                
                                    
TestAddons/StoppedEnableDisable (55.25s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-286100
addons_test.go:172: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-286100: (42.1019488s)
addons_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-286100
addons_test.go:176: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-286100: (5.1377278s)
addons_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-286100
addons_test.go:180: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-286100: (5.1094473s)
addons_test.go:185: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-286100
addons_test.go:185: (dbg) Done: out/minikube-windows-amd64.exe addons disable gvisor -p addons-286100: (2.9012164s)
--- PASS: TestAddons/StoppedEnableDisable (55.25s)

                                                
                                    
TestCertOptions (490.78s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-374100 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-374100 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv: (7m3.6986061s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-374100 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-374100 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (10.1856599s)
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-374100 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-374100 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-374100 -- "sudo cat /etc/kubernetes/admin.conf": (9.895222s)
helpers_test.go:175: Cleaning up "cert-options-374100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-374100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-374100: (46.8514281s)
--- PASS: TestCertOptions (490.78s)
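
The openssl call above checks that the extra --apiserver-ips and --apiserver-names values landed in the apiserver certificate's subject alternative names. The same check in Go, as a sketch run against a locally copied apiserver.crt (the path is an assumption):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    // printSANs decodes a PEM certificate and lists the DNS and IP SANs
    // that `openssl x509 -text -noout` would show.
    func printSANs(path string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return err
    	}
    	fmt.Println("DNS SANs:", cert.DNSNames)   // expect localhost, www.google.com, ...
    	fmt.Println("IP SANs:", cert.IPAddresses) // expect 127.0.0.1, 192.168.15.15, ...
    	return nil
    }

    func main() {
    	if err := printSANs("apiserver.crt"); err != nil {
    		fmt.Println(err)
    	}
    }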

                                                
                                    
TestCertExpiration (902.92s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-386600 --memory=2048 --cert-expiration=3m --driver=hyperv
E0501 04:51:18.286456   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt: The system cannot find the path specified.
E0501 04:51:35.017392   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt: The system cannot find the path specified.
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-386600 --memory=2048 --cert-expiration=3m --driver=hyperv: (7m24.908479s)
E0501 04:58:38.043663   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\client.crt: The system cannot find the path specified.
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-386600 --memory=2048 --cert-expiration=8760h --driver=hyperv
E0501 05:01:41.318114   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\client.crt: The system cannot find the path specified.
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-386600 --memory=2048 --cert-expiration=8760h --driver=hyperv: (3m55.9387802s)
helpers_test.go:175: Cleaning up "cert-expiration-386600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-386600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-386600: (42.0629536s)
--- PASS: TestCertExpiration (902.92s)

TestDockerFlags (662.44s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-390200 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv
E0501 04:38:38.049872   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\client.crt: The system cannot find the path specified.
E0501 04:41:35.004168   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt: The system cannot find the path specified.
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-390200 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv: (9m54.2828955s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-390200 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-390200 ssh "sudo systemctl show docker --property=Environment --no-pager": (9.9371818s)
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-390200 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-390200 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (9.8676956s)
helpers_test.go:175: Cleaning up "docker-flags-390200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-390200
E0501 04:48:38.054150   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-390200: (48.352024s)
--- PASS: TestDockerFlags (662.44s)

TestForceSystemdFlag (563.2s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-122500 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-122500 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv: (7m12.9706598s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-122500 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-122500 ssh "docker info --format {{.CgroupDriver}}": (10.1759202s)
helpers_test.go:175: Cleaning up "force-systemd-flag-122500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-122500
E0501 04:56:35.011328   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Non-zero exit: out/minikube-windows-amd64.exe delete -p force-systemd-flag-122500: exit status 1 (2m0.0495205s)

-- stdout --
	* Stopping node "force-systemd-flag-122500"  ...
	* Powering off "force-systemd-flag-122500" via SSH ...
	* Deleting "force-systemd-flag-122500" in hyperv ...

-- /stdout --
** stderr ** 
	W0501 04:56:17.645905   14220 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:180: failed cleanup: exit status 1
--- PASS: TestForceSystemdFlag (563.20s)

TestForceSystemdEnv (526.77s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-005100 --memory=2048 --alsologtostderr -v=5 --driver=hyperv
docker_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-005100 --memory=2048 --alsologtostderr -v=5 --driver=hyperv: (7m49.3321662s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-005100 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-005100 ssh "docker info --format {{.CgroupDriver}}": (9.712791s)
helpers_test.go:175: Cleaning up "force-systemd-env-005100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-005100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-005100: (47.718996s)
--- PASS: TestForceSystemdEnv (526.77s)

TestErrorSpam/start (17.88s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-085300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-085300 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-085300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-085300 start --dry-run: (5.8632452s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-085300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-085300 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-085300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-085300 start --dry-run: (5.9706738s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-085300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-085300 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-085300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-085300 start --dry-run: (6.0386744s)
--- PASS: TestErrorSpam/start (17.88s)

TestErrorSpam/status (38s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-085300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-085300 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-085300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-085300 status: (12.9783208s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-085300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-085300 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-085300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-085300 status: (12.6510608s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-085300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-085300 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-085300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-085300 status: (12.3713322s)
--- PASS: TestErrorSpam/status (38.00s)

TestErrorSpam/pause (23.48s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-085300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-085300 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-085300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-085300 pause: (7.9419198s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-085300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-085300 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-085300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-085300 pause: (7.7988454s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-085300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-085300 pause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-085300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-085300 pause: (7.7338029s)
--- PASS: TestErrorSpam/pause (23.48s)

TestErrorSpam/unpause (23.73s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-085300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-085300 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-085300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-085300 unpause: (7.9647394s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-085300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-085300 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-085300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-085300 unpause: (7.9194552s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-085300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-085300 unpause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-085300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-085300 unpause: (7.8467806s)
--- PASS: TestErrorSpam/unpause (23.73s)

TestErrorSpam/stop (56.97s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-085300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-085300 stop
E0501 02:26:34.949832   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-085300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-085300 stop: (34.8011046s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-085300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-085300 stop
E0501 02:27:02.804712   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-085300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-085300 stop: (11.2076368s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-085300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-085300 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-085300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-085300 stop: (10.9589331s)
--- PASS: TestErrorSpam/stop (56.97s)

TestFunctional/serial/CopySyncFile (0.04s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\14288\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.04s)

TestFunctional/serial/StartWithProxy (244.15s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-869300 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv
E0501 02:31:34.952629   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt: The system cannot find the path specified.
functional_test.go:2230: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-869300 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv: (4m4.1319639s)
--- PASS: TestFunctional/serial/StartWithProxy (244.15s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (128.24s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-869300 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-869300 --alsologtostderr -v=8: (2m8.2402848s)
functional_test.go:659: soft start took 2m8.2420547s for "functional-869300" cluster.
--- PASS: TestFunctional/serial/SoftStart (128.24s)

TestFunctional/serial/KubeContext (0.14s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.14s)

TestFunctional/serial/KubectlGetPods (0.26s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-869300 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.26s)

TestFunctional/serial/CacheCmd/cache/add_remote (26.87s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-869300 cache add registry.k8s.io/pause:3.1: (9.3320237s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-869300 cache add registry.k8s.io/pause:3.3: (8.7188019s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-869300 cache add registry.k8s.io/pause:latest: (8.8129822s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (26.87s)

TestFunctional/serial/CacheCmd/cache/add_local (11.41s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-869300 C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local3960938515\001
functional_test.go:1073: (dbg) Done: docker build -t minikube-local-cache-test:functional-869300 C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local3960938515\001: (2.4631811s)
functional_test.go:1085: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 cache add minikube-local-cache-test:functional-869300
functional_test.go:1085: (dbg) Done: out/minikube-windows-amd64.exe -p functional-869300 cache add minikube-local-cache-test:functional-869300: (8.416265s)
functional_test.go:1090: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 cache delete minikube-local-cache-test:functional-869300
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-869300
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (11.41s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.30s)

TestFunctional/serial/CacheCmd/cache/list (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.29s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (9.52s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 ssh sudo crictl images
functional_test.go:1120: (dbg) Done: out/minikube-windows-amd64.exe -p functional-869300 ssh sudo crictl images: (9.5167302s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (9.52s)

TestFunctional/serial/CacheCmd/cache/cache_reload (36.81s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Done: out/minikube-windows-amd64.exe -p functional-869300 ssh sudo docker rmi registry.k8s.io/pause:latest: (9.5293005s)
functional_test.go:1149: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-869300 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (9.5207488s)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	W0501 02:34:46.264920    7744 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-windows-amd64.exe -p functional-869300 cache reload: (8.2426521s)
functional_test.go:1159: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Done: out/minikube-windows-amd64.exe -p functional-869300 ssh sudo crictl inspecti registry.k8s.io/pause:latest: (9.5119527s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (36.81s)

TestFunctional/serial/CacheCmd/cache/delete (0.59s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.59s)

TestFunctional/serial/MinikubeKubectlCmd (0.59s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 kubectl -- --context functional-869300 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.59s)

TestFunctional/serial/ExtraConfig (127.11s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-869300 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0501 02:36:34.953066   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt: The system cannot find the path specified.
functional_test.go:753: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-869300 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (2m7.1073085s)
functional_test.go:757: restart took 2m7.1080022s for "functional-869300" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (127.11s)

TestFunctional/serial/ComponentHealth (0.19s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-869300 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.19s)

TestFunctional/serial/LogsCmd (8.77s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 logs
E0501 02:37:58.177248   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt: The system cannot find the path specified.
functional_test.go:1232: (dbg) Done: out/minikube-windows-amd64.exe -p functional-869300 logs: (8.7723145s)
--- PASS: TestFunctional/serial/LogsCmd (8.77s)

TestFunctional/serial/LogsFileCmd (10.91s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 logs --file C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialLogsFileCmd99483469\001\logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-windows-amd64.exe -p functional-869300 logs --file C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialLogsFileCmd99483469\001\logs.txt: (10.9019449s)
--- PASS: TestFunctional/serial/LogsFileCmd (10.91s)

TestFunctional/serial/InvalidService (21.58s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-869300 apply -f testdata\invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-869300
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-869300: exit status 115 (16.8032836s)

-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://172.28.218.182:32253 |
	|-----------|-------------|-------------|-----------------------------|

-- /stdout --
** stderr ** 
	W0501 02:38:19.577344    3936 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube_service_8fb87d8e79e761d215f3221b4a4d8a6300edfb06_1.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-869300 delete -f testdata\invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-869300 delete -f testdata\invalidsvc.yaml: (1.2992061s)
--- PASS: TestFunctional/serial/InvalidService (21.58s)

TestFunctional/parallel/StatusCmd (43.61s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 status
functional_test.go:850: (dbg) Done: out/minikube-windows-amd64.exe -p functional-869300 status: (14.3556845s)
functional_test.go:856: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Done: out/minikube-windows-amd64.exe -p functional-869300 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (14.8589067s)
functional_test.go:868: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 status -o json
functional_test.go:868: (dbg) Done: out/minikube-windows-amd64.exe -p functional-869300 status -o json: (14.3932321s)
--- PASS: TestFunctional/parallel/StatusCmd (43.61s)

TestFunctional/parallel/ServiceCmdConnect (30.39s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-869300 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-869300 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-26dft" [f0b16241-6631-4453-9373-967bc4822dab] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-26dft" [f0b16241-6631-4453-9373-967bc4822dab] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.0211476s
functional_test.go:1645: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 service hello-node-connect --url
functional_test.go:1645: (dbg) Done: out/minikube-windows-amd64.exe -p functional-869300 service hello-node-connect --url: (21.9136928s)
functional_test.go:1651: found endpoint for hello-node-connect: http://172.28.218.182:31913
functional_test.go:1671: http://172.28.218.182:31913: success! body:

Hostname: hello-node-connect-57b4589c47-26dft

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://172.28.218.182:8080/

Request Headers:
	accept-encoding=gzip
	host=172.28.218.182:31913
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (30.39s)

TestFunctional/parallel/AddonsCmd (0.84s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.84s)

TestFunctional/parallel/PersistentVolumeClaim (42.7s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [3400f4a7-b325-4236-a464-0c0c871fd3b7] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.0165343s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-869300 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-869300 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-869300 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-869300 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [857e7429-3116-4620-8404-5be29cc2de76] Pending
helpers_test.go:344: "sp-pod" [857e7429-3116-4620-8404-5be29cc2de76] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [857e7429-3116-4620-8404-5be29cc2de76] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 25.0178179s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-869300 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-869300 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-869300 delete -f testdata/storage-provisioner/pod.yaml: (1.2039774s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-869300 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3f22c7d1-42b6-471b-8796-eac9fb66f917] Pending
helpers_test.go:344: "sp-pod" [3f22c7d1-42b6-471b-8796-eac9fb66f917] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3f22c7d1-42b6-471b-8796-eac9fb66f917] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.0207769s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-869300 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (42.70s)

TestFunctional/parallel/SSHCmd (24.11s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 ssh "echo hello"
functional_test.go:1721: (dbg) Done: out/minikube-windows-amd64.exe -p functional-869300 ssh "echo hello": (12.303631s)
functional_test.go:1738: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Done: out/minikube-windows-amd64.exe -p functional-869300 ssh "cat /etc/hostname": (11.8096042s)
--- PASS: TestFunctional/parallel/SSHCmd (24.11s)

TestFunctional/parallel/CpCmd (59.97s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-869300 cp testdata\cp-test.txt /home/docker/cp-test.txt: (9.5621196s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 ssh -n functional-869300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-869300 ssh -n functional-869300 "sudo cat /home/docker/cp-test.txt": (10.5837047s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 cp functional-869300:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalparallelCpCmd202172203\001\cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-869300 cp functional-869300:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalparallelCpCmd202172203\001\cp-test.txt: (10.4018525s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 ssh -n functional-869300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-869300 ssh -n functional-869300 "sudo cat /home/docker/cp-test.txt": (10.4597385s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-869300 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt: (7.9819799s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 ssh -n functional-869300 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-869300 ssh -n functional-869300 "sudo cat /tmp/does/not/exist/cp-test.txt": (10.9743692s)
--- PASS: TestFunctional/parallel/CpCmd (59.97s)

TestFunctional/parallel/MySQL (67.78s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-869300 replace --force -f testdata\mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-gl5bz" [2f7134da-c345-45d3-87a4-354d34fa1120] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-gl5bz" [2f7134da-c345-45d3-87a4-354d34fa1120] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 50.0187433s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-869300 exec mysql-64454c8b5c-gl5bz -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-869300 exec mysql-64454c8b5c-gl5bz -- mysql -ppassword -e "show databases;": exit status 1 (317.6458ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-869300 exec mysql-64454c8b5c-gl5bz -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-869300 exec mysql-64454c8b5c-gl5bz -- mysql -ppassword -e "show databases;": exit status 1 (323.8592ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-869300 exec mysql-64454c8b5c-gl5bz -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-869300 exec mysql-64454c8b5c-gl5bz -- mysql -ppassword -e "show databases;": exit status 1 (310.1911ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-869300 exec mysql-64454c8b5c-gl5bz -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-869300 exec mysql-64454c8b5c-gl5bz -- mysql -ppassword -e "show databases;": exit status 1 (329.8155ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-869300 exec mysql-64454c8b5c-gl5bz -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-869300 exec mysql-64454c8b5c-gl5bz -- mysql -ppassword -e "show databases;": exit status 1 (270.65ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-869300 exec mysql-64454c8b5c-gl5bz -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (67.78s)

TestFunctional/parallel/FileSync (10.05s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/14288/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 ssh "sudo cat /etc/test/nested/copy/14288/hosts"
functional_test.go:1927: (dbg) Done: out/minikube-windows-amd64.exe -p functional-869300 ssh "sudo cat /etc/test/nested/copy/14288/hosts": (10.0539146s)
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (10.05s)

TestFunctional/parallel/CertSync (64.34s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/14288.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 ssh "sudo cat /etc/ssl/certs/14288.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-869300 ssh "sudo cat /etc/ssl/certs/14288.pem": (11.2638627s)
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/14288.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 ssh "sudo cat /usr/share/ca-certificates/14288.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-869300 ssh "sudo cat /usr/share/ca-certificates/14288.pem": (10.5909163s)
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-869300 ssh "sudo cat /etc/ssl/certs/51391683.0": (11.2998224s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/142882.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 ssh "sudo cat /etc/ssl/certs/142882.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-869300 ssh "sudo cat /etc/ssl/certs/142882.pem": (10.8954911s)
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/142882.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 ssh "sudo cat /usr/share/ca-certificates/142882.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-869300 ssh "sudo cat /usr/share/ca-certificates/142882.pem": (10.696s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
E0501 02:41:34.948063   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt: The system cannot find the path specified.
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-869300 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (9.5889107s)
--- PASS: TestFunctional/parallel/CertSync (64.34s)

TestFunctional/parallel/NodeLabels (0.18s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-869300 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.18s)

TestFunctional/parallel/NonActiveRuntimeDisabled (11.59s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-869300 ssh "sudo systemctl is-active crio": exit status 1 (11.5902552s)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	W0501 02:38:39.548053    9236 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (11.59s)

TestFunctional/parallel/License (3.65s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2284: (dbg) Done: out/minikube-windows-amd64.exe license: (3.63321s)
--- PASS: TestFunctional/parallel/License (3.65s)

TestFunctional/parallel/ServiceCmd/DeployApp (18.42s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-869300 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-869300 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-snp7z" [81a77501-22c3-4dbc-a3d4-f44d07d5d86e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-snp7z" [81a77501-22c3-4dbc-a3d4-f44d07d5d86e] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 18.008156s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (18.42s)
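
Note: the "waiting ... for pods matching" lines come from a poll loop over the pod phase. A minimal exec-based equivalent under the same label and context (waitRunning is a stand-in, not the helpers_test.go implementation):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitRunning polls kubectl until the first pod matching the label
    // reports phase Running, mirroring the harness's wait loop.
    func waitRunning(ctx, label string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, _ := exec.Command("kubectl", "--context", ctx, "get", "pods",
                "-l", label, "-o", "jsonpath={.items[0].status.phase}").Output()
            if strings.TrimSpace(string(out)) == "Running" {
                return nil
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("timed out waiting for %s", label)
    }

    func main() {
        fmt.Println(waitRunning("functional-869300", "app=hello-node", 10*time.Minute))
    }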

TestFunctional/parallel/Version/short (0.28s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 version --short
--- PASS: TestFunctional/parallel/Version/short (0.28s)

TestFunctional/parallel/Version/components (8.46s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-windows-amd64.exe -p functional-869300 version -o=json --components: (8.4558157s)
--- PASS: TestFunctional/parallel/Version/components (8.46s)

TestFunctional/parallel/ImageCommands/ImageListShort (7.97s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 image ls --format short --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-869300 image ls --format short --alsologtostderr: (7.9704418s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-869300 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.0
registry.k8s.io/kube-proxy:v1.30.0
registry.k8s.io/kube-controller-manager:v1.30.0
registry.k8s.io/kube-apiserver:v1.30.0
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-869300
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-869300
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-869300 image ls --format short --alsologtostderr:
W0501 02:41:44.990911    1932 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0501 02:41:45.077305    1932 out.go:291] Setting OutFile to fd 940 ...
I0501 02:41:45.078304    1932 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0501 02:41:45.078304    1932 out.go:304] Setting ErrFile to fd 740...
I0501 02:41:45.078304    1932 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0501 02:41:45.096060    1932 config.go:182] Loaded profile config "functional-869300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0501 02:41:45.096060    1932 config.go:182] Loaded profile config "functional-869300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0501 02:41:45.097746    1932 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-869300 ).state
I0501 02:41:47.376219    1932 main.go:141] libmachine: [stdout =====>] : Running

I0501 02:41:47.376266    1932 main.go:141] libmachine: [stderr =====>] : 
I0501 02:41:47.393608    1932 ssh_runner.go:195] Run: systemctl --version
I0501 02:41:47.393608    1932 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-869300 ).state
I0501 02:41:49.583886    1932 main.go:141] libmachine: [stdout =====>] : Running

I0501 02:41:49.583886    1932 main.go:141] libmachine: [stderr =====>] : 
I0501 02:41:49.583886    1932 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-869300 ).networkadapters[0]).ipaddresses[0]
I0501 02:41:52.320182    1932 main.go:141] libmachine: [stdout =====>] : 172.28.218.182

I0501 02:41:52.320182    1932 main.go:141] libmachine: [stderr =====>] : 
I0501 02:41:52.320182    1932 sshutil.go:53] new ssh client: &{IP:172.28.218.182 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-869300\id_rsa Username:docker}
I0501 02:41:52.432353    1932 ssh_runner.go:235] Completed: systemctl --version: (5.0386281s)
I0501 02:41:52.442892    1932 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (7.97s)
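
Note: the stderr trace shows the hyperv driver's lookup sequence before the SSH call: PowerShell probes for the VM state and then for the first adapter's first IP address. A Go sketch of just the IP probe, using the exact cmdlet expression from the trace:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // vmIP mirrors the libmachine pattern in the trace: ask Hyper-V for the
    // first IP address of the VM's first network adapter.
    func vmIP(vm string) (string, error) {
        ps := fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm)
        out, err := exec.Command(
            `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
            "-NoProfile", "-NonInteractive", ps).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        ip, err := vmIP("functional-869300")
        fmt.Println(ip, err) // 172.28.218.182 in this run
    }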

TestFunctional/parallel/ImageCommands/ImageListTable (7.48s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 image ls --format table --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-869300 image ls --format table --alsologtostderr: (7.4798887s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-869300 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-proxy                  | v1.30.0           | a0bf559e280cf | 84.7MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-869300 | d54c333940122 | 30B    |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/library/nginx                     | latest            | 7383c266ef252 | 188MB  |
| registry.k8s.io/kube-controller-manager     | v1.30.0           | c7aad43836fa5 | 111MB  |
| registry.k8s.io/kube-scheduler              | v1.30.0           | 259c8277fcbbc | 62MB   |
| registry.k8s.io/etcd                        | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| gcr.io/google-containers/addon-resizer      | functional-869300 | ffd4cfbbe753e | 32.9MB |
| docker.io/library/nginx                     | alpine            | f4215f6ee683f | 48.3MB |
| registry.k8s.io/kube-apiserver              | v1.30.0           | c42f13656d0b2 | 117MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-869300 image ls --format table --alsologtostderr:
W0501 02:42:02.207761     748 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0501 02:42:02.294401     748 out.go:291] Setting OutFile to fd 964 ...
I0501 02:42:02.296113     748 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0501 02:42:02.296113     748 out.go:304] Setting ErrFile to fd 1000...
I0501 02:42:02.296481     748 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0501 02:42:02.314586     748 config.go:182] Loaded profile config "functional-869300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0501 02:42:02.314867     748 config.go:182] Loaded profile config "functional-869300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0501 02:42:02.315589     748 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-869300 ).state
I0501 02:42:04.522139     748 main.go:141] libmachine: [stdout =====>] : Running

I0501 02:42:04.522139     748 main.go:141] libmachine: [stderr =====>] : 
I0501 02:42:04.537428     748 ssh_runner.go:195] Run: systemctl --version
I0501 02:42:04.537428     748 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-869300 ).state
I0501 02:42:06.738957     748 main.go:141] libmachine: [stdout =====>] : Running

I0501 02:42:06.738957     748 main.go:141] libmachine: [stderr =====>] : 
I0501 02:42:06.739395     748 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-869300 ).networkadapters[0]).ipaddresses[0]
I0501 02:42:09.371562     748 main.go:141] libmachine: [stdout =====>] : 172.28.218.182

I0501 02:42:09.371562     748 main.go:141] libmachine: [stderr =====>] : 
I0501 02:42:09.371562     748 sshutil.go:53] new ssh client: &{IP:172.28.218.182 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-869300\id_rsa Username:docker}
I0501 02:42:09.471216     748 ssh_runner.go:235] Completed: systemctl --version: (4.9337515s)
I0501 02:42:09.482239     748 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (7.48s)

TestFunctional/parallel/ImageCommands/ImageListJson (7.41s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 image ls --format json --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-869300 image ls --format json --alsologtostderr: (7.4051954s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-869300 image ls --format json --alsologtostderr:
[{"id":"d54c33394012204bd114cec24b1c71bf8a7813409501d44e3efb3b677b8b5002","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-869300"],"size":"30"},{"id":"7383c266ef252ad70806f3072ee8e63d2a16d1e6bafa6146a2da867fc7c41759","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.0"],"size":"84700000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-869300"],"size":"32900000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"f4215f6ee683f29c0a4611b02d1adc3b7d986a96ab894eb5f7b9437c862c9499","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"48300000"},{"id":"c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.0"],"size":"117000000"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.0"],"size":"111000000"},{"id":"259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.0"],"size":"62000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-869300 image ls --format json --alsologtostderr:
W0501 02:41:54.791416    7912 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0501 02:41:54.876021    7912 out.go:291] Setting OutFile to fd 644 ...
I0501 02:41:54.876021    7912 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0501 02:41:54.876021    7912 out.go:304] Setting ErrFile to fd 760...
I0501 02:41:54.876021    7912 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0501 02:41:54.894371    7912 config.go:182] Loaded profile config "functional-869300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0501 02:41:54.894871    7912 config.go:182] Loaded profile config "functional-869300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0501 02:41:54.895313    7912 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-869300 ).state
I0501 02:41:57.047488    7912 main.go:141] libmachine: [stdout =====>] : Running

I0501 02:41:57.047488    7912 main.go:141] libmachine: [stderr =====>] : 
I0501 02:41:57.062457    7912 ssh_runner.go:195] Run: systemctl --version
I0501 02:41:57.062457    7912 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-869300 ).state
I0501 02:41:59.271593    7912 main.go:141] libmachine: [stdout =====>] : Running

I0501 02:41:59.271593    7912 main.go:141] libmachine: [stderr =====>] : 
I0501 02:41:59.272443    7912 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-869300 ).networkadapters[0]).ipaddresses[0]
I0501 02:42:01.885392    7912 main.go:141] libmachine: [stdout =====>] : 172.28.218.182

I0501 02:42:01.885392    7912 main.go:141] libmachine: [stderr =====>] : 
I0501 02:42:01.886017    7912 sshutil.go:53] new ssh client: &{IP:172.28.218.182 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-869300\id_rsa Username:docker}
I0501 02:42:01.993966    7912 ssh_runner.go:235] Completed: systemctl --version: (4.9314727s)
I0501 02:42:02.005281    7912 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (7.41s)
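
Note: the JSON stdout above is a flat array of objects keyed id/repoDigests/repoTags/size. A small decoder sketch with the struct shape inferred from that output (the sample element is copied from this run's pause:3.9 entry):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // image matches the objects printed by `image ls --format json` above;
    // the field names follow the keys visible in that output.
    type image struct {
        ID          string   `json:"id"`
        RepoDigests []string `json:"repoDigests"`
        RepoTags    []string `json:"repoTags"`
        Size        string   `json:"size"`
    }

    func main() {
        data := []byte(`[{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"}]`)
        var imgs []image
        if err := json.Unmarshal(data, &imgs); err != nil {
            panic(err)
        }
        for _, im := range imgs {
            fmt.Println(im.RepoTags, im.Size)
        }
    }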

TestFunctional/parallel/ImageCommands/ImageListYaml (7.67s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 image ls --format yaml --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-869300 image ls --format yaml --alsologtostderr: (7.6684735s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-869300 image ls --format yaml --alsologtostderr:
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.0
size: "84700000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: f4215f6ee683f29c0a4611b02d1adc3b7d986a96ab894eb5f7b9437c862c9499
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "48300000"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-869300
size: "32900000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: d54c33394012204bd114cec24b1c71bf8a7813409501d44e3efb3b677b8b5002
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-869300
size: "30"
- id: 7383c266ef252ad70806f3072ee8e63d2a16d1e6bafa6146a2da867fc7c41759
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.0
size: "117000000"
- id: 259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.0
size: "62000000"
- id: c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.0
size: "111000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"

functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-869300 image ls --format yaml --alsologtostderr:
W0501 02:41:47.114235    6284 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0501 02:41:47.206017    6284 out.go:291] Setting OutFile to fd 704 ...
I0501 02:41:47.223911    6284 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0501 02:41:47.223911    6284 out.go:304] Setting ErrFile to fd 988...
I0501 02:41:47.223911    6284 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0501 02:41:47.239922    6284 config.go:182] Loaded profile config "functional-869300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0501 02:41:47.240926    6284 config.go:182] Loaded profile config "functional-869300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0501 02:41:47.240926    6284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-869300 ).state
I0501 02:41:49.506567    6284 main.go:141] libmachine: [stdout =====>] : Running

I0501 02:41:49.506567    6284 main.go:141] libmachine: [stderr =====>] : 
I0501 02:41:49.520576    6284 ssh_runner.go:195] Run: systemctl --version
I0501 02:41:49.520576    6284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-869300 ).state
I0501 02:41:51.790516    6284 main.go:141] libmachine: [stdout =====>] : Running

I0501 02:41:51.790585    6284 main.go:141] libmachine: [stderr =====>] : 
I0501 02:41:51.790773    6284 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-869300 ).networkadapters[0]).ipaddresses[0]
I0501 02:41:54.493209    6284 main.go:141] libmachine: [stdout =====>] : 172.28.218.182

I0501 02:41:54.493458    6284 main.go:141] libmachine: [stderr =====>] : 
I0501 02:41:54.494453    6284 sshutil.go:53] new ssh client: &{IP:172.28.218.182 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-869300\id_rsa Username:docker}
I0501 02:41:54.591575    6284 ssh_runner.go:235] Completed: systemctl --version: (5.070799s)
I0501 02:41:54.603478    6284 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (7.67s)

TestFunctional/parallel/ImageCommands/ImageBuild (29.55s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-869300 ssh pgrep buildkitd: exit status 1 (9.6742432s)

** stderr ** 
	W0501 02:41:52.959971    7760 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 image build -t localhost/my-image:functional-869300 testdata\build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe -p functional-869300 image build -t localhost/my-image:functional-869300 testdata\build --alsologtostderr: (12.4966963s)
functional_test.go:319: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-869300 image build -t localhost/my-image:functional-869300 testdata\build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in d1b68949dffa
---> Removed intermediate container d1b68949dffa
---> 762aace4384c
Step 3/3 : ADD content.txt /
---> f86e90e28c76
Successfully built f86e90e28c76
Successfully tagged localhost/my-image:functional-869300
functional_test.go:322: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-869300 image build -t localhost/my-image:functional-869300 testdata\build --alsologtostderr:
W0501 02:42:02.629281   14168 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0501 02:42:02.715609   14168 out.go:291] Setting OutFile to fd 964 ...
I0501 02:42:02.738266   14168 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0501 02:42:02.738266   14168 out.go:304] Setting ErrFile to fd 1000...
I0501 02:42:02.738266   14168 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0501 02:42:02.760313   14168 config.go:182] Loaded profile config "functional-869300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0501 02:42:02.783342   14168 config.go:182] Loaded profile config "functional-869300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0501 02:42:02.784739   14168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-869300 ).state
I0501 02:42:04.947423   14168 main.go:141] libmachine: [stdout =====>] : Running

I0501 02:42:04.947570   14168 main.go:141] libmachine: [stderr =====>] : 
I0501 02:42:04.968515   14168 ssh_runner.go:195] Run: systemctl --version
I0501 02:42:04.968515   14168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-869300 ).state
I0501 02:42:07.183495   14168 main.go:141] libmachine: [stdout =====>] : Running

I0501 02:42:07.183495   14168 main.go:141] libmachine: [stderr =====>] : 
I0501 02:42:07.183495   14168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-869300 ).networkadapters[0]).ipaddresses[0]
I0501 02:42:09.812791   14168 main.go:141] libmachine: [stdout =====>] : 172.28.218.182

I0501 02:42:09.812791   14168 main.go:141] libmachine: [stderr =====>] : 
I0501 02:42:09.812791   14168 sshutil.go:53] new ssh client: &{IP:172.28.218.182 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-869300\id_rsa Username:docker}
I0501 02:42:09.913944   14168 ssh_runner.go:235] Completed: systemctl --version: (4.9453921s)
I0501 02:42:09.914043   14168 build_images.go:161] Building image from path: C:\Users\jenkins.minikube6\AppData\Local\Temp\build.1444742300.tar
I0501 02:42:09.928834   14168 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0501 02:42:09.963070   14168 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1444742300.tar
I0501 02:42:09.971896   14168 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1444742300.tar: stat -c "%s %y" /var/lib/minikube/build/build.1444742300.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1444742300.tar': No such file or directory
I0501 02:42:09.972064   14168 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\AppData\Local\Temp\build.1444742300.tar --> /var/lib/minikube/build/build.1444742300.tar (3072 bytes)
I0501 02:42:10.035419   14168 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1444742300
I0501 02:42:10.083544   14168 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1444742300 -xf /var/lib/minikube/build/build.1444742300.tar
I0501 02:42:10.110559   14168 docker.go:360] Building image: /var/lib/minikube/build/build.1444742300
I0501 02:42:10.121979   14168 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-869300 /var/lib/minikube/build/build.1444742300
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I0501 02:42:14.902192   14168 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-869300 /var/lib/minikube/build/build.1444742300: (4.7801788s)
I0501 02:42:14.918139   14168 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1444742300
I0501 02:42:14.953760   14168 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1444742300.tar
I0501 02:42:14.972096   14168 build_images.go:217] Built localhost/my-image:functional-869300 from C:\Users\jenkins.minikube6\AppData\Local\Temp\build.1444742300.tar
I0501 02:42:14.972096   14168 build_images.go:133] succeeded building to: functional-869300
I0501 02:42:14.972096   14168 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-869300 image ls: (7.3755994s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (29.55s)
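
Note: the --alsologtostderr trace lays out the build path end to end: tar the context locally, copy it to /var/lib/minikube/build, untar, run `docker build` inside the VM, then delete the staging files. A sketch of the remote steps over `minikube ssh` (the tar upload is elided; run is a hypothetical helper):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run executes one remote step over `minikube ssh` and stops on failure.
    func run(profile, remote string) {
        cmd := exec.Command("out/minikube-windows-amd64.exe", "-p", profile, "ssh", remote)
        if out, err := cmd.CombinedOutput(); err != nil {
            panic(fmt.Sprintf("%s: %v\n%s", remote, err, out))
        }
    }

    func main() {
        const dir = "/var/lib/minikube/build/build.1444742300"
        p := "functional-869300"
        run(p, "sudo mkdir -p "+dir)                  // stage directory
        run(p, "sudo tar -C "+dir+" -xf "+dir+".tar") // unpack uploaded context
        run(p, "docker build -t localhost/my-image:functional-869300 "+dir)
        run(p, "sudo rm -rf "+dir+" && sudo rm -f "+dir+".tar") // cleanup
    }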

TestFunctional/parallel/ImageCommands/Setup (4.76s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (4.4696224s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-869300
--- PASS: TestFunctional/parallel/ImageCommands/Setup (4.76s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (24.41s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 image load --daemon gcr.io/google-containers/addon-resizer:functional-869300 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-windows-amd64.exe -p functional-869300 image load --daemon gcr.io/google-containers/addon-resizer:functional-869300 --alsologtostderr: (16.4492411s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-869300 image ls: (7.9646695s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (24.41s)

TestFunctional/parallel/ServiceCmd/List (13.59s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 service list
functional_test.go:1455: (dbg) Done: out/minikube-windows-amd64.exe -p functional-869300 service list: (13.5908896s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (13.59s)

TestFunctional/parallel/ServiceCmd/JSONOutput (13.45s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 service list -o json
functional_test.go:1485: (dbg) Done: out/minikube-windows-amd64.exe -p functional-869300 service list -o json: (13.4478429s)
functional_test.go:1490: Took "13.4480575s" to run "out/minikube-windows-amd64.exe -p functional-869300 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (13.45s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (21.16s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 image load --daemon gcr.io/google-containers/addon-resizer:functional-869300 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-windows-amd64.exe -p functional-869300 image load --daemon gcr.io/google-containers/addon-resizer:functional-869300 --alsologtostderr: (13.2502648s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-869300 image ls: (7.9101088s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (21.16s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (30.9s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (4.0567992s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-869300
functional_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 image load --daemon gcr.io/google-containers/addon-resizer:functional-869300 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-windows-amd64.exe -p functional-869300 image load --daemon gcr.io/google-containers/addon-resizer:functional-869300 --alsologtostderr: (18.0939788s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-869300 image ls: (8.4406546s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (30.90s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (10.18s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-869300 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-869300 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-869300 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 7716: OpenProcess: The parameter is incorrect.
helpers_test.go:508: unable to kill pid 5588: TerminateProcess: Access is denied.
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-869300 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (10.18s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-869300 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (15.91s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-869300 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [1f725623-e9ed-435d-bf83-41a04da11639] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [1f725623-e9ed-435d-bf83-41a04da11639] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 15.0226224s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (15.91s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (10.6s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 image save gcr.io/google-containers/addon-resizer:functional-869300 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-windows-amd64.exe -p functional-869300 image save gcr.io/google-containers/addon-resizer:functional-869300 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (10.6012945s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (10.60s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-869300 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 5308: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

TestFunctional/parallel/ProfileCmd/profile_not_create (12.33s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1271: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (11.8158837s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (12.33s)

TestFunctional/parallel/ImageCommands/ImageRemove (17.59s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 image rm gcr.io/google-containers/addon-resizer:functional-869300 --alsologtostderr
functional_test.go:391: (dbg) Done: out/minikube-windows-amd64.exe -p functional-869300 image rm gcr.io/google-containers/addon-resizer:functional-869300 --alsologtostderr: (9.013853s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-869300 image ls: (8.5736974s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (17.59s)

TestFunctional/parallel/ProfileCmd/profile_list (12.13s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1306: (dbg) Done: out/minikube-windows-amd64.exe profile list: (11.7811305s)
functional_test.go:1311: Took "11.7812881s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1325: Took "348.9257ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (12.13s)

TestFunctional/parallel/ProfileCmd/profile_json_output (11.92s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1357: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (11.6468287s)
functional_test.go:1362: Took "11.6469867s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1375: Took "273.1719ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (11.92s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (20.07s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-windows-amd64.exe -p functional-869300 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (11.5026695s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-869300 image ls: (8.5664539s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (20.07s)

TestFunctional/parallel/DockerEnv/powershell (48.2s)
=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:495: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-869300 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-869300"
functional_test.go:495: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-869300 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-869300": (31.0665131s)
functional_test.go:518: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-869300 docker-env | Invoke-Expression ; docker images"
functional_test.go:518: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-869300 docker-env | Invoke-Expression ; docker images": (17.1112683s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (48.20s)
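
Note: the pattern under test is evaluating `minikube docker-env` in the same PowerShell session so the following `docker images` talks to the VM's daemon. A Go sketch reproducing the logged invocation:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Evaluate docker-env and run docker in one PowerShell session,
        // exactly as the test does, so docker targets the VM's daemon.
        script := "out/minikube-windows-amd64.exe -p functional-869300 docker-env" +
            " | Invoke-Expression ; docker images"
        out, err := exec.Command("powershell.exe",
            "-NoProfile", "-NonInteractive", script).CombinedOutput()
        fmt.Println(string(out), err)
    }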

TestFunctional/parallel/UpdateContextCmd/no_changes (2.94s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-869300 update-context --alsologtostderr -v=2: (2.939989s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (2.94s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.5s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-869300 update-context --alsologtostderr -v=2: (2.5021991s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.50s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (2.47s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-869300 update-context --alsologtostderr -v=2: (2.4707889s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (2.47s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (11.11s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-869300
functional_test.go:423: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-869300 image save --daemon gcr.io/google-containers/addon-resizer:functional-869300 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-windows-amd64.exe -p functional-869300 image save --daemon gcr.io/google-containers/addon-resizer:functional-869300 --alsologtostderr: (10.0223766s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-869300
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (11.11s)

TestFunctional/delete_addon-resizer_images (0.5s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-869300
--- PASS: TestFunctional/delete_addon-resizer_images (0.50s)

TestFunctional/delete_my-image_image (0.18s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-869300
--- PASS: TestFunctional/delete_my-image_image (0.18s)

TestFunctional/delete_minikube_cached_images (0.19s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-869300
--- PASS: TestFunctional/delete_minikube_cached_images (0.19s)

TestMultiControlPlane/serial/StartCluster (719.06s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-136200 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv
E0501 02:48:37.994562   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\client.crt: The system cannot find the path specified.
E0501 02:48:38.009156   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\client.crt: The system cannot find the path specified.
E0501 02:48:38.024412   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\client.crt: The system cannot find the path specified.
E0501 02:48:38.056369   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\client.crt: The system cannot find the path specified.
E0501 02:48:38.103816   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\client.crt: The system cannot find the path specified.
E0501 02:48:38.198310   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\client.crt: The system cannot find the path specified.
E0501 02:48:38.374970   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\client.crt: The system cannot find the path specified.
E0501 02:48:38.709871   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\client.crt: The system cannot find the path specified.
E0501 02:48:39.362044   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\client.crt: The system cannot find the path specified.
E0501 02:48:40.650250   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\client.crt: The system cannot find the path specified.
E0501 02:48:43.216653   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\client.crt: The system cannot find the path specified.
E0501 02:48:48.348212   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\client.crt: The system cannot find the path specified.
E0501 02:48:58.589410   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\client.crt: The system cannot find the path specified.
E0501 02:49:19.071580   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\client.crt: The system cannot find the path specified.
E0501 02:50:00.038895   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\client.crt: The system cannot find the path specified.
E0501 02:51:21.967703   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\client.crt: The system cannot find the path specified.
E0501 02:51:34.964527   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt: The system cannot find the path specified.
E0501 02:53:37.994124   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\client.crt: The system cannot find the path specified.
E0501 02:54:05.822078   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\client.crt: The system cannot find the path specified.
E0501 02:54:38.189363   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt: The system cannot find the path specified.
E0501 02:56:34.966630   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt: The system cannot find the path specified.
E0501 02:58:37.994616   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\client.crt: The system cannot find the path specified.
ha_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-136200 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv: (11m22.2822259s)
ha_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-136200 status -v=7 --alsologtostderr
ha_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe -p ha-136200 status -v=7 --alsologtostderr: (36.7725465s)
--- PASS: TestMultiControlPlane/serial/StartCluster (719.06s)
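
For reference, the --ha flag in the start line above is what makes this a multi-control-plane cluster; the simplest external check is that status reports more than one "type: Control Plane" node. A hedged sketch of that check (profile name from this run, minikube assumed on PATH; not part of the test suite):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// minikube status prints a "type: Control Plane" / "type: Worker" line per node.
		out, _ := exec.Command("minikube", "-p", "ha-136200", "status").CombinedOutput()
		cp := strings.Count(string(out), "type: Control Plane")
		fmt.Printf("control-plane nodes reported: %d\n", cp)
		if cp < 2 {
			fmt.Println("not an HA topology")
		}
	}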

TestMultiControlPlane/serial/DeployApp (12.21s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-136200 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-136200 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-136200 -- rollout status deployment/busybox: (3.7172097s)
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-136200 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-136200 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-136200 -- exec busybox-fc5497c4f-2gr4g -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-136200 -- exec busybox-fc5497c4f-2gr4g -- nslookup kubernetes.io: (2.0304205s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-136200 -- exec busybox-fc5497c4f-6mlkh -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-136200 -- exec busybox-fc5497c4f-pc6wt -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-136200 -- exec busybox-fc5497c4f-2gr4g -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-136200 -- exec busybox-fc5497c4f-6mlkh -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-136200 -- exec busybox-fc5497c4f-pc6wt -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-136200 -- exec busybox-fc5497c4f-2gr4g -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-136200 -- exec busybox-fc5497c4f-6mlkh -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-136200 -- exec busybox-fc5497c4f-pc6wt -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (12.21s)
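
The exec loop above is the whole DNS assertion: every busybox replica must resolve an external name, the cluster-short name, and the cluster FQDN, which exercises cluster DNS from whichever node each replica was scheduled on. A condensed sketch of the same fan-out, assuming kubectl already points at the cluster (the pod names are the ones from this run and will differ between runs):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		pods := []string{"busybox-fc5497c4f-2gr4g", "busybox-fc5497c4f-6mlkh", "busybox-fc5497c4f-pc6wt"}
		names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
		for _, name := range names {
			for _, pod := range pods {
				// nslookup inside the pod proves resolution works from that pod's node.
				out, err := exec.Command("kubectl", "exec", pod, "--", "nslookup", name).CombinedOutput()
				fmt.Printf("%s -> %s (err=%v)\n%s", pod, name, err, out)
			}
		}
	}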

TestMultiControlPlane/serial/NodeLabels (0.19s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-136200 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.19s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (29.02s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (29.0176225s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (29.02s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (21.3s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (21.2997402s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (21.30s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (28.78s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (28.7789397s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (28.78s)

TestImageBuild/serial/Setup (200.06s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-222200 --driver=hyperv
E0501 03:21:34.966909   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt: The system cannot find the path specified.
E0501 03:21:41.215166   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\client.crt: The system cannot find the path specified.
E0501 03:23:38.006854   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\client.crt: The system cannot find the path specified.
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-222200 --driver=hyperv: (3m20.0592988s)
--- PASS: TestImageBuild/serial/Setup (200.06s)

TestImageBuild/serial/NormalBuild (9.81s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-222200
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-222200: (9.8093344s)
--- PASS: TestImageBuild/serial/NormalBuild (9.81s)

TestImageBuild/serial/BuildWithBuildArg (9.12s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-222200
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-222200: (9.1215869s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (9.12s)

TestImageBuild/serial/BuildWithDockerIgnore (7.8s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-222200
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-222200: (7.7953375s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (7.80s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.62s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-222200
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-222200: (7.616166s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.62s)

TestJSONOutput/start/Command (243.69s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-175300 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv
E0501 03:26:34.978759   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt: The system cannot find the path specified.
E0501 03:27:58.224692   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt: The system cannot find the path specified.
E0501 03:28:38.003683   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-175300 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv: (4m3.6842751s)
--- PASS: TestJSONOutput/start/Command (243.69s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (8.01s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-175300 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-175300 --output=json --user=testUser: (8.0123265s)
--- PASS: TestJSONOutput/pause/Command (8.01s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (7.93s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-175300 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-175300 --output=json --user=testUser: (7.9268028s)
--- PASS: TestJSONOutput/unpause/Command (7.93s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (40.52s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-175300 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-175300 --output=json --user=testUser: (40.5179866s)
--- PASS: TestJSONOutput/stop/Command (40.52s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (1.57s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-877800 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-877800 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (320.2087ms)

-- stdout --
	{"specversion":"1.0","id":"246f1d0e-8f56-436e-a359-991f0fa1dc83","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-877800] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"17eacb25-4242-4fd5-b2ed-66bf250c6f88","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube6\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"b1e1664c-6b9d-4287-9d6b-e8de4c2e40cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6b57c100-8843-4e5e-9862-3820d70e8c24","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"bf41543f-a473-4fa1-b861-9c10db53a4c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18779"}}
	{"specversion":"1.0","id":"64c77669-76aa-4629-be74-5cb46d7711a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"268bec48-ff96-4a6e-af1c-04fdaf52556c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
** stderr ** 
	W0501 03:30:59.693825    4884 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:175: Cleaning up "json-output-error-877800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-877800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-877800: (1.2493436s)
--- PASS: TestErrorJSONOutput (1.57s)
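
Each stdout line above is a CloudEvents-style JSON object whose type ends in step, info, or error, with a flat string map in data. A small decoder sketch whose field names simply mirror the lines shown here (this is not minikube's own schema package; the sample line is the error event from this test):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	type minikubeEvent struct {
		SpecVersion     string            `json:"specversion"`
		ID              string            `json:"id"`
		Source          string            `json:"source"`
		Type            string            `json:"type"` // io.k8s.sigs.minikube.{step,info,error}
		DataContentType string            `json:"datacontenttype"`
		Data            map[string]string `json:"data"` // payloads above are flat string maps
	}

	func main() {
		line := `{"specversion":"1.0","id":"268bec48-ff96-4a6e-af1c-04fdaf52556c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`
		var ev minikubeEvent
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			panic(err)
		}
		fmt.Println(ev.Type, ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
	}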

TestMainNoArgs (0.29s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.29s)

TestMinikubeProfile (529.34s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-010200 --driver=hyperv
E0501 03:31:34.970999   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt: The system cannot find the path specified.
E0501 03:33:38.018178   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-010200 --driver=hyperv: (3m19.4902795s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-010200 --driver=hyperv
E0501 03:36:34.982074   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-010200 --driver=hyperv: (3m22.7413192s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-010200
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (19.2351492s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-010200
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
E0501 03:38:21.234592   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\client.crt: The system cannot find the path specified.
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (19.2903651s)
helpers_test.go:175: Cleaning up "second-010200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-010200
E0501 03:38:38.017474   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-010200: (46.1039471s)
helpers_test.go:175: Cleaning up "first-010200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-010200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-010200: (41.5260918s)
--- PASS: TestMinikubeProfile (529.34s)

TestMountStart/serial/StartWithMountFirst (156.38s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-694500 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv
E0501 03:41:34.985094   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-694500 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m35.3785725s)
--- PASS: TestMountStart/serial/StartWithMountFirst (156.38s)

TestMountStart/serial/VerifyMountFirst (9.59s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-694500 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-694500 ssh -- ls /minikube-host: (9.5930649s)
--- PASS: TestMountStart/serial/VerifyMountFirst (9.59s)

TestMountStart/serial/StartWithMountSecond (157.76s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-694500 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv
E0501 03:43:38.013265   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\client.crt: The system cannot find the path specified.
E0501 03:44:38.237688   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-694500 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m36.7471436s)
--- PASS: TestMountStart/serial/StartWithMountSecond (157.76s)

TestMountStart/serial/VerifyMountSecond (9.67s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-694500 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-694500 ssh -- ls /minikube-host: (9.6723296s)
--- PASS: TestMountStart/serial/VerifyMountSecond (9.67s)

TestMountStart/serial/DeleteFirst (27.56s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-694500 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-694500 --alsologtostderr -v=5: (27.5625445s)
--- PASS: TestMountStart/serial/DeleteFirst (27.56s)

TestMountStart/serial/VerifyMountPostDelete (9.55s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-694500 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-694500 ssh -- ls /minikube-host: (9.5518377s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (9.55s)

TestMountStart/serial/Stop (30.29s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-694500
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-694500: (30.2887156s)
--- PASS: TestMountStart/serial/Stop (30.29s)

TestMountStart/serial/RestartStopped (118.4s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-694500
E0501 03:46:34.985102   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt: The system cannot find the path specified.
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-694500: (1m57.390188s)
--- PASS: TestMountStart/serial/RestartStopped (118.40s)

TestMountStart/serial/VerifyMountPostStop (9.41s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-694500 ssh -- ls /minikube-host
E0501 03:48:38.018076   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\client.crt: The system cannot find the path specified.
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-694500 ssh -- ls /minikube-host: (9.406889s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (9.41s)

TestMultiNode/serial/FreshStart2Nodes (428.94s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-289800 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv
E0501 03:51:34.983106   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt: The system cannot find the path specified.
E0501 03:53:38.025494   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\client.crt: The system cannot find the path specified.
E0501 03:55:01.254608   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\client.crt: The system cannot find the path specified.
multinode_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-289800 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv: (6m45.1581502s)
multinode_test.go:102: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-289800 status --alsologtostderr
multinode_test.go:102: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-289800 status --alsologtostderr: (23.7793202s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (428.94s)

TestMultiNode/serial/DeployApp2Nodes (9.04s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-289800 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-289800 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-289800 -- rollout status deployment/busybox: (2.5894735s)
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-289800 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-289800 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-289800 -- exec busybox-fc5497c4f-cc6mk -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-289800 -- exec busybox-fc5497c4f-cc6mk -- nslookup kubernetes.io: (2.0068941s)
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-289800 -- exec busybox-fc5497c4f-tbxxx -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-289800 -- exec busybox-fc5497c4f-cc6mk -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-289800 -- exec busybox-fc5497c4f-tbxxx -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-289800 -- exec busybox-fc5497c4f-cc6mk -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-289800 -- exec busybox-fc5497c4f-tbxxx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (9.04s)

TestMultiNode/serial/AddNode (231.73s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-289800 -v 3 --alsologtostderr
E0501 03:58:38.027332   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\client.crt: The system cannot find the path specified.
multinode_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-289800 -v 3 --alsologtostderr: (3m16.1177057s)
multinode_test.go:127: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-289800 status --alsologtostderr
multinode_test.go:127: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-289800 status --alsologtostderr: (35.6124937s)
--- PASS: TestMultiNode/serial/AddNode (231.73s)

TestMultiNode/serial/MultiNodeLabels (0.18s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-289800 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.18s)

TestMultiNode/serial/ProfileList (9.84s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
E0501 04:01:18.245998   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt: The system cannot find the path specified.
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (9.8359671s)
--- PASS: TestMultiNode/serial/ProfileList (9.84s)

TestMultiNode/serial/CopyFile (362s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-289800 status --output json --alsologtostderr
E0501 04:01:34.990164   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt: The system cannot find the path specified.
multinode_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-289800 status --output json --alsologtostderr: (35.7368058s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-289800 cp testdata\cp-test.txt multinode-289800:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-289800 cp testdata\cp-test.txt multinode-289800:/home/docker/cp-test.txt: (9.4836331s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-289800 ssh -n multinode-289800 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-289800 ssh -n multinode-289800 "sudo cat /home/docker/cp-test.txt": (9.4405561s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-289800 cp multinode-289800:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile4254052504\001\cp-test_multinode-289800.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-289800 cp multinode-289800:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile4254052504\001\cp-test_multinode-289800.txt: (9.4930861s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-289800 ssh -n multinode-289800 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-289800 ssh -n multinode-289800 "sudo cat /home/docker/cp-test.txt": (9.3871271s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-289800 cp multinode-289800:/home/docker/cp-test.txt multinode-289800-m02:/home/docker/cp-test_multinode-289800_multinode-289800-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-289800 cp multinode-289800:/home/docker/cp-test.txt multinode-289800-m02:/home/docker/cp-test_multinode-289800_multinode-289800-m02.txt: (16.4895013s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-289800 ssh -n multinode-289800 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-289800 ssh -n multinode-289800 "sudo cat /home/docker/cp-test.txt": (9.4439085s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-289800 ssh -n multinode-289800-m02 "sudo cat /home/docker/cp-test_multinode-289800_multinode-289800-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-289800 ssh -n multinode-289800-m02 "sudo cat /home/docker/cp-test_multinode-289800_multinode-289800-m02.txt": (9.5135839s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-289800 cp multinode-289800:/home/docker/cp-test.txt multinode-289800-m03:/home/docker/cp-test_multinode-289800_multinode-289800-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-289800 cp multinode-289800:/home/docker/cp-test.txt multinode-289800-m03:/home/docker/cp-test_multinode-289800_multinode-289800-m03.txt: (16.3674866s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-289800 ssh -n multinode-289800 "sudo cat /home/docker/cp-test.txt"
E0501 04:03:38.034656   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-289800 ssh -n multinode-289800 "sudo cat /home/docker/cp-test.txt": (9.4262696s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-289800 ssh -n multinode-289800-m03 "sudo cat /home/docker/cp-test_multinode-289800_multinode-289800-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-289800 ssh -n multinode-289800-m03 "sudo cat /home/docker/cp-test_multinode-289800_multinode-289800-m03.txt": (9.414225s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-289800 cp testdata\cp-test.txt multinode-289800-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-289800 cp testdata\cp-test.txt multinode-289800-m02:/home/docker/cp-test.txt: (9.5419878s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-289800 ssh -n multinode-289800-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-289800 ssh -n multinode-289800-m02 "sudo cat /home/docker/cp-test.txt": (9.7732986s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-289800 cp multinode-289800-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile4254052504\001\cp-test_multinode-289800-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-289800 cp multinode-289800-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile4254052504\001\cp-test_multinode-289800-m02.txt: (9.5010787s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-289800 ssh -n multinode-289800-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-289800 ssh -n multinode-289800-m02 "sudo cat /home/docker/cp-test.txt": (9.4428962s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-289800 cp multinode-289800-m02:/home/docker/cp-test.txt multinode-289800:/home/docker/cp-test_multinode-289800-m02_multinode-289800.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-289800 cp multinode-289800-m02:/home/docker/cp-test.txt multinode-289800:/home/docker/cp-test_multinode-289800-m02_multinode-289800.txt: (16.4057713s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-289800 ssh -n multinode-289800-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-289800 ssh -n multinode-289800-m02 "sudo cat /home/docker/cp-test.txt": (9.4713767s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-289800 ssh -n multinode-289800 "sudo cat /home/docker/cp-test_multinode-289800-m02_multinode-289800.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-289800 ssh -n multinode-289800 "sudo cat /home/docker/cp-test_multinode-289800-m02_multinode-289800.txt": (9.4066738s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-289800 cp multinode-289800-m02:/home/docker/cp-test.txt multinode-289800-m03:/home/docker/cp-test_multinode-289800-m02_multinode-289800-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-289800 cp multinode-289800-m02:/home/docker/cp-test.txt multinode-289800-m03:/home/docker/cp-test_multinode-289800-m02_multinode-289800-m03.txt: (16.364923s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-289800 ssh -n multinode-289800-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-289800 ssh -n multinode-289800-m02 "sudo cat /home/docker/cp-test.txt": (9.5160074s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-289800 ssh -n multinode-289800-m03 "sudo cat /home/docker/cp-test_multinode-289800-m02_multinode-289800-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-289800 ssh -n multinode-289800-m03 "sudo cat /home/docker/cp-test_multinode-289800-m02_multinode-289800-m03.txt": (9.3695073s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-289800 cp testdata\cp-test.txt multinode-289800-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-289800 cp testdata\cp-test.txt multinode-289800-m03:/home/docker/cp-test.txt: (9.4623967s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-289800 ssh -n multinode-289800-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-289800 ssh -n multinode-289800-m03 "sudo cat /home/docker/cp-test.txt": (9.5622962s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-289800 cp multinode-289800-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile4254052504\001\cp-test_multinode-289800-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-289800 cp multinode-289800-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile4254052504\001\cp-test_multinode-289800-m03.txt: (9.3810269s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-289800 ssh -n multinode-289800-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-289800 ssh -n multinode-289800-m03 "sudo cat /home/docker/cp-test.txt": (9.4124908s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-289800 cp multinode-289800-m03:/home/docker/cp-test.txt multinode-289800:/home/docker/cp-test_multinode-289800-m03_multinode-289800.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-289800 cp multinode-289800-m03:/home/docker/cp-test.txt multinode-289800:/home/docker/cp-test_multinode-289800-m03_multinode-289800.txt: (16.567007s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-289800 ssh -n multinode-289800-m03 "sudo cat /home/docker/cp-test.txt"
E0501 04:06:34.999222   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-289800 ssh -n multinode-289800-m03 "sudo cat /home/docker/cp-test.txt": (9.4584641s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-289800 ssh -n multinode-289800 "sudo cat /home/docker/cp-test_multinode-289800-m03_multinode-289800.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-289800 ssh -n multinode-289800 "sudo cat /home/docker/cp-test_multinode-289800-m03_multinode-289800.txt": (9.6343713s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-289800 cp multinode-289800-m03:/home/docker/cp-test.txt multinode-289800-m02:/home/docker/cp-test_multinode-289800-m03_multinode-289800-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-289800 cp multinode-289800-m03:/home/docker/cp-test.txt multinode-289800-m02:/home/docker/cp-test_multinode-289800-m03_multinode-289800-m02.txt: (16.5880768s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-289800 ssh -n multinode-289800-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-289800 ssh -n multinode-289800-m03 "sudo cat /home/docker/cp-test.txt": (9.4637651s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-289800 ssh -n multinode-289800-m02 "sudo cat /home/docker/cp-test_multinode-289800-m03_multinode-289800-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-289800 ssh -n multinode-289800-m02 "sudo cat /home/docker/cp-test_multinode-289800-m03_multinode-289800-m02.txt": (9.4571879s)
--- PASS: TestMultiNode/serial/CopyFile (362.00s)
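
The sequence above is a full copy matrix: seed each node from the host, copy node-to-node in every direction, and cat the file back over ssh after each hop so the bytes are verified on the receiving side. A condensed sketch of the same matrix (profile and node names from this run; error handling reduced to panic; not the test's helper code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run invokes minikube with the given arguments and panics on failure.
	func run(args ...string) string {
		out, err := exec.Command("minikube", args...).CombinedOutput()
		if err != nil {
			panic(fmt.Sprintf("%v: %s", err, out))
		}
		return string(out)
	}

	func main() {
		profile := "multinode-289800"
		nodes := []string{"multinode-289800", "multinode-289800-m02", "multinode-289800-m03"}
		for _, src := range nodes {
			// Seed the source node from the host.
			run("-p", profile, "cp", "testdata/cp-test.txt", src+":/home/docker/cp-test.txt")
			for _, dst := range nodes {
				if dst == src {
					continue
				}
				// Node-to-node copy, then read it back over ssh to verify.
				remote := fmt.Sprintf("/home/docker/cp-test_%s_%s.txt", src, dst)
				run("-p", profile, "cp", src+":/home/docker/cp-test.txt", dst+":"+remote)
				fmt.Print(run("-p", profile, "ssh", "-n", dst, "sudo cat "+remote))
			}
		}
	}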

TestMultiNode/serial/StopNode (77.45s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-289800 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-289800 node stop m03: (25.2064529s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-289800 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-289800 status: exit status 7 (26.02495s)

-- stdout --
	multinode-289800
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-289800-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-289800-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	W0501 04:07:52.143883    7060 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-289800 status --alsologtostderr
E0501 04:08:38.023131   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\client.crt: The system cannot find the path specified.
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-289800 status --alsologtostderr: exit status 7 (26.2107644s)

                                                
                                                
-- stdout --
	multinode-289800
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-289800-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-289800-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0501 04:08:18.161475    6560 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0501 04:08:18.252616    6560 out.go:291] Setting OutFile to fd 940 ...
	I0501 04:08:18.253621    6560 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 04:08:18.253621    6560 out.go:304] Setting ErrFile to fd 816...
	I0501 04:08:18.253621    6560 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 04:08:18.269003    6560 out.go:298] Setting JSON to false
	I0501 04:08:18.269076    6560 mustload.go:65] Loading cluster: multinode-289800
	I0501 04:08:18.269258    6560 notify.go:220] Checking for updates...
	I0501 04:08:18.269995    6560 config.go:182] Loaded profile config "multinode-289800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 04:08:18.270053    6560 status.go:255] checking status of multinode-289800 ...
	I0501 04:08:18.271169    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 04:08:20.469498    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:08:20.469580    6560 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:08:20.469580    6560 status.go:330] multinode-289800 host status = "Running" (err=<nil>)
	I0501 04:08:20.469580    6560 host.go:66] Checking if "multinode-289800" exists ...
	I0501 04:08:20.470326    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 04:08:22.640717    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:08:22.640717    6560 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:08:22.640717    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 04:08:25.263490    6560 main.go:141] libmachine: [stdout =====>] : 172.28.209.152
	
	I0501 04:08:25.263490    6560 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:08:25.263490    6560 host.go:66] Checking if "multinode-289800" exists ...
	I0501 04:08:25.278904    6560 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 04:08:25.278904    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800 ).state
	I0501 04:08:27.409382    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:08:27.409382    6560 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:08:27.409382    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800 ).networkadapters[0]).ipaddresses[0]
	I0501 04:08:30.006016    6560 main.go:141] libmachine: [stdout =====>] : 172.28.209.152
	
	I0501 04:08:30.006814    6560 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:08:30.007016    6560 sshutil.go:53] new ssh client: &{IP:172.28.209.152 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-289800\id_rsa Username:docker}
	I0501 04:08:30.110019    6560 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.8310784s)
	I0501 04:08:30.127619    6560 ssh_runner.go:195] Run: systemctl --version
	I0501 04:08:30.153058    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 04:08:30.183646    6560 kubeconfig.go:125] found "multinode-289800" server: "https://172.28.209.152:8443"
	I0501 04:08:30.183801    6560 api_server.go:166] Checking apiserver status ...
	I0501 04:08:30.200771    6560 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 04:08:30.245491    6560 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2011/cgroup
	W0501 04:08:30.266100    6560 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2011/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0501 04:08:30.283037    6560 ssh_runner.go:195] Run: ls
	I0501 04:08:30.291841    6560 api_server.go:253] Checking apiserver healthz at https://172.28.209.152:8443/healthz ...
	I0501 04:08:30.299474    6560 api_server.go:279] https://172.28.209.152:8443/healthz returned 200:
	ok
	I0501 04:08:30.299474    6560 status.go:422] multinode-289800 apiserver status = Running (err=<nil>)
	I0501 04:08:30.299474    6560 status.go:257] multinode-289800 status: &{Name:multinode-289800 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0501 04:08:30.299474    6560 status.go:255] checking status of multinode-289800-m02 ...
	I0501 04:08:30.300502    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 04:08:32.450118    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:08:32.450118    6560 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:08:32.450118    6560 status.go:330] multinode-289800-m02 host status = "Running" (err=<nil>)
	I0501 04:08:32.450118    6560 host.go:66] Checking if "multinode-289800-m02" exists ...
	I0501 04:08:32.451274    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 04:08:34.608475    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:08:34.608475    6560 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:08:34.609576    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 04:08:37.222839    6560 main.go:141] libmachine: [stdout =====>] : 172.28.219.162
	
	I0501 04:08:37.222839    6560 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:08:37.223417    6560 host.go:66] Checking if "multinode-289800-m02" exists ...
	I0501 04:08:37.238489    6560 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 04:08:37.238489    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m02 ).state
	I0501 04:08:39.372039    6560 main.go:141] libmachine: [stdout =====>] : Running
	
	I0501 04:08:39.372039    6560 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:08:39.372797    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-289800-m02 ).networkadapters[0]).ipaddresses[0]
	I0501 04:08:41.929302    6560 main.go:141] libmachine: [stdout =====>] : 172.28.219.162
	
	I0501 04:08:41.929302    6560 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:08:41.929919    6560 sshutil.go:53] new ssh client: &{IP:172.28.219.162 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-289800-m02\id_rsa Username:docker}
	I0501 04:08:42.043964    6560 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.8047381s)
	I0501 04:08:42.057598    6560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 04:08:42.085095    6560 status.go:257] multinode-289800-m02 status: &{Name:multinode-289800-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0501 04:08:42.085095    6560 status.go:255] checking status of multinode-289800-m03 ...
	I0501 04:08:42.085844    6560 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-289800-m03 ).state
	I0501 04:08:44.214616    6560 main.go:141] libmachine: [stdout =====>] : Off
	
	I0501 04:08:44.214616    6560 main.go:141] libmachine: [stderr =====>] : 
	I0501 04:08:44.215233    6560 status.go:330] multinode-289800-m03 host status = "Stopped" (err=<nil>)
	I0501 04:08:44.215233    6560 status.go:343] host is not running, skipping remaining checks
	I0501 04:08:44.215233    6560 status.go:257] multinode-289800-m03 status: &{Name:multinode-289800-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (77.45s)
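
The stderr trace above shows how "minikube status" probes each node on the hyperv driver: it shells out to PowerShell once per VM and reads the state from stdout. Below is a minimal Go sketch of that query, with the command text copied from the main.go:141 lines above (the wrapper code is assumed, not libmachine's):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // vmState mirrors: powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM <name> ).state
    func vmState(name string) (string, error) {
        cmd := exec.Command(
            `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
            "-NoProfile", "-NonInteractive",
            fmt.Sprintf(`( Hyper-V\Get-VM %s ).state`, name))
        out, err := cmd.Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil // "Running", "Off", ...
    }

    func main() {
        for _, vm := range []string{"multinode-289800", "multinode-289800-m02", "multinode-289800-m03"} {
            state, err := vmState(vm)
            fmt.Println(vm, state, err)
        }
    }

A stopped VM reports "Off", which status maps to Host:Stopped and then skips the kubelet/apiserver checks, exactly as in the m03 portion of the trace above.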

                                                
                                    
TestMultiNode/serial/StartAfterStop (185.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-289800 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-289800 node start m03 -v=7 --alsologtostderr: (2m29.8294792s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-289800 status -v=7 --alsologtostderr
E0501 04:11:35.002294   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt: The system cannot find the path specified.
E0501 04:11:41.273618   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\client.crt: The system cannot find the path specified.
multinode_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-289800 status -v=7 --alsologtostderr: (35.8251079s)
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (185.84s)
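
The restart check above reduces to two commands: bring the stopped worker back with "minikube node start", then confirm the cluster lists it again via kubectl. A minimal Go sketch of that sequence (names taken from the log; the test at multinode_test.go:306 only asserts the kubectl command succeeds):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Restart the stopped worker (multinode_test.go:282 above).
        start := exec.Command("out/minikube-windows-amd64.exe",
            "-p", "multinode-289800", "node", "start", "m03")
        if out, err := start.CombinedOutput(); err != nil {
            panic(fmt.Sprintf("node start failed: %v\n%s", err, out))
        }
        // List the nodes; a healthy rejoin shows all three entries.
        out, err := exec.Command("kubectl", "get", "nodes").CombinedOutput()
        if err != nil {
            panic(err)
        }
        fmt.Printf("%s", out)
    }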

                                                
                                    
TestScheduledStopWindows (331.39s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-238800 --memory=2048 --driver=hyperv
E0501 04:33:38.039067   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-869300\client.crt: The system cannot find the path specified.
E0501 04:34:38.276480   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-238800 --memory=2048 --driver=hyperv: (3m18.209176s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-238800 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-238800 --schedule 5m: (10.8117297s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-238800 -n scheduled-stop-238800
scheduled_stop_test.go:191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-238800 -n scheduled-stop-238800: exit status 1 (10.020189s)

                                                
                                                
** stderr ** 
	W0501 04:35:30.936370   13960 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
scheduled_stop_test.go:191: status error: exit status 1 (may be ok)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-238800 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-238800 -- sudo systemctl show minikube-scheduled-stop --no-page: (9.6350976s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-238800 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-238800 --schedule 5s: (10.7852246s)
E0501 04:36:35.012933   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-238800
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-238800: exit status 7 (2.3917011s)

                                                
                                                
-- stdout --
	scheduled-stop-238800
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0501 04:37:01.395293    3916 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-238800 -n scheduled-stop-238800
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-238800 -n scheduled-stop-238800: exit status 7 (2.3655185s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0501 04:37:03.780305    1496 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-238800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-238800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-238800: (27.1472447s)
--- PASS: TestScheduledStopWindows (331.39s)
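
The scheduled-stop flow above has three parts: arm a timer with "minikube stop --schedule", optionally inspect the minikube-scheduled-stop systemd unit over ssh, and poll status until the host reports Stopped. A minimal Go sketch of the arm-and-poll portion (profile name from the log; the polling interval and bound are assumptions):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func run(args ...string) (string, error) {
        out, err := exec.Command("out/minikube-windows-amd64.exe", args...).CombinedOutput()
        return string(out), err
    }

    func main() {
        profile := "scheduled-stop-238800"
        // Arm a 5s scheduled stop (scheduled_stop_test.go:137 above).
        if out, err := run("stop", "-p", profile, "--schedule", "5s"); err != nil {
            panic(fmt.Sprintf("%v\n%s", err, out))
        }
        // Poll the host state; status exits 7 once the VM is down, so the
        // error is ignored and only the printed state matters (see above).
        for i := 0; i < 30; i++ {
            out, _ := run("status", "--format={{.Host}}", "-p", profile)
            if strings.Contains(out, "Stopped") {
                fmt.Println("host stopped")
                return
            }
            time.Sleep(10 * time.Second)
        }
        panic("host never reached Stopped")
    }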

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.68s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.68s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (973.81s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.2649083491.exe start -p stopped-upgrade-120700 --memory=2200 --vm-driver=hyperv
version_upgrade_test.go:183: (dbg) Done: C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.2649083491.exe start -p stopped-upgrade-120700 --memory=2200 --vm-driver=hyperv: (8m5.7999239s)
version_upgrade_test.go:192: (dbg) Run:  C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.2649083491.exe -p stopped-upgrade-120700 stop
version_upgrade_test.go:192: (dbg) Done: C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.2649083491.exe -p stopped-upgrade-120700 stop: (36.4348099s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-120700 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
E0501 04:46:35.004760   14288 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-286100\client.crt: The system cannot find the path specified.
version_upgrade_test.go:198: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-120700 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (7m31.5589097s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (973.81s)
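
The upgrade path above is three steps with two binaries: provision the cluster with the old release, stop it with that same release, then start the stopped cluster with the binary under test. A minimal Go sketch of the sequence (binary paths and flags copied from the log; error handling condensed):

    package main

    import "os/exec"

    func must(cmd *exec.Cmd) {
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }

    func main() {
        oldBin := `C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.2649083491.exe`
        newBin := "out/minikube-windows-amd64.exe"
        profile := "stopped-upgrade-120700"
        // 1. Provision with the old release (version_upgrade_test.go:183).
        must(exec.Command(oldBin, "start", "-p", profile, "--memory=2200", "--vm-driver=hyperv"))
        // 2. Stop with the same old release (version_upgrade_test.go:192).
        must(exec.Command(oldBin, "-p", profile, "stop"))
        // 3. Restart the stopped cluster with the new binary (version_upgrade_test.go:198).
        must(exec.Command(newBin, "start", "-p", profile, "--memory=2200", "--driver=hyperv"))
    }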

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (9.77s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-120700
version_upgrade_test.go:206: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-120700: (9.7705682s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (9.77s)

                                                
                                    

Test skip (30/201)

TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0/binaries (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (300.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-869300 --alsologtostderr -v=1]
functional_test.go:912: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-869300 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 4384: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.02s)
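
The dashboard check above starts "minikube dashboard --url" as a background daemon and watches its stdout for a URL; when none appears within the timeout it kills the process, which is where the "unable to terminate pid" message comes from. A minimal Go sketch of such a watch loop (the timeout value and the URL test are assumptions, not the suite's exact logic):

    package main

    import (
        "bufio"
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        cmd := exec.Command("out/minikube-windows-amd64.exe",
            "dashboard", "--url", "--port", "36195", "-p", "functional-869300")
        stdout, err := cmd.StdoutPipe()
        if err != nil {
            panic(err)
        }
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        urlCh := make(chan string, 1)
        go func() {
            // Scan the daemon's output for the first URL-looking line.
            sc := bufio.NewScanner(stdout)
            for sc.Scan() {
                if line := sc.Text(); strings.HasPrefix(line, "http") {
                    urlCh <- line
                    return
                }
            }
        }()
        select {
        case url := <-urlCh:
            fmt.Println("dashboard at", url)
        case <-time.After(5 * time.Minute):
            fmt.Println("output didn't produce a URL")
        }
        _ = cmd.Process.Kill() // tear the daemon down either way
    }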

                                                
                                    
TestFunctional/parallel/DryRun (5.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-869300 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:970: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-869300 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0448777s)

                                                
                                                
-- stdout --
	* [functional-869300] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18779
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

                                                
                                                
-- /stdout --
** stderr ** 
	W0501 02:41:01.977315    5588 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0501 02:41:02.070443    5588 out.go:291] Setting OutFile to fd 816 ...
	I0501 02:41:02.071433    5588 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:41:02.071433    5588 out.go:304] Setting ErrFile to fd 736...
	I0501 02:41:02.071433    5588 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:41:02.103468    5588 out.go:298] Setting JSON to false
	I0501 02:41:02.110464    5588 start.go:129] hostinfo: {"hostname":"minikube6","uptime":104316,"bootTime":1714426945,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0501 02:41:02.110464    5588 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0501 02:41:02.114434    5588 out.go:177] * [functional-869300] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0501 02:41:02.121443    5588 notify.go:220] Checking for updates...
	I0501 02:41:02.124444    5588 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0501 02:41:02.128444    5588 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0501 02:41:02.130444    5588 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0501 02:41:02.134442    5588 out.go:177]   - MINIKUBE_LOCATION=18779
	I0501 02:41:02.137444    5588 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0501 02:41:02.143441    5588 config.go:182] Loaded profile config "functional-869300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:41:02.144445    5588 driver.go:392] Setting default libvirt URI to qemu:///system

                                                
                                                
** /stderr **
functional_test.go:976: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/DryRun (5.04s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (5.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-869300 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-869300 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0325168s)

                                                
                                                
-- stdout --
	* [functional-869300] minikube v1.33.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18779
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

                                                
                                                
-- /stdout --
** stderr ** 
	W0501 02:41:07.055367    1720 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0501 02:41:07.158377    1720 out.go:291] Setting OutFile to fd 984 ...
	I0501 02:41:07.159373    1720 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:41:07.159373    1720 out.go:304] Setting ErrFile to fd 760...
	I0501 02:41:07.159373    1720 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:41:07.190710    1720 out.go:298] Setting JSON to false
	I0501 02:41:07.196686    1720 start.go:129] hostinfo: {"hostname":"minikube6","uptime":104321,"bootTime":1714426945,"procs":196,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0501 02:41:07.196686    1720 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0501 02:41:07.199688    1720 out.go:177] * [functional-869300] minikube v1.33.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0501 02:41:07.205789    1720 notify.go:220] Checking for updates...
	I0501 02:41:07.208214    1720 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0501 02:41:07.210809    1720 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0501 02:41:07.213867    1720 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0501 02:41:07.215609    1720 out.go:177]   - MINIKUBE_LOCATION=18779
	I0501 02:41:07.218615    1720 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0501 02:41:07.221667    1720 config.go:182] Loaded profile config "functional-869300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0501 02:41:07.223610    1720 driver.go:392] Setting default libvirt URI to qemu:///system

                                                
                                                
** /stderr **
functional_test.go:1021: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/InternationalLanguage (5.03s)
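
The stdout above is the French banner ("sur" rather than "on"), which is what this test asserts: the same dry-run invocation under a non-English locale produces localized output. A minimal Go sketch of forcing the locale for one invocation (driving the translation via LANG/LC_ALL is an assumption about how minikube picks the language):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-windows-amd64.exe", "start",
            "-p", "functional-869300", "--dry-run", "--memory", "250MB")
        // Assumed: minikube selects its message catalog from the locale env.
        cmd.Env = append(os.Environ(), "LANG=fr_FR.UTF-8", "LC_ALL=fr_FR.UTF-8")
        out, _ := cmd.CombinedOutput() // exits non-zero on HyperV; see the skip note above
        fmt.Printf("%s", out)
    }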

                                                
                                    
TestFunctional/parallel/MountCmd (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on hyperv: https://github.com/kubernetes/minikube/issues/5029
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:230: The test WaitService/IngressIP is broken on hyperv https://github.com/kubernetes/minikube/issues/8381
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestScheduledStopUnix (0s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    